idx | question | answer
---|---|---
4,401 | Mathematical Statistics Videos | The folks at SLAC put videos of their lecture series online. Given that their audience is mostly physicists, they tend to be fairly mathematical.
SLUO Lecture Series (see the "Stat" links)
4,402 | Mathematical Statistics Videos | There is one called Math and probability for life sciences, but I haven't followed it, so I can't tell you if it's good or not.
4,403 | Mathematical Statistics Videos | This site from the École normale supérieure de Paris contains a lot of very interesting videos:
http://www.diffusion.ens.fr/index.php?res=themes&idtheme=30
I greatly encourage you to visit this site!
Among others, you will find there all the video presentations from the conference "Mathematical Foundations of Learning Theory" held in 2006.
4,404 | Mathematical Statistics Videos | I do not know at what level you want the videos to be, but I have heard good things about Khan Academy: http://www.khanacademy.org/#Statistics
4,405 | Mathematical Statistics Videos | Many of the Berkeley introductory statistics courses are available online (and on iTunes). Here's an example: Stats 2. You can find more here.
4,406 | Mathematical Statistics Videos | There is a new resource forming these days for talks about R:
https://www.r-bloggers.com/RUG/
It is compiled by the organizers of "R Users Groups" around the world (right now, mainly around the States).
It is a new project (just a few weeks old), but it already has good content and good people wanting to take part in it.
(source: r-bloggers.com)
4,407 | Mathematical Statistics Videos | There are a bunch of helpful video tutorials on basic statistics & data mining with R and Weka at SentimentMining.net.
http://sentimentmining.net/StatisticsVideos/
4,408 | Mathematical Statistics Videos | There is a series of Google Tech Talk videos called Stats 202 - Statistical Aspects of Data Mining.
4,409 | Mathematical Statistics Videos | I found the 'Probability Primer' lectures very useful and informative:
http://www.youtube.com/playlist?list=PL17567A1A3F5DB5E4&feature=plcp
A series of videos giving an introduction to some of the basic definitions, notation, and concepts one would encounter in a first-year graduate probability course.
4,410 | Mathematical Statistics Videos | The UCCS mathematics video archive has archived videos from a range of courses in mathematics. Several subjects called Mathematical Statistics I and Mathematical Statistics II are available. The main site requires a free registration to access.
Slightly more accessible are the videos for a subset of the courses on the UCCS MathOnline YouTube page. Two instances of this are as follows. The lecture style often involves Dr. Morrow working through problems on the whiteboard.
Linear Models
Taught by Dr. Greg Morrow, Math 483 from UCCS. Methods and results of linear algebra are developed to formulate and study a fundamental and widely applied area of statistics. Topics include generalized inverses, the multivariate normal distribution, and the general linear model. Applications focus on model building, design models, and computing methods. The Statistical Analysis System (software) is introduced as a tool for doing computations.
Course info: seems to use Introduction to Linear Regression by Montgomery, Peck, and Vining.
Mathematical Statistics 1
Greg Morrow's Math 481 course from Math Online at the University of Colorado in Colorado Springs.
Course description: Exponential, Beta, Gamma, Student, Fisher and Chi-square distributions are covered in this course, along with joint and conditional distributions, moment generating techniques, and transformations of random variables and vectors.
Course info
Syllabus from one year
Textbook: Mathematical Statistics and Data Analysis, 3rd ed., by John A. Rice.
4,411 | Mathematical Statistics Videos | MIT OpenCourseWare: Discrete Stochastic Processes.
Discrete stochastic processes are essentially probabilistic systems that evolve in time via random changes occurring at discrete fixed or random intervals. This course aims to help students acquire both the mathematical principles and the intuition necessary to create, analyze, and understand insightful models for a broad range of these processes. The range of areas for which discrete stochastic-process models are useful is constantly expanding, and includes many applications in engineering, physics, biology, operations research and finance.
The course includes videos, practice questions, slides, and an extensive set of notes.
4,412 | Mathematical Statistics Videos | I just came across this website, CensusAtSchool -- Informal inference. Maybe worth looking at the videos and handouts...
4,413 | Mathematical Statistics Videos | An introductory set of statistics lectures with a voice-over on a slide presentation:
http://www.online.math.uh.edu/Math2311/index.htm
The lecture series is elementary, but I like how the lecturer communicates clearly and shows how to speak the formulas encountered in statistics.
4,414 | Mathematical Statistics Videos | Years ago the ASA videotaped workshops/short courses on special topics such as time series, survival analysis, and categorical data analysis. They were available for chapters to rent. You might check to see what they have. Short courses at the JSM were occasionally videotaped. I don't know if general math-stat courses are available.
4,415 | Mathematical Statistics Videos | Bookmark http://www.edxonline.org; it's bound to have all the math videos you could wish for. I believe they are hoping to launch this fall.
4,416 | Mathematical Statistics Videos | Opinionated Lessons in Statistics
http://wiki.opinionatedlessons.org/coursewiki/index.php/OpinionatedLessons.org/
Around 50 videos on statistics by Professor William H. Press of the University of Texas at Austin. Each video is around 10 to 30 minutes long.
A number of more advanced topics are covered, such as mixture models, EM methods, MCMC, PCA, and more.
4,417 | Mathematical Statistics Videos | Biostatistical bootcamp is a Coursera course on mathematical statistics. The videos are also available on Brian Caffo's YouTube channel.
4,418 | Why would R return NA as a lm() coefficient? | NA as a coefficient in a regression indicates that the variable in question is linearly related to the other variables. In your case, this means that $Q3 = a \times Q1 + b \times Q2 + c$ for some $a, b, c$. If this is the case, then there's no unique solution to the regression without dropping one of the variables. Adding $Q4$ is only going to make matters worse.
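A minimal R illustration of this (the data below are simulated for the example; Q1-Q3 are hypothetical stand-ins for the question's variables):
# Q3 is an exact linear combination of Q1, Q2 and the intercept, so lm() aliases it
set.seed(1)
Q1 <- rnorm(50); Q2 <- rnorm(50)
Q3 <- 2 * Q1 - 3 * Q2 + 1
y  <- Q1 + Q2 + rnorm(50)
coef(lm(y ~ Q1 + Q2 + Q3))   # the coefficient reported for Q3 is NA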
4,419 | Why would R return NA as a lm() coefficient? | I found this behavior when attempting to fit observations vs. time, where time was given as POSIXct. lm() and lsfit() both determined that the x's were collinear. The problem was solved by subtracting the mean of the time datum before doing the fit.
This appears to be a deficiency in the underlying code -- there must be some single-precision operations, or a non-optimal order of operations. I have never seen it before, so it may be new.
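A sketch of the centering workaround described above (the data are simulated; whether the un-centered fit actually degrades will depend on the data and R version):
tm <- as.POSIXct("2020-01-01", tz = "UTC") + seq(0, 86400, by = 600)
x  <- as.numeric(tm)                       # seconds since 1970, values near 1.6e9
y  <- 0.001 * (x - mean(x)) + rnorm(length(x), sd = 0.1)
coef(lm(y ~ x))                            # raw time as the predictor
coef(lm(y ~ I(x - mean(x))))               # centered time: numerically better conditioned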
4,420 | Why would R return NA as a lm() coefficient? | I also got this behavior in R version 4.2.0 with an integer64 dataframe column.
It was being fetched by an SQL query via RPostgres, from a PostgreSQL column of type int8 (that's an 8-byte/64-bit integer).
Luckily, the data didn't actually exceed the 2-billion 32-bit cap, so a simple downconversion helped:
df$some_field <- as.integer(df$some_field)
Before I did that, lm would produce both NaN's and NA's in the coefficients.
How to diagnose: use class(df$some_field) to see which type the fields have.
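A small hedged sketch of that diagnosis and fix for a data frame df fetched via RPostgres (df and its columns are hypothetical; this assumes the values fit in the 32-bit integer range):
sapply(df, class)                                   # look for "integer64" columns
is_i64 <- vapply(df, inherits, logical(1), what = "integer64")
df[is_i64] <- lapply(df[is_i64], as.integer)        # downconvert before calling lm()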
4,421 | What is a good algorithm for estimating the median of a huge read-once data set? [duplicate] | Could you group the data set into much smaller data sets (say 100 or 1,000 or 10,000 data points)? You could then calculate the median of each of the groups. If you did this with enough data sets, you could take something like the average of the results of each of the smaller sets, and this would, given enough smaller data sets, converge to an 'average' solution.
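A minimal R sketch of this idea (the chunk size and the simulated data are illustrative; in practice each chunk would be read from disk rather than held in memory):
set.seed(1)
stream     <- rnorm(1e6)                                  # stand-in for the read-once data
chunks     <- split(stream, ceiling(seq_along(stream) / 10000))
chunk_meds <- vapply(chunks, median, numeric(1))
mean(chunk_meds)    # the suggested summary of the per-group medians
median(stream)      # exact median, for comparison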
4,422 | What is a good algorithm for estimating the median of a huge read-once data set? [duplicate] | How about something like a binning procedure? Assume (for illustration purposes) that you know that the values are between 1 and 1 million. Set up N bins of size S. So if S = 10000, you'd have 100 bins, corresponding to values [1:10000, 10001:20000, ..., 990001:1000000].
Then, step through the values. Instead of storing each value, just increment the counter in the appropriate bin. Using the midpoint of each bin as an estimate, you can make a reasonable approximation of the median. You can scale this to as fine or coarse a resolution as you want by changing the size of the bins. You're limited only by how much memory you have.
Since you don't know how big your values may get, just pick a bin size large enough that you aren't likely to run out of memory, using some quick back-of-the-envelope calculations. You might also store the bins sparsely, such that you only add a bin if it contains a value.
Edit:
The link ryfm provides gives an example of doing this, with the additional step of using the cumulative percentages to more accurately estimate the point within the median bin, instead of just using midpoints. This is a nice improvement.
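A rough R sketch of the binning idea under the same illustrative assumptions (values in 1..1,000,000, bin width 10,000); only the vector of counts is kept in memory:
bin_width <- 10000
breaks    <- seq(1, 1e6 + 1, by = bin_width)              # defines 100 bins
counts    <- integer(length(breaks) - 1)
# process the stream in chunks, e.g. inside a loop over file blocks:
set.seed(1)
chunk  <- sample.int(1e6, 1e5, replace = TRUE)            # stand-in for one chunk
counts <- counts + tabulate(findInterval(chunk, breaks), nbins = length(counts))
median_bin <- which(cumsum(counts) >= sum(counts) / 2)[1]
breaks[median_bin] + bin_width / 2                        # bin-midpoint estimate of the median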
4,423 | What is a good algorithm for estimating the median of a huge read-once data set? [duplicate] | I redirect you to my answer to a similar question. In a nutshell, it's a read-once, 'on the fly' algorithm with $O(n)$ worst-case complexity to compute the (exact) median.
4,424 | What is a good algorithm for estimating the median of a huge read-once data set? [duplicate] | I implemented the P-Square Algorithm for Dynamic Calculation of Quantiles and Histograms without Storing Observations in a neat Python module I wrote called LiveStats. It should solve your problem quite effectively.
4,425 | What is a good algorithm for estimating the median of a huge read-once data set? [duplicate] | The Rivest-Tarjan selection algorithm (sometimes also called the median-of-medians algorithm) will let you compute the median element in linear time without any sorting. For large data sets this can be quite a bit faster than log-linear sorting. However, it won't solve your memory storage problem.
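For the curious, here is a compact (unoptimised) R sketch of the median-of-medians selection idea; it finds the k-th smallest element without fully sorting:
select_kth <- function(x, k) {
  if (length(x) <= 5) return(sort(x)[k])
  groups <- split(x, ceiling(seq_along(x) / 5))
  # lower-middle element of each group of (at most) 5
  meds   <- vapply(groups, function(g) sort(g)[ceiling(length(g) / 2)], numeric(1))
  pivot  <- select_kth(meds, ceiling(length(meds) / 2))   # a guaranteed "good" pivot
  lower  <- x[x < pivot]
  n_eq   <- sum(x == pivot)
  if (k <= length(lower))        return(select_kth(lower, k))
  if (k <= length(lower) + n_eq) return(pivot)
  select_kth(x[x > pivot], k - length(lower) - n_eq)
}
x <- rnorm(10001)
select_kth(x, 5001) == median(x)   # TRUE: the 5001st smallest of 10001 values is the median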
4,426 | What is a good algorithm for estimating the median of a huge read-once data set? [duplicate] | I've never had to do this, so this is just a suggestion.
I see two (other) possibilities.
Half data
Load in half the data and sort it.
Next, read in the remaining values one at a time and compare each against your sorted list.
If the new value is larger, discard it.
Otherwise, put the value in the sorted list and remove the largest value from that list.
(At the end, the largest value remaining in the list is your median estimate.)
Sampling distribution
The other option is to use an approximation involving the sampling distribution. If your data are Normal, then the standard error of the median for moderate n is:
1.253 * sd / sqrt(n)
To determine the size of n that you would be happy with, I ran a quick Monte Carlo simulation in R:
n <- 10000
outside.ci.uni <- 0
outside.ci.nor <- 0
N <- 1000
for (i in 1:N) {
  # Theoretical median is 0 for both distributions
  uni <- runif(n, -10, 10)
  nor <- rnorm(n, 0, 10)
  # Count how often the sample median falls outside a normal-theory 95% CI
  if (abs(median(uni)) > 1.96 * 1.253 * sd(uni) / sqrt(n))
    outside.ci.uni <- outside.ci.uni + 1
  if (abs(median(nor)) > 1.96 * 1.253 * sd(nor) / sqrt(n))
    outside.ci.nor <- outside.ci.nor + 1
}
outside.ci.uni / N
outside.ci.nor / N
For n = 10000, 15% of the uniform median estimates were outside the CI.
4,427 | What is a good algorithm for estimating the median of a huge read-once data set? [duplicate] | The Remedian Algorithm (PDF) gives a one-pass median estimate with low storage requirements and well-defined accuracy.
The remedian with base b proceeds by computing medians of groups of b observations, and then medians of these medians, until only a single estimate remains. This method merely needs k arrays of size b (where n = b^k)...
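A small R sketch of the remedian with base b = 11 and k = 3 (so it covers 11^3 = 1331 observations here); a streaming version would refill the group buffers as data arrive:
set.seed(1)
x  <- rnorm(11^3)
m1 <- apply(matrix(x,  nrow = 11), 2, median)   # medians of groups of 11
m2 <- apply(matrix(m1, nrow = 11), 2, median)   # medians of those medians
c(remedian = median(m2), exact = median(x))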
4,428 | What is a good algorithm for estimating the median of a huge read-once data set? [duplicate] | If the values you are using are within a certain range, say 1 to 100000, you can efficiently compute the median on an extremely large number of values (say, trillions of entries) with an integer bucket (this code is adapted from the BSD-licensed ea-utils/sam-stats.cpp):
#include <cassert>
#include <vector>
using std::vector;

// Counting-bucket container: push() increments the count for a value,
// operator[](n) returns the n-th smallest value stored so far.
class ibucket {
public:
    int tot;
    vector<int> dat;
    ibucket(int max) { dat.resize(max + 1); tot = 0; }
    int size() const { return tot; }
    int operator[](int n) const {
        assert(n < size());
        for (int i = 0; i < (int) dat.size(); ++i) {
            if (n < dat[i]) {
                return i;
            }
            n -= dat[i];
        }
        return -1;  // unreachable given the assert above
    }
    void push(int v) {
        assert(v < (int) dat.size());
        ++dat[v];
        ++tot;
    }
};

// Interpolated quantile over any indexable container; the median is quantile(vec, 0.5).
template <class vtype>
double quantile(const vtype &vec, double p) {
    int l = vec.size();
    if (!l) return 0;
    double t = ((double) l - 1) * p;
    int it = (int) t;
    int v = vec[it];
    if (t > (double) it) {
        return v + (t - it) * (vec[it + 1] - v);
    } else {
        return v;
    }
}
4,429 | What is a good algorithm for estimating the median of a huge read-once data set? [duplicate] | Another thought is along the lines of random sampling. I had a similar problem: I have > 100 million data (each has 10 million), and the computation took too long. You can just randomly sample N data points from the 10 million, then find the median of those.
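A one-line sketch of the sampling idea (the sizes below are illustrative):
set.seed(1)
big <- rnorm(1e7)                          # stand-in for one large data vector
c(estimate = median(sample(big, 1e5)), exact = median(big))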
4,430 | What is a good algorithm for estimating the median of a huge read-once data set? [duplicate] | Here's an answer to the question asked on Stack Overflow: https://stackoverflow.com/questions/1058813/on-line-iterator-algorithms-for-estimating-statistical-median-mode-skewness/2144754#2144754
The iterative update median += eta * sgn(sample - median) sounds like it could be a way to go.
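A quick R sketch of that stochastic-approximation update (eta and the simulated stream are illustrative; in practice eta needs tuning or decaying):
set.seed(1)
stream <- rnorm(1e5, mean = 3)
eta <- 0.01
m   <- 0
for (x in stream) m <- m + eta * sign(x - m)
c(running_estimate = m, exact = median(stream))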
4,431 | How to determine whether or not the y-axis of a graph should start at zero? | Don't use space in a graph in any way that doesn't help understanding. Space is needed to show the data!
Use your scientific (engineering, medical, social, business, ...) judgement as well as your statistical judgement. (If you are not the client or customer, talk to someone in the field to get an idea of what is interesting or important, preferably those commissioning the analysis.)
Show zero on the $y$ axis if comparisons with zero are central to the problem, or even of some interest.
Those are three simple rules. (Nothing rules out some tension between them on occasion.)
Here is a simple example, but all three points arise: You measure body temperature of a patient in Celsius, or in Fahrenheit, or even in kelvin: take your pick. In what sense whatsoever is it either helpful or even logical to insist on showing zero temperatures? Important, even medically or physiologically crucial, information will be obscured otherwise.
Here is a true story from a presentation. A researcher was showing data on sex ratios for various states and union territories in India. The graphic was a bar chart with all bars starting at zero. All bars were close to the same length despite some considerable variation. That was correct, but the interesting story was that areas were different despite similarities, not that they were similar despite differences. I suggested that parity between males and females (1 or 100 females/100 males) was a much more natural reference level. (I would also be open to using some overall level, such as the national mean, as a reference.) Even some statistical people who have heard this little story have sometimes replied, "No; bars should always start at zero." To me that is no better than irrelevant dogma in such a case. (I would also argue that dot charts make as much or more sense for such data.)
EDIT 27 December 2022. See Smith, Alan. 2022. How Charts Work: Understand and Explain Data with Confidence. Harlow: Pearson, pp.155-161 for an extended example with similar flavour, using the principle that bars showing Gender Parity Index may and should start at the reference value of 1 (genders equally represented).
Mentioning bar charts points up that the kind of graph used is important too.
Suppose for body temperatures a $y$ axis range from 35 to 40$^\circ$C is chosen for convenience as including all the data, so that the $y$ axis "starts" at 35. Clearly bars all starting at 35 would be a poor encoding of the data. But here the problem would be inappropriate choice of graph element, not poorly chosen axis range.
A common kind of plot, especially it seems in some biological and medical sciences, shows means or other summaries by thick bars starting at zero and standard error or standard deviation-based intervals indicating uncertainty by thin bars. Such detonator or dynamite plots, as they have been called by those who disapprove, may be popular partly because of a dictum that zero should always be shown. The net effect is to emphasise comparisons with zero that are often lacking in interest or utility.
Some people would want to show zero, but also to add a scale break to show that the scale is interrupted. Fashions change and technology changes. Decades ago, when researchers drew their own graphs or delegated the task to technicians, it was easier to ask that this be done by hand. Now graphics programs often don't support scale breaks, which I think is no loss. Even if they do, that is a fussy addition that can waste a moderate fraction of the graphic's area.
Note that no-one insists on the same rule for the $x$ axis. Why not? If you show climatic or economic fluctuations for the last century or so, it would be bizarre to be told that the scale should start at the BC/CE boundary or any other origin.
There is naturally a zeroth rule that applies in addition to the three mentioned.
Whatever you do, be very clear. Label your axes consistently and informatively. Then trust that careful readers will look to see what you have done.
Thus on this point I agree strongly with Edward Tufte, and I disagree with Darrell Huff.
EDIT 9 May 2016:
rather than trying to invariably include a 0-baseline in all your charts, use logical and meaningful baselines instead
Cairo, A. 2016. The Truthful Art: Data, Charts, and Maps for Communication. San Francisco, CA: New Riders, p.136.
4,432 | Prediction interval for lmer() mixed effects model in R | This question and excellent exchange was the impetus for creating the predictInterval function in the merTools package. bootMer is the way to go, but for some problems it is not computationally feasible to generate bootstrapped refits of the whole model (in cases where the model is large).
In those cases, predictInterval is designed to use the arm::sim functions to generate distributions of parameters in the model and then to use those distributions to generate simulated values of the response given the newdata provided by the user. It's simple to use -- all you would need to do is:
library(merTools)
preds <- predictInterval(lme1, newdata = newDat, n.sims = 999)
You can specify a whole host of other values to predictInterval, including setting the interval for the prediction intervals, choosing whether to report the mean or median of the distribution, and choosing whether or not to include the residual variance from the model.
It's not a full prediction interval because the variability of the theta parameters in the lmer object is not included, but all of the other variation is captured through this method, giving a pretty decent approximation.
4,433 | Prediction interval for lmer() mixed effects model in R | Do this by making bootMer generate a set of predictions for each parametric bootstrap replicate:
predFun <- function(fit) {
  predict(fit, newDat)
}
bb <- bootMer(lme1, nsim = 200, FUN = predFun, seed = 101)
The output of bootMer is in a not-terribly-transparent "boot" object, but we can get the raw predictions out of the $t component.
How much of the time does Fish E beat Fish D?
predMat <- bb$t
dim(predMat)  ## 200 rows (PB reps) x 10 (predictions)
Fish E's times are in column 5 and Fish D's times are in column 4, so we just need to know the proportion of replicates in which column 5 is less than column 4:
mean(predMat[, 5] < predMat[, 4])  ## 0.57
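A hedged add-on to the above: the same matrix of bootstrap predictions can be summarised into intervals, e.g. a 95% interval (and median) for each of the 10 predictions in newDat:
t(apply(bb$t, 2, quantile, probs = c(0.025, 0.5, 0.975)))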
4,434 | Kullback–Leibler vs Kolmogorov-Smirnov distance | The KL-divergence is typically used in information-theoretic settings, or even Bayesian settings, to measure the information change between distributions before and after applying some inference, for example. It's not a distance in the typical (metric) sense, because of lack of symmetry and triangle inequality, and so it's used in places where the directionality is meaningful.
The KS-distance is typically used in the context of a non-parametric test. In fact, I've rarely seen it used as a generic "distance between distributions", where the $\ell_1$ distance, the Jensen-Shannon distance, and other distances are more common. | Kullback–Leibler vs Kolmogorov-Smirnov distance | The KL-divergence is typically used in information-theoretic settings, or even Bayesian settings, to measure the information change between distributions before and after applying some inference, for | Kullback–Leibler vs Kolmogorov-Smirnov distance
The KL-divergence is typically used in information-theoretic settings, or even Bayesian settings, to measure the information change between distributions before and after applying some inference, for example. It's not a distance in the typical (metric) sense, because of lack of symmetry and triangle inequality, and so it's used in places where the directionality is meaningful.
The KS-distance is typically used in the context of a non-parametric test. In fact, I've rarely seen it used as a generic "distance between distributions", where the $\ell_1$ distance, the Jensen-Shannon distance, and other distances are more common. | Kullback–Leibler vs Kolmogorov-Smirnov distance
The KL-divergence is typically used in information-theoretic settings, or even Bayesian settings, to measure the information change between distributions before and after applying some inference, for |
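For reference, the two quantities being contrasted are, written for discrete distributions,
$$D_{\mathrm{KL}}(P \,\|\, Q) = \sum_{x} P(x)\,\log\frac{P(x)}{Q(x)}, \qquad D_{\mathrm{KS}}(P, Q) = \sup_{x}\,\bigl|F_P(x) - F_Q(x)\bigr|,$$
where $F_P$ and $F_Q$ are the cumulative distribution functions; swapping $P$ and $Q$ changes the first expression but not the second, which is the asymmetry/symmetry point made above.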
4,435 | Kullback–Leibler vs Kolmogorov-Smirnov distance | Another way of stating the same thing as the previous answer in more layman terms:
KL Divergence - Actually provides a measure of how far apart two distributions are from each other. As mentioned by the previous answer, this measure isn't an appropriate distance metric since it's not symmetrical. I.e. the distance between distribution A and B is a different value from the distance between distribution B and A.
Kolmogorov-Smirnov Test - This is an evaluation metric that looks at greatest separation between the cumulative distribution of a test distribution relative to a reference distribution. In addition, you can use this metric just like a z-score against the Kolmogorov distribution to perform a hypothesis test as to whether the test distribution is the same distribution as the reference. This metric can be used as a distance function as it is symmetric. I.e. greatest separation between CDF of A vs CDF of B is the same as greatest separation between CDF of B vs CDF of A. | Kullback–Leibler vs Kolmogorov-Smirnov distance | Another way of stating the same thing as the previous answer in more layman terms:
KL Divergence - Actually provides a measure of how big of a difference are two distributions from each other. As ment | Kullback–Leibler vs Kolmogorov-Smirnov distance
Another way of stating the same thing as the previous answer in more layman terms:
KL Divergence - Actually provides a measure of how far apart two distributions are from each other. As mentioned by the previous answer, this measure isn't an appropriate distance metric since it's not symmetrical. I.e. the distance between distribution A and B is a different value from the distance between distribution B and A.
Kolmogorov-Smirnov Test - This is an evaluation metric that looks at greatest separation between the cumulative distribution of a test distribution relative to a reference distribution. In addition, you can use this metric just like a z-score against the Kolmogorov distribution to perform a hypothesis test as to whether the test distribution is the same distribution as the reference. This metric can be used as a distance function as it is symmetric. I.e. greatest separation between CDF of A vs CDF of B is the same as greatest separation between CDF of B vs CDF of A. | Kullback–Leibler vs Kolmogorov-Smirnov distance
Another way of stating the same thing as the previous answer in more layman terms:
KL Divergence - Actually provides a measure of how big of a difference are two distributions from each other. As ment |
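A small numerical illustration of the symmetry point above, as a sketch in R (the two discrete distributions are arbitrary made-up examples):
# Two discrete distributions on the same four-point support
p <- c(0.1, 0.2, 0.3, 0.4)
q <- c(0.25, 0.25, 0.25, 0.25)
kl <- function(a, b) sum(a * log(a / b))  # KL divergence, assumes strictly positive b
kl(p, q)  # KL(p || q)
kl(q, p)  # KL(q || p) -- a different number: KL is not symmetric
max(abs(cumsum(p) - cumsum(q)))  # Kolmogorov-Smirnov distance: same either way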
4,436 | Kullback–Leibler vs Kolmogorov-Smirnov distance | KL divergence controls the Kolmogorov distance and the total variation, meaning that if two distributions $\mathcal{D}_1, \mathcal{D}_2$ have a small KL divergence, then it follows that $\mathcal{D}_1, \mathcal{D}_2$ have a small total variation and subsequently a small Kolmogorov distance (in that order).
Also check out this paper for more information - On Choosing and Bounding Probability Metrics, by Gibbs and Su. | Kullback–Leibler vs Kolmogorov-Smirnov distance | KL divergence upper bounds Kolmogrov Distance and Total variation, meaning that if two distributions $\mathcal{D}_1, \mathcal{D}_2$ have a small KL divergence, then it follows that $\mathcal{D}_1, \ma | Kullback–Leibler vs Kolmogorov-Smirnov distance
KL divergence controls the Kolmogorov distance and the total variation, meaning that if two distributions $\mathcal{D}_1, \mathcal{D}_2$ have a small KL divergence, then it follows that $\mathcal{D}_1, \mathcal{D}_2$ have a small total variation and subsequently a small Kolmogorov distance (in that order).
Also check out this paper for more information - On Choosing and Bounding Probability Metrics, by Gibbs and Su. | Kullback–Leibler vs Kolmogorov-Smirnov distance
KL divergence upper bounds Kolmogrov Distance and Total variation, meaning that if two distributions $\mathcal{D}_1, \mathcal{D}_2$ have a small KL divergence, then it follows that $\mathcal{D}_1, \ma |
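Spelling this ordering out, with Pinsker's inequality supplying the last link (see the Gibbs and Su survey cited above):
$$D_{\mathrm{KS}}(\mathcal{D}_1, \mathcal{D}_2) \;\le\; \mathrm{TV}(\mathcal{D}_1, \mathcal{D}_2) \;\le\; \sqrt{\tfrac{1}{2}\, D_{\mathrm{KL}}(\mathcal{D}_1 \,\|\, \mathcal{D}_2)},$$
so driving the KL divergence to zero forces both of the other distances to zero as well.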
4,437 | Kullback–Leibler vs Kolmogorov-Smirnov distance | KS test and KL divergence test both are used to find the difference between two distributions
The KS test is statistics-based and the KL divergence is information-theory-based.
But one major difference between the KL and KS approaches, and the reason KL is more popular in machine learning, is that the formulation of KL divergence is differentiable. For solving optimization problems in machine learning we need the objective function to be differentiable.
In the context of machine learning, KL_dist(P||Q) is often called the information gain achieved if Q is used instead of P
links:
https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test
https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test | Kullback–Leibler vs Kolmogorov-Smirnov distance | KS test and KL divergence test both are used to find the difference between two distributions
KS test is statistical-based and KL divergence is information theory-based
But the one major diff between | Kullback–Leibler vs Kolmogorov-Smirnov distance
KS test and KL divergence test both are used to find the difference between two distributions
The KS test is statistics-based and the KL divergence is information-theory-based.
But one major difference between the KL and KS approaches, and the reason KL is more popular in machine learning, is that the formulation of KL divergence is differentiable. For solving optimization problems in machine learning we need the objective function to be differentiable.
In the context of machine learning, KL_dist(P||Q) is often called the information gain achieved if Q is used instead of P
links:
https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test
https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test | Kullback–Leibler vs Kolmogorov-Smirnov distance
KS test and KL divergence test both are used to find the difference between two distributions
KS test is statistical-based and KL divergence is information theory-based
But the one major diff between |
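To make the differentiability point concrete with one standard illustration: for two normal distributions with a common variance the KL divergence has the closed form
$$D_{\mathrm{KL}}\bigl(\mathcal{N}(\mu_1,\sigma^2)\,\|\,\mathcal{N}(\mu_2,\sigma^2)\bigr) = \frac{(\mu_1-\mu_2)^2}{2\sigma^2},$$
which is smooth in the parameters and so can be minimised by gradient methods, whereas the KS statistic is defined through a supremum of a CDF difference and is generally not smooth in the model parameters.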
4,438 | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a result of a car accident?" | First of all, my first intuitive thought was: "S2 can only be the same as S1 if the traffic death rate stays constant, possibly over decades" - which certainly wouldn't have been a good assumption in the last so many decades. This already hints that one difficulty lies with implicit/unspoken temporal assumptions.
I'd say your statements have the form
1 in $x$ $population$ experience $event$.
In S1, the population are deaths, and the implied temporal specification is at present or "in a suitably large [to have sufficient case numbers] but not too wide time frame [to have approximately constant car accident characteristics] around the present"
In S2, the population are people. And others seem to read this not as "dying people" but as "living people" (which, after all, is what people do more often and for longer).
If you read the population as living people, clearly, not one of every 80 people living now dies "now" of a car accident. So that is read as "when they are dying [possibly decades from now], the cause of death is car accident".
Take home message: always be careful to spell out who your population are and the denominator of fractions in general. (Gerd Gigerenzer has papers about not spelling out the denominator being a major cause of confusion, particularly in statistics and risk communication). | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a res | First of all, my first intuitive thought was: "S2 can only be the same as S1 if the traffic death rate stays constant, possibly over decades" - which certainly wouldn't have been a good assumption in | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a result of a car accident?"
First of all, my first intuitive thought was: "S2 can only be the same as S1 if the traffic death rate stays constant, possibly over decades" - which certainly wouldn't have been a good assumption in the last so many decades. This already hints that one difficulty lies with implicit/unspoken temporal assumptions.
I'd say your statements have the form
1 in $x$ $population$ experience $event$.
In S1, the population are deaths, and the implied temporal specification is at present or "in a suitably large [to have sufficient case numbers] but not too wide time frame [to have approximately constant car accident characteristics] around the present"
In S2, the population are people. And others seem to read this not as "dying people" but as "living people" (which, after all, is what people do more often and for longer).
If you read the population as living people, clearly, not one of every 80 people living now dies "now" of a car accident. So that is read as "when they are dying [possibly decades from now], the cause of death is car accident".
Take home message: always be careful to spell out who your population are and the denominator of fractions in general. (Gerd Gigerenzer has papers about not spelling out the denominator being a major cause of confusion, particularly in statistics and risk communication). | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a res
First of all, my first intuitive thought was: "S2 can only be the same as S1 if the traffic death rate stays constant, possibly over decades" - which certainly wouldn't have been a good assumption in |
4,439 | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a result of a car accident?" | To me "1 in 80 deaths..." is by far the clearer statement. The denominator in your "1 in 80" is the set of all death events and that statement makes it explicit.
There's ambiguity in the "1 in 80 people..." formulation. You really mean "1 in 80 people who dies..." but the statement can just as easily be interpreted as "1 in 80 people now alive..." or similar.
I'm all for being explicit about the reference set in probability or frequency assertions like this. If you're talking about the proportion of deaths, then say "deaths" not "people". | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a res | To me "1 in 80 deaths..." is by far the clearer statement. The denominator in your "1 in 80" is the set of all death events and that statement makes it explicit.
There's ambiguity in the "1 in 80 pe | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a result of a car accident?"
To me "1 in 80 deaths..." is by far the clearer statement. The denominator in your "1 in 80" is the set of all death events and that statement makes it explicit.
There's ambiguity in the "1 in 80 people..." formulation. You really mean "1 in 80 people who dies..." but the statement can just as easily be interpreted as "1 in 80 people now alive..." or similar.
I'm all for being explicit about the reference set in probability or frequency assertions like this. If you're talking about the proportion of deaths, then say "deaths" not "people". | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a res
To me "1 in 80 deaths..." is by far the clearer statement. The denominator in your "1 in 80" is the set of all death events and that statement makes it explicit.
There's ambiguity in the "1 in 80 pe |
4,440 | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a result of a car accident?" | It depends on whether you are describing or predicting.
"1 in 80 people will die in a car accident" is a prediction. Of all the people alive today, some time within their remaining lifetime, one in 80 will die that way.
"1 in 80 deaths are caused by a car accident" is a description. Of all the people who died in a given period (e.g. the time span of a supporting study), 1 in 80 of them did indeed die in a car accident.
Note that the time window here is ambiguous. One sentence implies that the deaths have already occurred; the other implies they will occur some day. One sentence implies that your baseline population is people who have died (and who were alive before that); the other implies a baseline population of people who are alive today (and will die eventually).
These are actually different statements entirely, and only one of them is probably supported by your source data.
On a side note, the ambiguity arises from a mismatch between the state of being a person (which happens continuously) and the event of dying (which happens at a point in time). Whenever you combine things in this way you get something that is similarly ambiguous. You can instantly resolve the ambiguity by using two events instead of one state and one event; for example, "Of each 80 people who are born, 1 dies in a car accident." | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a res | It depends on whether you are describing or predicting.
"1 in 80 people will die in a car accident" is a prediction. Of all the people alive today, some time within their remaining lifetime, one in 8 | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a result of a car accident?"
It depends on whether you are describing or predicting.
"1 in 80 people will die in a car accident" is a prediction. Of all the people alive today, some time within their remaining lifetime, one in 80 will die that way.
"1 in 80 deaths are caused by a car accident" is a description. Of all the people who died in a given period (e.g. the time span of a supporting study), 1 in 80 of them did indeed die in a car accident.
Note that the time window here is ambiguous. One sentence implies that the deaths have already occurred; the other implies they will occur some day. One sentence implies that your baseline population is people who have died (and who were alive before that); the other implies a baseline population of people who are alive today (and will die eventually).
These are actually different statements entirely, and only one of them is probably supported by your source data.
On a side note, the ambiguity arises from a mismatch between the state of being a person (which happens continuously) and the event of dying (which happens at a point in time). Whenever you combine things in this way you get something that is similarly ambiguous. You can instantly resolve the ambiguity by using two events instead of one state and one event; for example, "Of each 80 people who are born, 1 dies in a car accident." | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a res
It depends on whether you are describing or predicting.
"1 in 80 people will die in a car accident" is a prediction. Of all the people alive today, some time within their remaining lifetime, one in 8 |
4,441 | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a result of a car accident?" | The two statements are different because of sampling bias, because car accidents are more likely to occur when people are young.
Let's make this more concrete by positing an unrealistic scenario.
Consider the two statements:
One half of all deaths are caused by a car accident.
One half of all people alive today will die in a car accident.
We will show that these two statements are not the same.
Let's simplify things greatly and suppose that everybody born will either
die of a heart attack at age 80 or a car accident at age 40. Further, let's suppose that the first statement above holds, and that we're in a steady state population, so deaths balance births. Then there will be three populations of humans, all equally large.
People under 40 who will die of a car accident.
People under 40 who will die of a heart attack.
People over 40 who will die of a heart attack.
These three populations have to be equally large, because the rate of people dying in car accidents (from the first population above) and the rate of people dying in heart attacks (from the third population above) are equal.
Why are they equal? The number of people who die in car accidents each year is $1/40$ of the number of people in the first population, and the number of people who die by heart attacks is $1/40$ of the number of people in the third population, so the two populations have to have equal size. Further, the second population is the same size as the third (because the third population is the second, 40 years later).
So in this case, only one third of all people alive today will die in a car accident, so the two statements are not the same.
In real life, my impression is that car accidents occur at a significantly younger age than most other causes of death. If this is the case, there will be a substantial difference between the numbers in your statement one and two.
If you modified the second statement to
One half of all people born will die in a car accident,
then under the assumption of a steady state population, the two statements would be equivalent. But of course, in the real world we don't have a steady state population, and a similar (although more complicated) argument shows that for a growing, or shrinking, population, sampling bias still makes these two statements different. | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a res | The two statements are different because of sampling bias, because car accidents are more likely to occur when people are young.
Let's make this more concrete by positing an unrealistic scenario.
Cons | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a result of a car accident?"
The two statements are different because of sampling bias, because car accidents are more likely to occur when people are young.
Let's make this more concrete by positing an unrealistic scenario.
Consider the two statements:
One half of all deaths are caused by a car accident.
One half of all people alive today will die in a car accident.
We will show that these two statements are not the same.
Let's simplify things greatly and suppose that everybody born will either
die of a heart attack at age 80 or a car accident at age 40. Further, let's suppose that the first statement above holds, and that we're in a steady state population, so deaths balance births. Then there will be three populations of humans, all equally large.
People under 40 who will die of a car accident.
People under 40 who will die of a heart attack.
People over 40 who will die of a heart attack.
These three populations have to be equally large, because the rate of people dying in car accidents (from the first population above) and the rate of people dying in heart attacks (from the third population above) are equal.
Why are they equal? The number of people who die in car accidents each year is $1/40$ of the number of people in the first population, and the number of people who die by heart attacks is $1/40$ of the number of people in the third population, so the two populations have to have equal size. Further, the second population is the same size as the third (because the third population is the second, 40 years later).
So in this case, only one third of all people alive today will die in a car accident, so the two statements are not the same.
In real life, my impression is that car accidents occur at a significantly younger age than most other causes of death. If this is the case, there will be a substantial difference between the numbers in your statement one and two.
If you modified the second statement to
One half of all people born will die in a car accident,
then under the assumption of a steady state population, the two statements would be equivalent. But of course, in the real world we don't have a steady state population, and a similar (although more complicated) argument shows that for a growing, or shrinking, population, sampling bias still makes these two statements different. | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a res
The two statements are different because of sampling bias, because car accidents are more likely to occur when people are young.
Let's make this more concrete by positing an unrealistic scenario.
Cons |
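A back-of-the-envelope version of the steady-state count above, under the same assumptions: with $D$ deaths (and births) per year, the living population is
$$\underbrace{\tfrac{D}{2}\cdot 40}_{\text{future crash victims, under 40}} + \underbrace{\tfrac{D}{2}\cdot 40}_{\text{future heart-attack victims, under 40}} + \underbrace{\tfrac{D}{2}\cdot 40}_{\text{future heart-attack victims, 40 to 80}} = 60D,$$
of which $20D/60D = 1/3$ will eventually die in a car accident, matching the one-third figure above.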
4,442 | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a result of a car accident?" | Is my default interpretation indeed equivalent to Statement One?
No.
Let's say we have 800 people. 400 died: 5 from a car crash, the other 395 forgot to breathe. S1 is now true: 5/400=1/80. S2 is false: 5/800!=1/80.
The problem is that technically S2 is ambiguous because it doesn't specify how many deaths there were in total, while S1 does. Alternately, S1 has one more piece of information (total deaths) and one less piece of information (total people). Taken at face value, they describe different ratios.
Is it unusual or reckless for this to be my default interpretation?
I actually disagree with your interpretation, but I think it doesn't matter. Likely, context would make it obvious what is meant.
On the one hand, obviously all people die, thus it is implicit that total people = total deaths. So if you are discussing rates of death in general, your default interpretation applies.
On the other, if you are discussing a limited data set in which it is not a given that everybody dies, my interpretation above is more accurate. But it seems not hard for the reader to overlook this.
You might ask where you could possibly encounter people who don't die. For one, we could be working with a statistical dataset that only tracks people for 5 years, so the ones still alive at the end of the study must be ignored, as it's not known what they will die from. Alternatively, the cause of death may be unknown, in which case you can't really assign it to cars or not cars.
If you do think S1 and S2 different, such that to state the second when one means the first is misleading/incorrect, could you please provide a fully-qualified revision of S2 that is equivalent?
"One in 80 people who die, does so as a result of a car accident." which amounts to rephrasing S1. | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a res | Is my default interpretation indeed equivalent to Statement One?
No.
Let's say we have 800 people. 400 died: 5 from a car crash, the other 395 forgot to breathe. S1 is now true: 5/400=1/80. S2 is fal | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a result of a car accident?"
Is my default interpretation indeed equivalent to Statement One?
No.
Let's say we have 800 people. 400 died: 5 from a car crash, the other 395 forgot to breathe. S1 is now true: 5/400=1/80. S2 is false: 5/800!=1/80.
The problem is that technically S2 is ambiguous because it doesn't specify how many deaths there were in total, while S1 does. Alternately, S1 has one more piece of information (total deaths) and one less piece of information (total people). Taken at face value, they describe different ratios.
Is it unusual or reckless for this to be my default interpretation?
I actually disagree with your interpretation, but I think it doesn't matter. Likely, context would make it obvious what is meant.
On the one hand, obviously all people die, thus it is implicit that total people = total deaths. So if you are discussing rates of death in general, your default interpretation applies.
On the other, if you are discussing a limited data set in which it is not a given that everybody dies, my interpretation above is more accurate. But it seems not hard for the reader to overlook this.
You might ask where you could possibly encounter people who don't die. For one, we could be working with a statistical dataset that only tracks people for 5 years, so the ones still alive at the end of the study must be ignored, as it's not known what they will die from. Alternatively, the cause of death may be unknown, in which case you can't really assign it to cars or not cars.
If you do think S1 and S2 different, such that to state the second when one means the first is misleading/incorrect, could you please provide a fully-qualified revision of S2 that is equivalent?
"One in 80 people who die, does so as a result of a car accident." which amounts to rephrasing S1. | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a res
Is my default interpretation indeed equivalent to Statement One?
No.
Let's say we have 800 people. 400 died: 5 from a car crash, the other 395 forgot to breathe. S1 is now true: 5/400=1/80. S2 is fal |
4,443 | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a result of a car accident?" | I would agree that your interpretation of the second statement is consistent with the first statement. I would also agree that it's a perfectly reasonable interpretation of the second statement. That being said, the second statement is much more ambiguous.
The second statement can also be interpreted as:
Given a sample of individuals in a recent car accident, 1/80 died.
Given a population sample at large, 1/80 will die because of factors related to a car accident, some of them being the accidents themselves, but some others being suicide, injuries, medical malpractice, vigilante justice, etc.
Extrapolating current safety trends indicates that 1/80 people alive today will die because of a car accident.
The second and third interpretations above might be close enough for lay audiences, but the first one is pretty substantially different. | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a res | I would agree that your interpretation of the second statement is consistent with the first statement. I would also agree that it's a perfectly reasonable interpretation of the second statement. That | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a result of a car accident?"
I would agree that your interpretation of the second statement is consistent with the first statement. I would also agree that it's a perfectly reasonable interpretation of the second statement. That being said, the second statement is much more ambiguous.
The second statement can also be interpreted as:
Given a sample of individuals in a recent car accident, 1/80 died.
Given a population sample at large, 1/80 will die because of factors related to a car accident, some of them being the accidents themselves, but some others being suicide, injuries, medical malpractice, vigilante justice, etc.
Extrapolating current safety trends indicates that 1/80 people alive today will die because of a car accident.
The second and third interpretations above might be close enough for lay audiences, but the first one is pretty substantially different. | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a res
I would agree that your interpretation of the second statement is consistent with the first statement. I would also agree that it's a perfectly reasonable interpretation of the second statement. That |
4,444 | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a result of a car accident?" | The basic difference is that the two statements refer to different populations of humans, and different time frames.
"One in 80 deaths is caused by a car accident" presumably refers to the proportion of deaths in some fairly limited time period (say one year). Since the proportion of the total population using cars, and the safety record of the cars, have both changed significantly over time, the statement doesn't make any sense unless you state what time interval it refers to. (As a ridiculous example, it would clearly have been completely wrong for the year 1919, considering the level of car ownership and use in the total population at that time). Note, the "proportion of the total population using cars" in the above is actually a mistake - it should be "the proportion of people who will die in the near future using cars" and that is going to be skewed by the fact that young and old people have different probabilities of dying from non-accident-related causes, and also have different amounts of car use.
"One in 80 people dies as a result of a car accident" presumably refers to all humans who are currently alive in some region, and their eventual cause of death at some unknown future time. Since the prevalence and safety of car travel will almost certainly change within their lifetimes (say within the next 100 years, for today's new-born infants) this is a very different statement from the first one. | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a res | The basic difference is that the two statements refer to different populations of humans, and different time frames.
"One in 80 deaths is caused by a car accident" presumably refers to the proportion | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a result of a car accident?"
The basic difference is that the two statements refer to different populations of humans, and different time frames.
"One in 80 deaths is caused by a car accident" presumably refers to the proportion of deaths in some fairly limited time period (say one year). Since the proportion of the total population using cars, and the safety record of the cars, have both changed significantly over time, the statement doesn't make any sense unless you state what time interval it refers to. (As a ridiculous example, it would clearly have been completely wrong for the year 1919, considering the level of car ownership and use in the total population at that time). Note, the "proportion of the total population using cars" in the above is actually a mistake - it should be "the proportion of people who will die in the near future using cars" and that is going to be skewed by the fact that young and old people have different probabilities of dying from non-accident-related causes, and also have different amounts of car use.
"One in 80 people dies as a result of a car accident" presumably refers to all humans who are currently alive in some region, and their eventual cause of death at some unknown future time. Since the prevalence and safety of car travel will almost certainly change within their lifetimes (say within the next 100 years, for today's new-born infants) this is a very different statement from the first one. | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a res
The basic difference is that the two statements refer to different populations of humans, and different time frames.
"One in 80 deaths is caused by a car accident" presumably refers to the proportion |
4,445 | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a result of a car accident?" | A1) Assuming everyone dies, and assuming the context of a sufficiently small period of time around that which the measurements were taken, yes, your interpretation of S2 matches S1.
A2) Yes, your interpretation of S2 is reckless. S2 can be interpreted as "1 in 80 people involved in car accidents die" which is obviously not equivalent to S1. Therefore using S2 could cause confusion.
Your interpretation of 1 in 80 is reasonable, though, and the other interpretation (1 in any 80) is very unusual. "1 in N of U is P" is a very common shorthand for "given a predicate, P, and N random samples, x, from universe U, the expected number of samples such that P(x) is true approximately equals 1".
A3) Out of all people, 1 in 80 dies as a result of a car accident. | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a res | A1) Assuming everyone dies, and assuming the context of a sufficiently small period of time around that which the measurements were taken, yes, your interpretation of S2 matches S1.
A2) Yes, your inte | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a result of a car accident?"
A1) Assuming everyone dies, and assuming the context of a sufficiently small period of time around that which the measurements were taken, yes, your interpretation of S2 matches S1.
A2) Yes, your interpretation of S2 is reckless. S2 can be interpreted as "1 in 80 people involved in car accidents die" which is obviously not equivalent to S1. Therefore using S2 could cause confusion.
Your interpretation of 1 in 80 is reasonable, though, and the other interpretation (1 in any 80) is very unusual. "1 in N of U is P" is a very common shorthand for "given a predicate, P, and N random samples, x, from universe U, the expected number of samples such that P(x) is true approximately equals 1".
A3) Out if all people, 1 in 80 dies as a result of a car accident. | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a res
A1) Assuming everyone dies, and assuming the context of a sufficiently small period of time around that which the measurements were taken, yes, your interpretation of S2 matches S1.
A2) Yes, your inte |
4,446 | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a result of a car accident?" | Yes, it is wrong, and neither phrasing seems sufficient to consistently convey your desired meaning
Speaking as a layperson, if your target is laypeople, I would definitely recommend posting over at https://english.stackexchange.com/, rather than here - your question took me a few reads to untangle what S1 & S2 intuitively mean to me vs. what you meant to say.
For the record, my interpretations of each statement:
(S1) - per 80 deaths, 1 death by car accident
(S2) - per 80 people in a car accident, 1 death
To convey your meaning, I would likely use a modified S2: "One in 80 people will die in a car accident."
This still contains some ambiguity, but keeps a similar brevity. | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a res | Yes, it is wrong, and neither phrasing seems sufficient to consistently convey your desired meaning
Speaking as a layperson, if your target is laypeople, I would definitely recommend posting over at h | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a result of a car accident?"
Yes, it is wrong, and neither phrasing seems sufficient to consistently convey your desired meaning
Speaking as a layperson, if your target is laypeople, I would definitely recommend posting over at https://english.stackexchange.com/, rather than here - your question took me a few reads to untangle what S1 & S2 intuitively mean to me vs. what you meant to say.
For the record, my interpretations of each statement:
(S1) - per 80 deaths, 1 death by car accident
(S2) - per 80 people in a car accident, 1 death
To convey your meaning, I would likely use a modified S2: "One in 80 people will die in a car accident."
This still contains some ambiguity, but keeps a similar brevity. | Is it wrong to rephrase "1 in 80 deaths is caused by a car accident" as "1 in 80 people die as a res
Yes, it is wrong, and neither phrasing seems sufficient to consistently convey your desired meaning
Speaking as a layperson, if your target is laypeople, I would definitely recommend posting over at h |
4,447 | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens? | There are 400 possibilities and 20 of them, each occurring with probability $\frac{1}{400}$, have the guess equal to the outcome. So the total probability of having the guess equal to the outcome is $20\cdot \frac{1}{400} = \frac{20}{400} = \frac{1}{20}$
$$\small{ \begin{array}{rc}
& \text{OUTCOME}\\
\begin{array}{}
\require{HTML} \style{display: inline-block; transform: rotate(270deg)}{\text{S}} \\
\require{HTML} \style{display: inline-block; transform: rotate(270deg)}{\text{S}} \\
\require{HTML} \style{display: inline-block; transform: rotate(270deg)}{\text{E}} \\
\require{HTML} \style{display: inline-block; transform: rotate(270deg)}{\text{U}} \\
\require{HTML} \style{display: inline-block; transform: rotate(270deg)}{\text{G}} \\
\end{array}
&\begin{array}{c|ccccccccccccccccccccccccccc}
&1&2&3&4&5&6&7&8&9&10&11&12&13&14&15&16&17&18&19&20 \\
\hline
1 & \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
2 & \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
3 & \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
4 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
5 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
6 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
7 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
8 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
9 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
10 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
11 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
12 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
13 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
14 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
15 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
16 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
17 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
18 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}\\
19 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}\\
20 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}\\
\end{array}\end{array}}$$
More general
If the guesses do not have equal ${1}/{20}$ probability for each number, but instead values $p_i$, then the 400 possibilities would not all have probability $(1/20)\cdot(1/20)={1}/{400}$, but instead ${p_i}/{20}$.
The concept is no different, however. The answer again boils down to the sum of the diagonal and is $\sum_{i=1}^{20} \frac{p_i}{20} = \frac{1}{20} \sum_{i=1}^{20} p_i = \frac{1}{20}$.
$$\small{ \begin{array}{rc}
& \text{OUTCOME}\\
\begin{array}{}
\require{HTML} \style{display: inline-block; transform: rotate(270deg)}{\text{S}} \\
\require{HTML} \style{display: inline-block; transform: rotate(270deg)}{\text{S}} \\
\require{HTML} \style{display: inline-block; transform: rotate(270deg)}{\text{E}} \\
\require{HTML} \style{display: inline-block; transform: rotate(270deg)}{\text{U}} \\
\require{HTML} \style{display: inline-block; transform: rotate(270deg)}{\text{G}} \\
\end{array}
&\begin{array}{c|ccccccccccccccccccccccccccc}
&1&2&3&4&5&6&7&8&9&10&11&12&13&14&15&16&17&18&19&20 \\
\hline
1& \color{red}{ \frac{p_{1}}{20}}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}\\2& \frac{p_{2}}{20}& \color{red}{ \frac{p_{2}}{20}}& \frac{p_{2}}{20}& \frac{p_{2}}{20}& \frac{p_{2}}{20}& \frac{p_{2}}{20}& \frac{p_{2}}{20}& \frac{p_{2}}{20}& \frac{p_{2}}{20}& \frac{p_{2}}{20}& \frac{p_{2}}{20}& \frac{p_{2}}{20}& \frac{p_{2}}{20}& \frac{p_{2}}{20}& \frac{p_{2}}{20}& \frac{p_{2}}{20}& \frac{p_{2}}{20}& \frac{p_{2}}{20}& \frac{p_{2}}{20}& \frac{p_{2}}{20}\\3& \frac{p_{3}}{20}& \frac{p_{3}}{20}& \color{red}{ \frac{p_{3}}{20}}& \frac{p_{3}}{20}& \frac{p_{3}}{20}& \frac{p_{3}}{20}& \frac{p_{3}}{20}& \frac{p_{3}}{20}& \frac{p_{3}}{20}& \frac{p_{3}}{20}& \frac{p_{3}}{20}& \frac{p_{3}}{20}& \frac{p_{3}}{20}& \frac{p_{3}}{20}& \frac{p_{3}}{20}& \frac{p_{3}}{20}& \frac{p_{3}}{20}& \frac{p_{3}}{20}& \frac{p_{3}}{20}& \frac{p_{3}}{20}\\4& \frac{p_{4}}{20}& \frac{p_{4}}{20}& \frac{p_{4}}{20}& \color{red}{ \frac{p_{4}}{20}}& \frac{p_{4}}{20}& \frac{p_{4}}{20}& \frac{p_{4}}{20}& \frac{p_{4}}{20}& \frac{p_{4}}{20}& \frac{p_{4}}{20}& \frac{p_{4}}{20}& \frac{p_{4}}{20}& \frac{p_{4}}{20}& \frac{p_{4}}{20}& \frac{p_{4}}{20}& \frac{p_{4}}{20}& \frac{p_{4}}{20}& \frac{p_{4}}{20}& \frac{p_{4}}{20}& \frac{p_{4}}{20}\\5& \frac{p_{5}}{20}& \frac{p_{5}}{20}& \frac{p_{5}}{20}& \frac{p_{5}}{20}& \color{red}{ \frac{p_{5}}{20}}& \frac{p_{5}}{20}& \frac{p_{5}}{20}& \frac{p_{5}}{20}& \frac{p_{5}}{20}& \frac{p_{5}}{20}& \frac{p_{5}}{20}& \frac{p_{5}}{20}& \frac{p_{5}}{20}& \frac{p_{5}}{20}& \frac{p_{5}}{20}& \frac{p_{5}}{20}& \frac{p_{5}}{20}& \frac{p_{5}}{20}& \frac{p_{5}}{20}& \frac{p_{5}}{20}\\6& \frac{p_{6}}{20}& \frac{p_{6}}{20}& \frac{p_{6}}{20}& \frac{p_{6}}{20}& \frac{p_{6}}{20}& \color{red}{ \frac{p_{6}}{20}}& \frac{p_{6}}{20}& \frac{p_{6}}{20}& \frac{p_{6}}{20}& \frac{p_{6}}{20}& \frac{p_{6}}{20}& \frac{p_{6}}{20}& \frac{p_{6}}{20}& \frac{p_{6}}{20}& \frac{p_{6}}{20}& \frac{p_{6}}{20}& \frac{p_{6}}{20}& \frac{p_{6}}{20}& \frac{p_{6}}{20}& \frac{p_{6}}{20}\\7& \frac{p_{7}}{20}& \frac{p_{7}}{20}& \frac{p_{7}}{20}& \frac{p_{7}}{20}& \frac{p_{7}}{20}& \frac{p_{7}}{20}& \color{red}{ \frac{p_{7}}{20}}& \frac{p_{7}}{20}& \frac{p_{7}}{20}& \frac{p_{7}}{20}& \frac{p_{7}}{20}& \frac{p_{7}}{20}& \frac{p_{7}}{20}& \frac{p_{7}}{20}& \frac{p_{7}}{20}& \frac{p_{7}}{20}& \frac{p_{7}}{20}& \frac{p_{7}}{20}& \frac{p_{7}}{20}& \frac{p_{7}}{20}\\8& \frac{p_{8}}{20}& \frac{p_{8}}{20}& \frac{p_{8}}{20}& \frac{p_{8}}{20}& \frac{p_{8}}{20}& \frac{p_{8}}{20}& \frac{p_{8}}{20}& \color{red}{ \frac{p_{8}}{20}}& \frac{p_{8}}{20}& \frac{p_{8}}{20}& \frac{p_{8}}{20}& \frac{p_{8}}{20}& \frac{p_{8}}{20}& \frac{p_{8}}{20}& \frac{p_{8}}{20}& \frac{p_{8}}{20}& \frac{p_{8}}{20}& \frac{p_{8}}{20}& \frac{p_{8}}{20}& \frac{p_{8}}{20}\\9& \frac{p_{9}}{20}& \frac{p_{9}}{20}& \frac{p_{9}}{20}& \frac{p_{9}}{20}& \frac{p_{9}}{20}& \frac{p_{9}}{20}& \frac{p_{9}}{20}& \frac{p_{9}}{20}& \color{red}{ \frac{p_{9}}{20}}& \frac{p_{9}}{20}& \frac{p_{9}}{20}& \frac{p_{9}}{20}& \frac{p_{9}}{20}& \frac{p_{9}}{20}& \frac{p_{9}}{20}& \frac{p_{9}}{20}& \frac{p_{9}}{20}& \frac{p_{9}}{20}& \frac{p_{9}}{20}& \frac{p_{9}}{20}\\10& \frac{p_{10}}{20}& \frac{p_{10}}{20}& \frac{p_{10}}{20}& \frac{p_{10}}{20}& \frac{p_{10}}{20}& \frac{p_{10}}{20}& \frac{p_{10}}{20}& \frac{p_{10}}{20}& 
\frac{p_{10}}{20}& \color{red}{ \frac{p_{10}}{20}}& \frac{p_{10}}{20}& \frac{p_{10}}{20}& \frac{p_{10}}{20}& \frac{p_{10}}{20}& \frac{p_{10}}{20}& \frac{p_{10}}{20}& \frac{p_{10}}{20}& \frac{p_{10}}{20}& \frac{p_{10}}{20}& \frac{p_{10}}{20}\\11& \frac{p_{11}}{20}& \frac{p_{11}}{20}& \frac{p_{11}}{20}& \frac{p_{11}}{20}& \frac{p_{11}}{20}& \frac{p_{11}}{20}& \frac{p_{11}}{20}& \frac{p_{11}}{20}& \frac{p_{11}}{20}& \frac{p_{11}}{20}& \color{red}{ \frac{p_{11}}{20}}& \frac{p_{11}}{20}& \frac{p_{11}}{20}& \frac{p_{11}}{20}& \frac{p_{11}}{20}& \frac{p_{11}}{20}& \frac{p_{11}}{20}& \frac{p_{11}}{20}& \frac{p_{11}}{20}& \frac{p_{11}}{20}\\12& \frac{p_{12}}{20}& \frac{p_{12}}{20}& \frac{p_{12}}{20}& \frac{p_{12}}{20}& \frac{p_{12}}{20}& \frac{p_{12}}{20}& \frac{p_{12}}{20}& \frac{p_{12}}{20}& \frac{p_{12}}{20}& \frac{p_{12}}{20}& \frac{p_{12}}{20}& \color{red}{ \frac{p_{12}}{20}}& \frac{p_{12}}{20}& \frac{p_{12}}{20}& \frac{p_{12}}{20}& \frac{p_{12}}{20}& \frac{p_{12}}{20}& \frac{p_{12}}{20}& \frac{p_{12}}{20}& \frac{p_{12}}{20}\\13& \frac{p_{13}}{20}& \frac{p_{13}}{20}& \frac{p_{13}}{20}& \frac{p_{13}}{20}& \frac{p_{13}}{20}& \frac{p_{13}}{20}& \frac{p_{13}}{20}& \frac{p_{13}}{20}& \frac{p_{13}}{20}& \frac{p_{13}}{20}& \frac{p_{13}}{20}& \frac{p_{13}}{20}& \color{red}{ \frac{p_{13}}{20}}& \frac{p_{13}}{20}& \frac{p_{13}}{20}& \frac{p_{13}}{20}& \frac{p_{13}}{20}& \frac{p_{13}}{20}& \frac{p_{13}}{20}& \frac{p_{13}}{20}\\14& \frac{p_{14}}{20}& \frac{p_{14}}{20}& \frac{p_{14}}{20}& \frac{p_{14}}{20}& \frac{p_{14}}{20}& \frac{p_{14}}{20}& \frac{p_{14}}{20}& \frac{p_{14}}{20}& \frac{p_{14}}{20}& \frac{p_{14}}{20}& \frac{p_{14}}{20}& \frac{p_{14}}{20}& \frac{p_{14}}{20}& \color{red}{ \frac{p_{14}}{20}}& \frac{p_{14}}{20}& \frac{p_{14}}{20}& \frac{p_{14}}{20}& \frac{p_{14}}{20}& \frac{p_{14}}{20}& \frac{p_{14}}{20}\\15& \frac{p_{15}}{20}& \frac{p_{15}}{20}& \frac{p_{15}}{20}& \frac{p_{15}}{20}& \frac{p_{15}}{20}& \frac{p_{15}}{20}& \frac{p_{15}}{20}& \frac{p_{15}}{20}& \frac{p_{15}}{20}& \frac{p_{15}}{20}& \frac{p_{15}}{20}& \frac{p_{15}}{20}& \frac{p_{15}}{20}& \frac{p_{15}}{20}& \color{red}{ \frac{p_{15}}{20}}& \frac{p_{15}}{20}& \frac{p_{15}}{20}& \frac{p_{15}}{20}& \frac{p_{15}}{20}& \frac{p_{15}}{20}\\16& \frac{p_{16}}{20}& \frac{p_{16}}{20}& \frac{p_{16}}{20}& \frac{p_{16}}{20}& \frac{p_{16}}{20}& \frac{p_{16}}{20}& \frac{p_{16}}{20}& \frac{p_{16}}{20}& \frac{p_{16}}{20}& \frac{p_{16}}{20}& \frac{p_{16}}{20}& \frac{p_{16}}{20}& \frac{p_{16}}{20}& \frac{p_{16}}{20}& \frac{p_{16}}{20}& \color{red}{ \frac{p_{16}}{20}}& \frac{p_{16}}{20}& \frac{p_{16}}{20}& \frac{p_{16}}{20}& \frac{p_{16}}{20}\\17& \frac{p_{17}}{20}& \frac{p_{17}}{20}& \frac{p_{17}}{20}& \frac{p_{17}}{20}& \frac{p_{17}}{20}& \frac{p_{17}}{20}& \frac{p_{17}}{20}& \frac{p_{17}}{20}& \frac{p_{17}}{20}& \frac{p_{17}}{20}& \frac{p_{17}}{20}& \frac{p_{17}}{20}& \frac{p_{17}}{20}& \frac{p_{17}}{20}& \frac{p_{17}}{20}& \frac{p_{17}}{20}& \color{red}{ \frac{p_{17}}{20}}& \frac{p_{17}}{20}& \frac{p_{17}}{20}& \frac{p_{17}}{20}\\18& \frac{p_{18}}{20}& \frac{p_{18}}{20}& \frac{p_{18}}{20}& \frac{p_{18}}{20}& \frac{p_{18}}{20}& \frac{p_{18}}{20}& \frac{p_{18}}{20}& \frac{p_{18}}{20}& \frac{p_{18}}{20}& \frac{p_{18}}{20}& \frac{p_{18}}{20}& \frac{p_{18}}{20}& \frac{p_{18}}{20}& \frac{p_{18}}{20}& \frac{p_{18}}{20}& \frac{p_{18}}{20}& \frac{p_{18}}{20}& \color{red}{ \frac{p_{18}}{20}}& \frac{p_{18}}{20}& \frac{p_{18}}{20}\\19& \frac{p_{19}}{20}& \frac{p_{19}}{20}& \frac{p_{19}}{20}& \frac{p_{19}}{20}& \frac{p_{19}}{20}& \frac{p_{19}}{20}& 
\frac{p_{19}}{20}& \frac{p_{19}}{20}& \frac{p_{19}}{20}& \frac{p_{19}}{20}& \frac{p_{19}}{20}& \frac{p_{19}}{20}& \frac{p_{19}}{20}& \frac{p_{19}}{20}& \frac{p_{19}}{20}& \frac{p_{19}}{20}& \frac{p_{19}}{20}& \frac{p_{19}}{20}& \color{red}{ \frac{p_{19}}{20}}& \frac{p_{19}}{20}\\20& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \color{red}{ \frac{p_{20}}{20}}\\
\end{array}\end{array}}$$
More interesting is the case when both probabilities, for the guess and for the outcome, are not uniform (not equal probabilities). For instance, we could imagine rolling an unfair d20 two times. Then the probability will be equal to the expectation $E[p_i] = \sum_{i=1}^{20} p_i^2$. This will be larger than $\frac{1}{20}$ if the $p_i$ are unequal. | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens? | There are 400 possibilities and 20 of them, each occuring with probability $\frac{1}{400}$, have the guess equal to the outcome. So the total probability of having the guess equal to the outcome is $2 | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens?
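A quick simulation sketch in R confirms both the $1/20$ figure and the unfair-die expression $\sum_i p_i^2$ (the skewed probabilities below are arbitrary):
set.seed(1)
n <- 1e5
guess   <- sample(1:20, n, replace = TRUE)  # the player's guess
outcome <- sample(1:20, n, replace = TRUE)  # the d20 roll
mean(guess == outcome)                      # close to 1/20 = 0.05
# Unfair die used for both the guess and the outcome:
p <- (1:20) / sum(1:20)                     # arbitrary unequal face probabilities
sum(p^2)                                    # theoretical match probability, > 1/20
mean(sample(1:20, n, replace = TRUE, prob = p) == sample(1:20, n, replace = TRUE, prob = p))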
There are 400 possibilities and 20 of them, each occurring with probability $\frac{1}{400}$, have the guess equal to the outcome. So the total probability of having the guess equal to the outcome is $20\cdot \frac{1}{400} = \frac{20}{400} = \frac{1}{20}$
$$\small{ \begin{array}{rc}
& \text{OUTCOME}\\
\begin{array}{}
\require{HTML} \style{display: inline-block; transform: rotate(270deg)}{\text{S}} \\
\require{HTML} \style{display: inline-block; transform: rotate(270deg)}{\text{S}} \\
\require{HTML} \style{display: inline-block; transform: rotate(270deg)}{\text{E}} \\
\require{HTML} \style{display: inline-block; transform: rotate(270deg)}{\text{U}} \\
\require{HTML} \style{display: inline-block; transform: rotate(270deg)}{\text{G}} \\
\end{array}
&\begin{array}{c|ccccccccccccccccccccccccccc}
&1&2&3&4&5&6&7&8&9&10&11&12&13&14&15&16&17&18&19&20 \\
\hline
1 & \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
2 & \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
3 & \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
4 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
5 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
6 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
7 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
8 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
9 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
10 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
11 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
12 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
13 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
14 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
15 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
16 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
17 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}\\
18 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}& \frac{1}{400}\\
19 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}& \frac{1}{400}\\
20 & \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \frac{1}{400}& \color{red}{ \frac{1}{400}}\\
\end{array}\end{array}}$$
More generally
If the guesses do not have equal probability ${1}/{20}$ for each number, but instead probabilities $p_i$, then the 400 possibilities no longer all have probability $(1/20)\cdot(1/20)={1}/{400}$; the entries in row $i$ of the table instead have probability ${p_i}/{20}$.
The concept is no different, however. The answer again boils down to the sum of the diagonal, which is $\sum_{i=1}^{20} \frac{p_i}{20} = \frac{1}{20} \sum_{i=1}^{20} p_i = \frac{1}{20}$.
$$\small{ \begin{array}{rc}
& \text{OUTCOME}\\
\begin{array}{}
\require{HTML} \style{display: inline-block; transform: rotate(270deg)}{\text{S}} \\
\require{HTML} \style{display: inline-block; transform: rotate(270deg)}{\text{S}} \\
\require{HTML} \style{display: inline-block; transform: rotate(270deg)}{\text{E}} \\
\require{HTML} \style{display: inline-block; transform: rotate(270deg)}{\text{U}} \\
\require{HTML} \style{display: inline-block; transform: rotate(270deg)}{\text{G}} \\
\end{array}
&\begin{array}{c|ccccccccccccccccccccccccccc}
&1&2&3&4&5&6&7&8&9&10&11&12&13&14&15&16&17&18&19&20 \\
\hline
1& \color{red}{ \frac{p_{1}}{20}}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}& \frac{p_{1}}{20}\\2& \frac{p_{2}}{20}& \color{red}{ \frac{p_{2}}{20}}& \frac{p_{2}}{20}& \frac{p_{2}}{20}& \frac{p_{2}}{20}& \frac{p_{2}}{20}& \frac{p_{2}}{20}& \frac{p_{2}}{20}& \frac{p_{2}}{20}& \frac{p_{2}}{20}& \frac{p_{2}}{20}& \frac{p_{2}}{20}& \frac{p_{2}}{20}& \frac{p_{2}}{20}& \frac{p_{2}}{20}& \frac{p_{2}}{20}& \frac{p_{2}}{20}& \frac{p_{2}}{20}& \frac{p_{2}}{20}& \frac{p_{2}}{20}\\3& \frac{p_{3}}{20}& \frac{p_{3}}{20}& \color{red}{ \frac{p_{3}}{20}}& \frac{p_{3}}{20}& \frac{p_{3}}{20}& \frac{p_{3}}{20}& \frac{p_{3}}{20}& \frac{p_{3}}{20}& \frac{p_{3}}{20}& \frac{p_{3}}{20}& \frac{p_{3}}{20}& \frac{p_{3}}{20}& \frac{p_{3}}{20}& \frac{p_{3}}{20}& \frac{p_{3}}{20}& \frac{p_{3}}{20}& \frac{p_{3}}{20}& \frac{p_{3}}{20}& \frac{p_{3}}{20}& \frac{p_{3}}{20}\\4& \frac{p_{4}}{20}& \frac{p_{4}}{20}& \frac{p_{4}}{20}& \color{red}{ \frac{p_{4}}{20}}& \frac{p_{4}}{20}& \frac{p_{4}}{20}& \frac{p_{4}}{20}& \frac{p_{4}}{20}& \frac{p_{4}}{20}& \frac{p_{4}}{20}& \frac{p_{4}}{20}& \frac{p_{4}}{20}& \frac{p_{4}}{20}& \frac{p_{4}}{20}& \frac{p_{4}}{20}& \frac{p_{4}}{20}& \frac{p_{4}}{20}& \frac{p_{4}}{20}& \frac{p_{4}}{20}& \frac{p_{4}}{20}\\5& \frac{p_{5}}{20}& \frac{p_{5}}{20}& \frac{p_{5}}{20}& \frac{p_{5}}{20}& \color{red}{ \frac{p_{5}}{20}}& \frac{p_{5}}{20}& \frac{p_{5}}{20}& \frac{p_{5}}{20}& \frac{p_{5}}{20}& \frac{p_{5}}{20}& \frac{p_{5}}{20}& \frac{p_{5}}{20}& \frac{p_{5}}{20}& \frac{p_{5}}{20}& \frac{p_{5}}{20}& \frac{p_{5}}{20}& \frac{p_{5}}{20}& \frac{p_{5}}{20}& \frac{p_{5}}{20}& \frac{p_{5}}{20}\\6& \frac{p_{6}}{20}& \frac{p_{6}}{20}& \frac{p_{6}}{20}& \frac{p_{6}}{20}& \frac{p_{6}}{20}& \color{red}{ \frac{p_{6}}{20}}& \frac{p_{6}}{20}& \frac{p_{6}}{20}& \frac{p_{6}}{20}& \frac{p_{6}}{20}& \frac{p_{6}}{20}& \frac{p_{6}}{20}& \frac{p_{6}}{20}& \frac{p_{6}}{20}& \frac{p_{6}}{20}& \frac{p_{6}}{20}& \frac{p_{6}}{20}& \frac{p_{6}}{20}& \frac{p_{6}}{20}& \frac{p_{6}}{20}\\7& \frac{p_{7}}{20}& \frac{p_{7}}{20}& \frac{p_{7}}{20}& \frac{p_{7}}{20}& \frac{p_{7}}{20}& \frac{p_{7}}{20}& \color{red}{ \frac{p_{7}}{20}}& \frac{p_{7}}{20}& \frac{p_{7}}{20}& \frac{p_{7}}{20}& \frac{p_{7}}{20}& \frac{p_{7}}{20}& \frac{p_{7}}{20}& \frac{p_{7}}{20}& \frac{p_{7}}{20}& \frac{p_{7}}{20}& \frac{p_{7}}{20}& \frac{p_{7}}{20}& \frac{p_{7}}{20}& \frac{p_{7}}{20}\\8& \frac{p_{8}}{20}& \frac{p_{8}}{20}& \frac{p_{8}}{20}& \frac{p_{8}}{20}& \frac{p_{8}}{20}& \frac{p_{8}}{20}& \frac{p_{8}}{20}& \color{red}{ \frac{p_{8}}{20}}& \frac{p_{8}}{20}& \frac{p_{8}}{20}& \frac{p_{8}}{20}& \frac{p_{8}}{20}& \frac{p_{8}}{20}& \frac{p_{8}}{20}& \frac{p_{8}}{20}& \frac{p_{8}}{20}& \frac{p_{8}}{20}& \frac{p_{8}}{20}& \frac{p_{8}}{20}& \frac{p_{8}}{20}\\9& \frac{p_{9}}{20}& \frac{p_{9}}{20}& \frac{p_{9}}{20}& \frac{p_{9}}{20}& \frac{p_{9}}{20}& \frac{p_{9}}{20}& \frac{p_{9}}{20}& \frac{p_{9}}{20}& \color{red}{ \frac{p_{9}}{20}}& \frac{p_{9}}{20}& \frac{p_{9}}{20}& \frac{p_{9}}{20}& \frac{p_{9}}{20}& \frac{p_{9}}{20}& \frac{p_{9}}{20}& \frac{p_{9}}{20}& \frac{p_{9}}{20}& \frac{p_{9}}{20}& \frac{p_{9}}{20}& \frac{p_{9}}{20}\\10& \frac{p_{10}}{20}& \frac{p_{10}}{20}& \frac{p_{10}}{20}& \frac{p_{10}}{20}& \frac{p_{10}}{20}& \frac{p_{10}}{20}& \frac{p_{10}}{20}& \frac{p_{10}}{20}& 
\frac{p_{10}}{20}& \color{red}{ \frac{p_{10}}{20}}& \frac{p_{10}}{20}& \frac{p_{10}}{20}& \frac{p_{10}}{20}& \frac{p_{10}}{20}& \frac{p_{10}}{20}& \frac{p_{10}}{20}& \frac{p_{10}}{20}& \frac{p_{10}}{20}& \frac{p_{10}}{20}& \frac{p_{10}}{20}\\11& \frac{p_{11}}{20}& \frac{p_{11}}{20}& \frac{p_{11}}{20}& \frac{p_{11}}{20}& \frac{p_{11}}{20}& \frac{p_{11}}{20}& \frac{p_{11}}{20}& \frac{p_{11}}{20}& \frac{p_{11}}{20}& \frac{p_{11}}{20}& \color{red}{ \frac{p_{11}}{20}}& \frac{p_{11}}{20}& \frac{p_{11}}{20}& \frac{p_{11}}{20}& \frac{p_{11}}{20}& \frac{p_{11}}{20}& \frac{p_{11}}{20}& \frac{p_{11}}{20}& \frac{p_{11}}{20}& \frac{p_{11}}{20}\\12& \frac{p_{12}}{20}& \frac{p_{12}}{20}& \frac{p_{12}}{20}& \frac{p_{12}}{20}& \frac{p_{12}}{20}& \frac{p_{12}}{20}& \frac{p_{12}}{20}& \frac{p_{12}}{20}& \frac{p_{12}}{20}& \frac{p_{12}}{20}& \frac{p_{12}}{20}& \color{red}{ \frac{p_{12}}{20}}& \frac{p_{12}}{20}& \frac{p_{12}}{20}& \frac{p_{12}}{20}& \frac{p_{12}}{20}& \frac{p_{12}}{20}& \frac{p_{12}}{20}& \frac{p_{12}}{20}& \frac{p_{12}}{20}\\13& \frac{p_{13}}{20}& \frac{p_{13}}{20}& \frac{p_{13}}{20}& \frac{p_{13}}{20}& \frac{p_{13}}{20}& \frac{p_{13}}{20}& \frac{p_{13}}{20}& \frac{p_{13}}{20}& \frac{p_{13}}{20}& \frac{p_{13}}{20}& \frac{p_{13}}{20}& \frac{p_{13}}{20}& \color{red}{ \frac{p_{13}}{20}}& \frac{p_{13}}{20}& \frac{p_{13}}{20}& \frac{p_{13}}{20}& \frac{p_{13}}{20}& \frac{p_{13}}{20}& \frac{p_{13}}{20}& \frac{p_{13}}{20}\\14& \frac{p_{14}}{20}& \frac{p_{14}}{20}& \frac{p_{14}}{20}& \frac{p_{14}}{20}& \frac{p_{14}}{20}& \frac{p_{14}}{20}& \frac{p_{14}}{20}& \frac{p_{14}}{20}& \frac{p_{14}}{20}& \frac{p_{14}}{20}& \frac{p_{14}}{20}& \frac{p_{14}}{20}& \frac{p_{14}}{20}& \color{red}{ \frac{p_{14}}{20}}& \frac{p_{14}}{20}& \frac{p_{14}}{20}& \frac{p_{14}}{20}& \frac{p_{14}}{20}& \frac{p_{14}}{20}& \frac{p_{14}}{20}\\15& \frac{p_{15}}{20}& \frac{p_{15}}{20}& \frac{p_{15}}{20}& \frac{p_{15}}{20}& \frac{p_{15}}{20}& \frac{p_{15}}{20}& \frac{p_{15}}{20}& \frac{p_{15}}{20}& \frac{p_{15}}{20}& \frac{p_{15}}{20}& \frac{p_{15}}{20}& \frac{p_{15}}{20}& \frac{p_{15}}{20}& \frac{p_{15}}{20}& \color{red}{ \frac{p_{15}}{20}}& \frac{p_{15}}{20}& \frac{p_{15}}{20}& \frac{p_{15}}{20}& \frac{p_{15}}{20}& \frac{p_{15}}{20}\\16& \frac{p_{16}}{20}& \frac{p_{16}}{20}& \frac{p_{16}}{20}& \frac{p_{16}}{20}& \frac{p_{16}}{20}& \frac{p_{16}}{20}& \frac{p_{16}}{20}& \frac{p_{16}}{20}& \frac{p_{16}}{20}& \frac{p_{16}}{20}& \frac{p_{16}}{20}& \frac{p_{16}}{20}& \frac{p_{16}}{20}& \frac{p_{16}}{20}& \frac{p_{16}}{20}& \color{red}{ \frac{p_{16}}{20}}& \frac{p_{16}}{20}& \frac{p_{16}}{20}& \frac{p_{16}}{20}& \frac{p_{16}}{20}\\17& \frac{p_{17}}{20}& \frac{p_{17}}{20}& \frac{p_{17}}{20}& \frac{p_{17}}{20}& \frac{p_{17}}{20}& \frac{p_{17}}{20}& \frac{p_{17}}{20}& \frac{p_{17}}{20}& \frac{p_{17}}{20}& \frac{p_{17}}{20}& \frac{p_{17}}{20}& \frac{p_{17}}{20}& \frac{p_{17}}{20}& \frac{p_{17}}{20}& \frac{p_{17}}{20}& \frac{p_{17}}{20}& \color{red}{ \frac{p_{17}}{20}}& \frac{p_{17}}{20}& \frac{p_{17}}{20}& \frac{p_{17}}{20}\\18& \frac{p_{18}}{20}& \frac{p_{18}}{20}& \frac{p_{18}}{20}& \frac{p_{18}}{20}& \frac{p_{18}}{20}& \frac{p_{18}}{20}& \frac{p_{18}}{20}& \frac{p_{18}}{20}& \frac{p_{18}}{20}& \frac{p_{18}}{20}& \frac{p_{18}}{20}& \frac{p_{18}}{20}& \frac{p_{18}}{20}& \frac{p_{18}}{20}& \frac{p_{18}}{20}& \frac{p_{18}}{20}& \frac{p_{18}}{20}& \color{red}{ \frac{p_{18}}{20}}& \frac{p_{18}}{20}& \frac{p_{18}}{20}\\19& \frac{p_{19}}{20}& \frac{p_{19}}{20}& \frac{p_{19}}{20}& \frac{p_{19}}{20}& \frac{p_{19}}{20}& \frac{p_{19}}{20}& 
\frac{p_{19}}{20}& \frac{p_{19}}{20}& \frac{p_{19}}{20}& \frac{p_{19}}{20}& \frac{p_{19}}{20}& \frac{p_{19}}{20}& \frac{p_{19}}{20}& \frac{p_{19}}{20}& \frac{p_{19}}{20}& \frac{p_{19}}{20}& \frac{p_{19}}{20}& \frac{p_{19}}{20}& \color{red}{ \frac{p_{19}}{20}}& \frac{p_{19}}{20}\\20& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \frac{p_{20}}{20}& \color{red}{ \frac{p_{20}}{20}}\\
\end{array}\end{array}}$$
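To see this numerically, here is a minimal R sketch in the spirit of the simulations elsewhere in this thread; the non-uniform guess weights below are invented purely for illustration.
# A biased guesser against a fair d20: the match probability is still 1/20.
# The guess weights are arbitrary illustrative values.
set.seed(1)
n <- 100000
w <- c(5, rep(1, 18), 15)                    # guesser strongly favours 1 and 20
w <- w / sum(w)                              # normalise to probabilities p_i
guess <- sample(1:20, n, replace = TRUE, prob = w)
roll  <- sample(1:20, n, replace = TRUE)     # fair die
mean(guess == roll)                          # close to 0.05
sum(w * (1/20))                              # the diagonal sum, exactly 0.05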
More interesting is the case when both probabilities, for the guess and for the outcome, are not uniform (not equal probabilities). For instance, we could imagine rolling an unfair d20 two times. Then the probability will be equal to the expectation $E[p_i] = \sum_{i=1}^{20} p_i^2$. This will be larger than $\frac{1}{20}$ if the $p_i$ are unequal. | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens?
There are 400 possibilities and 20 of them, each occuring with probability $\frac{1}{400}$, have the guess equal to the outcome. So the total probability of having the guess equal to the outcome is $2 |
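The unfair-d20 remark at the end of the answer above (guess and outcome drawn from the same non-uniform distribution) can be checked the same way; the probability vector below is made up for illustration.
# Unfair die rolled twice: P(match) = sum(p_i^2), which exceeds 1/20
# whenever the p_i are unequal. The p vector is an invented example.
p <- c(0.15, rep(0.04, 18), 0.13)
sum(p)                                   # sanity check: probabilities sum to 1
sum(p^2)                                 # exact match probability (about 0.068)
set.seed(2)
x <- sample(1:20, 100000, replace = TRUE, prob = p)
y <- sample(1:20, 100000, replace = TRUE, prob = p)
mean(x == y)                             # simulation agrees with sum(p^2)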
4,448 | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens? | Let's simulate it!
set.seed(2021)
R <- 10000
d <- 20
guess <- sample(seq(1, d, 1), R, replace = T)
roll <- sample(seq(1, d, 1), R, replace = T)
length(which(guess == roll))/R
I get that my guess matches the roll about $1/20$ of the time $(486/10000)$. If you chop off the last digit of the seed and run the code with set.seed(202), I get exactly $5\%$. With next year's seed of 2022, I get $4.99\%$.
In more mathematical terms, this is the difference between the probability of rolling a number AND guessing that number (probably not what interests you) versus rolling a number GIVEN your guess. Assuming the roll and guess are independent...
$$
P(Roll = n) = 1/20\\
P(Guess = n) = 1/20\\
P(Roll = n \text{ AND } Guess = n) = 1/400\\
P(Roll = n \vert Guess = n) = \dfrac{P(Roll = n \text{ AND } Guess = n)}{P(Guess = n)} = \dfrac{1/400}{1/20} = 1/20
$$
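The conditional statement in the last line can also be checked empirically by conditioning on one particular guess in the simulation above; the choice of 7 is arbitrary.
# Reusing the guess and roll vectors generated above.
mean(roll[guess == 7] == 7)   # P(roll matches | the guess was 7), about 0.05
mean(roll == guess)           # P(roll matches), also about 0.05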
EDIT
If you don’t have equal probability for each number, the calculation can be modified. I’ll do the case where the human does not necessarily pick each number with uniform probability. Again, assume independence of the guess and the roll.
$$
P(Roll = n) = 1/20\\
P(Guess = n) = p\\
P(Roll = n \text{ AND } Guess = n) = p/20\\
P(Roll = n \vert Guess = n) = \dfrac{P(Roll = n \text{ AND } Guess = n)}{P(Guess = n)} = \dfrac{p/20}{p} = 1/20
$$
If also the d20 is not fair:
$$
P(Roll = n) = p_1\\
P(Guess = n) = p_2\\
P(Roll = n \text{ AND } Guess = n) = p_1p_2\\
P(Roll = n \vert Guess = n) = \dfrac{P(Roll = n \text{ AND } Guess = n)}{P(Guess = n)} = \dfrac{p_1p_2}{p_2} = p_1
$$
So it only depends on the probability of rolling a particular number. | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens? | Let's simulate it!
set.seed(2021)
R <- 10000
d <- 20
guess <- sample(seq(1, d, 1), R, replace = T)
roll <- sample(seq(1, d, 1), R, replace = T)
length(which(guess == roll))/R
I get that about $1/20$ | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens?
Let's simulate it!
set.seed(2021)
R <- 10000
d <- 20
guess <- sample(seq(1, d, 1), R, replace = T)
roll <- sample(seq(1, d, 1), R, replace = T)
length(which(guess == roll))/R
I get that my guess matches the roll about $1/20$ of the time $(486/10000)$. If you chop off the last digit of the seed and run the code with set.seed(202), I get exactly $5\%$. With next year's seed of 2022, I get $4.99\%$.
In more mathematical terms, this is the difference between the probability of rolling a number AND guessing that number (probably not what interests you) versus rolling a number GIVEN your guess. Assuming the roll and guess are independent...
$$
P(Roll = n) = 1/20\\
P(Guess = n) = 1/20\\
P(Roll = n \text{ AND } Guess = n) = 1/400\\
P(Roll = n \vert Guess = n) = \dfrac{P(Roll = n \text{ AND } Guess = n)}{P(Guess = n)} = \dfrac{1/400}{1/20} = 1/20
$$
EDIT
If you don’t have equal probability for each number, the calculation can be modified. I’ll do the case where the human does not necessarily pick each number with uniform probability. Again, assume independence of the guess and the roll.
$$
P(Roll = n) = 1/20\\
P(Guess = n) = p\\
P(Roll = n \text{ AND } Guess = n) = p/20\\
P(Roll = n \vert Guess = n) = \dfrac{P(Roll = n \text{ AND } Guess = n)}{P(Guess = n)} = \dfrac{p/20}{p} = 1/20
$$
If also the d20 is not fair:
$$
P(Roll = n) = p_1\\
P(Guess = n) = p_2\\
P(Roll = n \text{ AND } Guess = n) = p_1p_2\\
P(Roll = n \vert Guess = n) = \dfrac{P(Roll = n \text{ AND } Guess = n)}{P(Guess = n)} = \dfrac{p_1p_2}{p_2} = p_1
$$
So it only depends on the probability of rolling a particular number. | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens?
Let's simulate it!
set.seed(2021)
R <- 10000
d <- 20
guess <- sample(seq(1, d, 1), R, replace = T)
roll <- sample(seq(1, d, 1), R, replace = T)
length(which(guess == roll))/R
I get that about $1/20$ |
4,449 | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens? | I don't think any of the existing answers explicitly state why the answer is 1 in 20 even if the friend is not equally likely to guess all 20 numbers (they aren't - humans are not good random number generators, and the friend might not even be trying to guess randomly).
For each possible roll $i$, let $p_i$ be the probability of your friend guessing $i$. We know that these probabilities must sum to 1:
$$\sum_{i=1}^{20}p_i = 1$$
Then the probability of the friend guessing correctly is:
$$
\sum_{i=1}^{20} P(\text{friend guesses } i)P(\text{roll }i) = \sum_{i=1}^{20}p_i \frac{1}{20} = \frac{1}{20}\sum_{i=1}^{20}p_i = \frac{1}{20}
$$ | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens? | I don't think any of the existing answers explicitly state why the answer is 1 in 20 even if the friend is not equally likely to guess all 20 numbers (they aren't - humans are not good random number g | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens?
I don't think any of the existing answers explicitly state why the answer is 1 in 20 even if the friend is not equally likely to guess all 20 numbers (they aren't - humans are not good random number generators, and the friend might not even be trying to guess randomly).
For each possible roll $i$, let $p_i$ be the probability of your friend guessing $i$. We know that these probabilities must sum to 1:
$$\sum_{i=1}^{20}p_i = 1$$
Then the probability of the friend guessing correctly is:
$$
\sum_{i=1}^{20} P(\text{friend guesses } i)P(\text{roll }i) = \sum_{i=1}^{20}p_i \frac{1}{20} = \frac{1}{20}\sum_{i=1}^{20}p_i = \frac{1}{20}
$$ | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens?
I don't think any of the existing answers explicitly state why the answer is 1 in 20 even if the friend is not equally likely to guess all 20 numbers (they aren't - humans are not good random number g |
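One way to see the force of this argument is that even a completely non-random guesser satisfies it, since the $p_i$ only need to sum to 1. A small R sketch (the guessing rules here are invented for illustration):
# A guesser who always says 20 is still right about 1/20 of the time,
# and the weighted sum above equals 1/20 for any guessing distribution p.
set.seed(3)
roll <- sample(1:20, 100000, replace = TRUE)
mean(roll == 20)                  # deterministic guesser: about 0.05
p <- runif(20); p <- p / sum(p)   # an arbitrary guessing distribution
sum(p * (1/20))                   # exactly 1/20, whatever p is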
4,450 | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens? | Your friend is confusing the situation where both players roll the same specific number (which would give you 1/400) vs the situation you are in where they have to roll the same number but it could be any number (1/20). I suppose your confusion is that you have to roll the specific number your friend guessed but, which makes it seem like the first situation I described, however the difference is that your guess could have been any number. | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens? | Your friend is confusing the situation where both players roll the same specific number (which would give you 1/400) vs the situation you are in where they have to roll the same number but it could be | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens?
Your friend is confusing the situation where both players roll the same specific number (which would give you 1/400) with the situation you are in, where the guess has to match the roll but the number itself could be anything (1/20). I suppose your confusion is that you have to roll the specific number your friend guessed, which makes it seem like the first situation I described; however, the difference is that the guess could have been any number. | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens?
Your friend is confusing the situation where both players roll the same specific number (which would give you 1/400) vs the situation you are in where they have to roll the same number but it could be |
4,451 | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens? | Yes for any specific number $1,\ldots,20$ the probability that your friend selects that number and the dice rolls it is 1/400, but you must marginalize over all possible options:
If $X\sim \text{Unif}(1,\ldots,20)$ and $Y\sim \text{Unif}(1,\ldots,20)$, let $Z=X-Y$; then
$$
\begin{align*}
\text{P}(\text{predict roll}) &= \text{P}(Z=0)\\
&= \sum_{y=1}^{20} P(X=0+y)P(Y=y) &&\text{convolution formula for sums of random variables}\\
&= \sum_{y=1}^{20} \tfrac{1}{20}\tfrac{1}{20}\\
&= \tfrac{1}{20}
\end{align*}
$$
And by simulation
tmp <- data.frame(x=sample(1:20, size=100000, replace=TRUE ), y=sample(1:20, size=100000, replace=TRUE ))
> mean(tmp$x == tmp$y)
[1] 0.0497 | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens? | Yes for any specific number $1,\ldots,20$ the probability that your friend selects that number and the dice rolls it is 1/400, but you must marginalize over all possible options:
If $X\sim \text{Unif} | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens?
Yes, for any specific number $1,\ldots,20$ the probability that your friend selects that number and the die rolls it is 1/400, but you must marginalize over all possible options:
If $X\sim \text{Unif}(1,\ldots,20)$ and $Y\sim \text{Unif}(1,\ldots,20)$, let $Z=X-Y$; then
$$
\begin{align*}
\text{P}(\text{predict roll}) &= \text{P}(Z=0)\\
&= \sum_{y=1}^{20} P(X=0+y)P(Y=y) &&\text{convolution formula for sums of random variables}\\
&= \sum_{y=1}^{20} \tfrac{1}{20}\tfrac{1}{20}\\
&= \tfrac{1}{20}
\end{align*}
$$
And by simulation
tmp <- data.frame(x=sample(1:20, size=100000, replace=TRUE ), y=sample(1:20, size=100000, replace=TRUE ))
> mean(tmp$x == tmp$y)
[1] 0.0497 | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens?
Yes for any specific number $1,\ldots,20$ the probability that your friend selects that number and the dice rolls it is 1/400, but you must marginalize over all possible options:
If $X\sim \text{Unif} |
4,452 | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens? | In more layman's terms, I would describe it like this:
Let's say the friend guesses 20--take that as a given. Now we roll the die, what's the chance that a d20 roll is 20? 1/20 = 5%.
Okay, now let's say the friend guesses 19. In this case, what's the chance that a d20 roll is 19? 1/20 = 5%.
What if the friend guesses 18? 1/20 = 5%.
Et cetera.
In every case, from guess = 1 up to guess = 20, the probability of the die roll equaling the guess is 1/20 = 5%. The guess doesn't matter at all (as long as it falls between 1 and 20, clearly a guess of 42 has a 0% chance of being correct). Whatever the guess, whether the process for generating the guess is random, bad human-approximation of random, fixed, or anything else, as long as the guess is between 1 and 20, the probability of the roll matching the guess is 1/20.
Mathematically, this is shown rigorously in @fblundun's answer - I'm trying to explain that math in an accessible way. | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens? | In more layman's terms, I would describe it like this:
Let's say the friend guesses 20--take that as a given. Now we roll the die, what's the chance that a d20 roll is 20? 1/20 = 5%.
Okay, now let's s | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens?
In more layman's terms, I would describe it like this:
Let's say the friend guesses 20--take that as a given. Now we roll the die, what's the chance that a d20 roll is 20? 1/20 = 5%.
Okay, now let's say the friend guesses 19. In this case, what's the chance that a d20 roll is 19? 1/20 = 5%.
What if the friend guesses 18? 1/20 = 5%.
Et cetera.
In every case, from guess = 1 up to guess = 20, the probability of the die roll equaling the guess is 1/20 = 5%. The guess doesn't matter at all (as long as it falls between 1 and 20, clearly a guess of 42 has a 0% chance of being correct). Whatever the guess, whether the process for generating the guess is random, bad human-approximation of random, fixed, or anything else, as long as the guess is between 1 and 20, the probability of the roll matching the guess is 1/20.
Mathematically, this is shown rigorously in @fblundun's answer - I'm trying to explain that math in an accessible way. | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens?
In more layman's terms, I would describe it like this:
Let's say the friend guesses 20--take that as a given. Now we roll the die, what's the chance that a d20 roll is 20? 1/20 = 5%.
Okay, now let's s |
4,453 | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens? | Instead of a die roll, let's look at a coin flip. I guess whether the coin will land on heads or tails, and then you flip the coin.
There are four possible guess+outcome combinations:
I guess heads, and the coin lands on heads.
I guess heads, and the coin lands on tails.
I guess tails, and the coin lands on heads.
I guess tails, and the coin lands on tails.
Assuming that my guesses themselves are truly random—i.e. I am no more likely to guess any particular option than I am to guess any other*—then the probability that any particular guess+outcome combination will occur is (1/2)(1/2) = 1/4.
Now, any particular guess+outcome combination can be classified as either a "correct guess", meaning that the guess and the outcome match, or an "incorrect guess", meaning they do not match. The probability that a correct-guess combination will occur is computed by looking at each possible correct-guess combination, finding the probability that that particular combination will occur, and then summing those probabilities together. So in this case, the probability of a correct guess is (1/4)+(1/4) = 1/2.
This makes intuitive sense. We all know that our chances of guessing a coin flip are 50/50; if they were 25/75, we would never agree to a coin flip in the first place.
Your friend's die roll may be based on 1-in-20 odds instead of 1-in-2 odds, but the same logic applies. If your friend guesses randomly,* the probability of any particular guess+outcome combination occurring is (1/20)(1/20) = 1/400. But since there are 20 possible correct-guess combinations (guess 1 + outcome 1, guess 2 + outcome 2, ..., guess 20 + outcome 20), the probability of a correct guess is (1/400)+(1/400)+...+(1/400) = 20/400 = 1/20.
*In real life, human beings do not guess randomly. When you ask people to pick a number between 1 and 10, they are more likely to pick 7 than 6 or 8. That complicates the math for determining the probability of any particular guess+outcome combination. But as long as the outcome (coin/die/whatever) remains random, the probability of a correct guess will not change, because the higher probability of certain guesses will be offset by the lower probability of the other guesses. fblundun's answer does a better job of describing this. | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens? | Instead of a die roll, let's look at a coin flip. I guess whether the coin will land on heads or tails, and then you flip the coin.
There are four possible guess+outcome combinations:
I guess heads, | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens?
Instead of a die roll, let's look at a coin flip. I guess whether the coin will land on heads or tails, and then you flip the coin.
There are four possible guess+outcome combinations:
I guess heads, and the coin lands on heads.
I guess heads, and the coin lands on tails.
I guess tails, and the coin lands on heads.
I guess tails, and the coin lands on tails.
Assuming that my guesses themselves are truly random—i.e. I am no more likely to guess any particular option than I am to guess any other*—then the probability that any particular guess+outcome combination will occur is (1/2)(1/2) = 1/4.
Now, any particular guess+outcome combination can be classified as either a "correct guess", meaning that the guess and the outcome match, or an "incorrect guess", meaning they do not match. The probability that a correct-guess combination will occur is computed by looking at each possible correct-guess combination, finding the probability that that particular combination will occur, and then summing those probabilities together. So in this case, the probability of a correct guess is (1/4)+(1/4) = 1/2.
This makes intuitive sense. We all know that our chances of guessing a coin flip are 50/50; if they were 25/75, we would never agree to a coin flip in the first place.
Your friend's die roll may be based on 1-in-20 odds instead of 1-in-2 odds, but the same logic applies. If your friend guesses randomly,* the probability of any particular guess+outcome combination occurring is (1/20)(1/20) = 1/400. But since there are 20 possible correct-guess combinations (guess 1 + outcome 1, guess 2 + outcome 2, ..., guess 20 + outcome 20), the probability of a correct guess is (1/400)+(1/400)+...+(1/400) = 20/400 = 1/20.
*In real life, human beings do not guess randomly. When you ask people to pick a number between 1 and 10, they are more likely to pick 7 than 6 or 8. That complicates the math for determining the probability of any particular guess+outcome combination. But as long as the outcome (coin/die/whatever) remains random, the probability of a correct guess will not change, because the higher probability of certain guesses will be offset by the lower probability of the other guesses. fblundun's answer does a better job of describing this. | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens?
Instead of a die roll, let's look at a coin flip. I guess whether the coin will land on heads or tails, and then you flip the coin.
There are four possible guess+outcome combinations:
I guess heads, |
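The counting argument in this answer can also be written out directly by enumerating all guess+outcome combinations; a short R sketch, assuming both guess and roll are uniform:
# Enumerate the 400 guess/outcome pairs and add up the "correct guess" ones.
grid <- expand.grid(guess = 1:20, outcome = 1:20)
grid$p <- 1 / nrow(grid)                      # 1/400 each under uniformity
sum(grid$p[grid$guess == grid$outcome])       # 20 * (1/400) = 0.05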
4,454 | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens? | Your other friend is arguing that:
Guessing the roll; and
Rolling the die.
Are independent events. But that's not what you are measuring. You are measuring:
Only the matching (whether the guess and the roll have the same value).
There is one guess and 20 possible roll outcomes, so the chance is 1/20. | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens? | Your other friend is arguing that:
Guessing the roll; and
Rolling the roll.
Are independent events. But that's not what you are measuring. You a measuring:
Only the matching (of guest and roll ha | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens?
Your other friend is arguing that:
Guessing the roll; and
Rolling the die.
Are independent events. But that's not what you are measuring. You are measuring:
Only the matching (whether the guess and the roll have the same value).
There is one guess and 20 possible roll outcomes, so the chance is 1/20. | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens?
Your another friend is arguing that:
Guessing the roll; and
Rolling the roll.
Are independent events. But that's not what you are measuring. You a measuring:
Only the matching (of guest and roll ha |
4,455 | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens? | The probability of your friend correctly predicting the outcome of the roll was 1 in 20.
The probability of your friend correctly predicting that that specific number would be the outcome of the roll was 1 in 400.
Let's say the friend predicted it would be a 20. If you'd have been equally impressed if he'd predicted it would be a 4, then you're in the first situation (1/20). If you don't care about a correct prediction unless it was a 20, you're in the second situation (1/400).
Put another way, suppose you were watching your friend from behind a glass window, and you knew he was going to try to call the roll. If you said, "he will correctly predict the outcome of the roll" you'd have a 1 in 20 chance of being right. If you said, "he will correctly predict that he will roll a perfect 20" you'd have a 1 in 400 chance of being right. | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens? | The probability of your friend correctly predicting the outcome of the roll was 1 in 20.
The probability of your friend correctly predicting that that specific number would be the outcome of the roll | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens?
The probability of your friend correctly predicting the outcome of the roll was 1 in 20.
The probability of your friend correctly predicting that that specific number would be the outcome of the roll was 1 in 400.
Let's say the friend predicted it would be a 20. If you'd have been equally impressed if he'd predicted it would be a 4, then you're in the first situation (1/20). If you don't care about a correct prediction unless it was a 20, you're in the second situation (1/400).
Put another way, suppose you were watching your friend from behind a glass window, and you knew he was going to try to call the roll. If you said, "he will correctly predict the outcome of the roll" you'd have a 1 in 20 chance of being right. If you said, "he will correctly predict that he will roll a perfect 20" you'd have a 1 in 400 chance of being right. | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens?
The probability of your friend correctly predicting the outcome of the roll was 1 in 20.
The probability of your friend correctly predicting that that specific number would be the outcome of the roll |
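The two statements above can be told apart in a few lines of R; this sketch just restates the distinction with a simulation:
# "Correctly predicts the outcome" vs "correctly predicts a specific 20".
set.seed(4)
n <- 100000
guess <- sample(1:20, n, replace = TRUE)
roll  <- sample(1:20, n, replace = TRUE)
mean(guess == roll)               # any correct call: about 1/20
mean(guess == 20 & roll == 20)    # calling and rolling a 20: about 1/400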
4,456 | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens? | What is the chance a D20 lands on 20? 5%.
What is the chance a D20 lands on 7? 5%.
What is the chance a D20 lands on [whatever number your friend named]? 5%.
No matter how you select a specific number from 1-20, a D20 has a 5% chance of matching it. Any particular number is always 5% likely, regardless of whether your friend chooses that number or not. | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens? | What is the chance a D20 lands on 20? 5%.
What is the chance a D20 lands on 7? 5%.
What is the chance a D20 lands on [whatever number your friend named]? 5%.
No matter how you select a specific number | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens?
What is the chance a D20 lands on 20? 5%.
What is the chance a D20 lands on 7? 5%.
What is the chance a D20 lands on [whatever number your friend named]? 5%.
No matter how you select a specific number from 1-20, a D20 has a 5% chance of matching it. Any particular number is always 5% likely, regardless of whether your friend chooses that number or not. | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens?
What is the chance a D20 lands on 20? 5%.
What is the chance a D20 lands on 7? 5%.
What is the chance a D20 lands on [whatever number your friend named]? 5%.
No matter how you select a specific number |
4,457 | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens? | the probability of him randomly guessing a number and then rolling it
were both 1 in 20 so the compound probability is 1 in 400.
The probability of your friend guessing a number is not 1/20; the probability is 1, unless there is a chance he would fail to guess anything or would guess something not on the D20.
So the formula for your friend guessing the correct outcome of the roll is:
1 * 1/20
or 1/20
Now, the odds of you guessing what your friend is going to guess and then that guess being correct, that would be 1/400. But that doesn't appear to be the situation. | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens? | the probability of him randomly guessing a number and then rolling it
were both 1 in 20 so the compound probability is 1 in 400.
The probability of your friend guessing a number is not 1/20, the prob | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens?
the probability of him randomly guessing a number and then rolling it
were both 1 in 20 so the compound probability is 1 in 400.
The probability of your friend guessing a number is not 1/20; the probability is 1, unless there is a chance he would fail to guess anything or would guess something not on the D20.
So the formula for your friend guessing the correct outcome of the roll is:
1 * 1/20
or 1/20
Now, the odds of you guessing what your friend is going to guess and then that guess being correct, that would be 1/400. But that doesn't appear to be the situation. | Is there a 1 in 20 or 1 in 400 chance of guessing the outcome of a d20 roll before it happens?
the probability of him randomly guessing a number and then rolling it
were both 1 in 20 so the compound probability is 1 in 400.
The probability of your friend guessing a number is not 1/20, the prob |
4,458 | What is the difference between the forward-backward and Viterbi algorithms? | A bit of background first; maybe it clears things up a bit.
When talking about HMMs (Hidden Markov Models) there are generally 3 problems to be considered:
Evaluation problem
Evaluation problem answers the question: what is the probability that a particular sequence of symbols is produced by a particular model?
For evaluation we use two algorithms: the forward algorithm or the backward algorithm (DO NOT confuse them with the forward-backward algorithm).
Decoding problem
Decoding problem answers the question: Given a sequence of symbols (your observations) and a model, what is the most likely sequence of states that produced the sequence.
For decoding we use the Viterbi algorithm.
Training problem
Training problem answers the question: Given a model structure and a set of sequences, find the model that best fits the data.
For this problem we can use the following 3 algorithms:
MLE (maximum likelihood estimation)
Viterbi training (DO NOT confuse with Viterbi decoding)
Baum Welch = forward-backward algorithm
To sum it up, you use the Viterbi algorithm for the decoding problem and Baum Welch/Forward-backward when you train your model on a set of sequences.
Baum Welch works in the following way.
For each sequence in the training set of sequences.
Calculate forward probabilities with the forward algorithm
Calculate backward probabilities with the backward algorithm
Calculate the contributions of the current sequence to the transitions of the model, calculate the contributions of the current sequence to the emission probabilities of the model.
Calculate the new model parameters (start probabilities, transition probabilities, emission probabilities)
Calculate the new log likelihood of the model
Stop when the change in log likelihood is smaller than a given threshold or when a maximum number of iterations is passed.
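As a hedged illustration of how these pieces fit together in practice, here is a minimal R sketch using the CRAN HMM package; the two-state model and the observation sequence are arbitrary toy choices, not part of the original answer.
library(HMM)
# Toy model: states A/B, symbols x/y (all numbers below are invented).
hmm <- initHMM(States  = c("A", "B"),
               Symbols = c("x", "y"),
               transProbs    = matrix(c(0.8, 0.3, 0.2, 0.7), 2),
               emissionProbs = matrix(c(0.9, 0.2, 0.1, 0.8), 2))
obs <- c("x", "x", "y", "x", "y", "y")
forward(hmm, obs)              # evaluation: log forward probabilities (alpha)
backward(hmm, obs)             # evaluation: log backward probabilities (beta)
posterior(hmm, obs)            # smoothing: P(state at t | all observations)
viterbi(hmm, obs)              # decoding: most likely state sequence
baumWelch(hmm, obs, maxIterations = 50)$hmm   # training: re-estimated parameters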
If you need a full description of the equations for Viterbi decoding and the training algorithm let me know and I can point you in the right direction. | What is the difference between the forward-backward and Viterbi algorithms? | A bit of background first maybe it clears things up a bit.
When talking about HMMs (Hidden Markov Models) there are generally 3 problems to be considered:
Evaluation problem
Evaluation problem answe | What is the difference between the forward-backward and Viterbi algorithms?
A bit of background first; maybe it clears things up a bit.
When talking about HMMs (Hidden Markov Models) there are generally 3 problems to be considered:
Evaluation problem
Evaluation problem answers the question: what is the probability that a particular sequence of symbols is produced by a particular model?
For evaluation we use two algorithms: the forward algorithm or the backward algorithm (DO NOT confuse them with the forward-backward algorithm).
Decoding problem
Decoding problem answers the question: Given a sequence of symbols (your observations) and a model, what is the most likely sequence of states that produced the sequence.
For decoding we use the Viterbi algorithm.
Training problem
Training problem answers the question: Given a model structure and a set of sequences, find the model that best fits the data.
For this problem we can use the following 3 algorithms:
MLE (maximum likelihood estimation)
Viterbi training (DO NOT confuse with Viterbi decoding)
Baum Welch = forward-backward algorithm
To sum it up, you use the Viterbi algorithm for the decoding problem and Baum Welch/Forward-backward when you train your model on a set of sequences.
Baum Welch works in the following way.
For each sequence in the training set of sequences.
Calculate forward probabilities with the forward algorithm
Calculate backward probabilities with the backward algorithm
Calculate the contributions of the current sequence to the transitions of the model, calculate the contributions of the current sequence to the emission probabilities of the model.
Calculate the new model parameters (start probabilities, transition probabilities, emission probabilities)
Calculate the new log likelihood of the model
Stop when the change in log likelihood is smaller than a given threshold or when a maximum number of iterations is passed.
If you need a full description of the equations for Viterbi decoding and the training algorithm let me know and I can point you in the right direction. | What is the difference between the forward-backward and Viterbi algorithms?
A bit of background first maybe it clears things up a bit.
When talking about HMMs (Hidden Markov Models) there are generally 3 problems to be considered:
Evaluation problem
Evaluation problem answe |
4,459 | What is the difference between the forward-backward and Viterbi algorithms? | Forward-Backward gives marginal probability for each individual state, Viterbi gives probability of the most likely sequence of states. For instance if your HMM task is to predict sunny vs. rainy weather for each day, Forward Backward would tell you the probability of it being "sunny" for each day, Viterbi would give the most likely sequence of sunny/rainy days, and the probability of this sequence. | What is the difference between the forward-backward and Viterbi algorithms? | Forward-Backward gives marginal probability for each individual state, Viterbi gives probability of the most likely sequence of states. For instance if your HMM task is to predict sunny vs. rainy weat | What is the difference between the forward-backward and Viterbi algorithms?
Forward-Backward gives marginal probability for each individual state, Viterbi gives probability of the most likely sequence of states. For instance if your HMM task is to predict sunny vs. rainy weather for each day, Forward Backward would tell you the probability of it being "sunny" for each day, Viterbi would give the most likely sequence of sunny/rainy days, and the probability of this sequence. | What is the difference between the forward-backward and Viterbi algorithms?
Forward-Backward gives marginal probability for each individual state, Viterbi gives probability of the most likely sequence of states. For instance if your HMM task is to predict sunny vs. rainy weat |
4,460 | What is the difference between the forward-backward and Viterbi algorithms? | I find the following two slides from {2} really useful for situating the forward-backward and Viterbi algorithms among the other typical algorithms used with HMMs:
Notes:
$x$ is the observed emission(s), $\pi$ are the parameters of the HMM.
path = a sequence of hidden states
decoding = inference
learning = training = parameter estimation
Some papers (e.g., {1}) claim that Baum–Welch is the same as forward–backward algorithm, but I agree with Masterfool and Wikipedia: Baum–Welch is an expectation-maximization algorithm that uses the forward–backward algorithm. The two illustrations also distinguish Baum–Welch from the forward–backward algorithm.
References:
{1} Lember, Jüri, and Alexey Koloydenko. "The adjusted Viterbi training for hidden Markov models." Bernoulli 14, no. 1 (2008): 180-206.
{2} 6.047/6.878 Computational Biology: Genomes, Networks, Evolution (Fall 2012) Lecture 07 - HMMs II (2012-09-29) http://stellar.mit.edu/S/course/6/fa12/6.047/courseMaterial/topics/topic2/lectureNotes/Lecture07_HMMsIIb_6up/Lecture07_HMMsIIb_6up.pdf (Manolis Kellis): | What is the difference between the forward-backward and Viterbi algorithms? | I find these two following slides from {2} to be really good to situate the forward-backward and Viterbi algorithms amongst all other typical algorithms used with HMM:
Notes:
$x$ is the observed em | What is the difference between the forward-backward and Viterbi algorithms?
I find the following two slides from {2} really useful for situating the forward-backward and Viterbi algorithms among the other typical algorithms used with HMMs:
Notes:
$x$ is the observed emission(s), $\pi$ are the parameters of the HMM.
path = a sequence of hidden states
decoding = inference
learning = training = parameter estimation
Some papers (e.g., {1}) claim that Baum–Welch is the same as forward–backward algorithm, but I agree with Masterfool and Wikipedia: Baum–Welch is an expectation-maximization algorithm that uses the forward–backward algorithm. The two illustrations also distinguish Baum–Welch from the forward–backward algorithm.
References:
{1} Lember, Jüri, and Alexey Koloydenko. "The adjusted Viterbi training for hidden Markov models." Bernoulli 14, no. 1 (2008): 180-206.
{2} 6.047/6.878 Computational Biology: Genomes, Networks, Evolution (Fall 2012) Lecture 07 - HMMs II (2012-09-29) http://stellar.mit.edu/S/course/6/fa12/6.047/courseMaterial/topics/topic2/lectureNotes/Lecture07_HMMsIIb_6up/Lecture07_HMMsIIb_6up.pdf (Manolis Kellis): | What is the difference between the forward-backward and Viterbi algorithms?
I find these two following slides from {2} to be really good to situate the forward-backward and Viterbi algorithms amongst all other typical algorithms used with HMM:
Notes:
$x$ is the observed em |
4,461 | What is the difference between the forward-backward and Viterbi algorithms? | Morat's answer is false on one point: Baum-Welch is an Expectation-Maximization algorithm, used to train an HMM's parameters. It uses the forward-backward algorithm during each iteration. The forward-backward algorithm really is just a combination of the forward and backward algorithms: one forward pass, one backward pass. On its own, the forward-backward algorithm is not used for training an HMM's parameters, but only for smoothing: computing the marginal likelihoods of a sequence of states.
https://en.wikipedia.org/wiki/Forward%E2%80%93backward_algorithm
https://en.wikipedia.org/wiki/Baum%E2%80%93Welch_algorithm | What is the difference between the forward-backward and Viterbi algorithms? | Morat's answer is false on one point: Baum-Welch is an Expectation-Maximization algorithm, used to train an HMM's parameters. It uses the forward-backward algorithm during each iteration. The forward- | What is the difference between the forward-backward and Viterbi algorithms?
Morat's answer is false on one point: Baum-Welch is an Expectation-Maximization algorithm, used to train an HMM's parameters. It uses the forward-backward algorithm during each iteration. The forward-backward algorithm really is just a combination of the forward and backward algorithms: one forward pass, one backward pass. On its own, the forward-backward algorithm is not used for training an HMM's parameters, but only for smoothing: computing the marginal likelihoods of a sequence of states.
https://en.wikipedia.org/wiki/Forward%E2%80%93backward_algorithm
https://en.wikipedia.org/wiki/Baum%E2%80%93Welch_algorithm | What is the difference between the forward-backward and Viterbi algorithms?
Morat's answer is false on one point: Baum-Welch is an Expectation-Maximization algorithm, used to train an HMM's parameters. It uses the forward-backward algorithm during each iteration. The forward- |
4,462 | What is the difference between the forward-backward and Viterbi algorithms? | @Yaroslav Bulatov had a precise answer. I would add an example to illustrate the difference between the forward-backward and Viterbi algorithms.
Suppose we have this HMM (from the Wikipedia HMM page). Note that the model is already given, so there is no learning-from-data task here.
Suppose our data is a length-4 sequence: (Walk, Shop, Walk, Clean). The two algorithms will give different things.
The forward-backward algorithm will give the probability of each hidden state. Here is an example. Note that each column in the table sums to $1$.
The Viterbi algorithm will give the most probable sequence of hidden states. Here is an example. Note that there is also a probability associated with this hidden state sequence, and this sequence has the maximum probability over all $2^4=16$ possible sequences (from all Sunny to all Rainy).
Here is some R code for the demo
library(HMM)
# in education setting,
# hidden state: Rainy and Sunny
# observation: Walk, Shop, Clean
# state transition
P <- as.matrix(rbind(c(0.7,0.3),
c(0.4,0.6)))
# emission prob
R <- as.matrix(rbind(c(0.1, 0.4, 0.5),
c(0.6,0.3, 0.1)))
hmm = initHMM(States=c("Rainy","Sunny"),
Symbols=c("Walk","Shop", "Clean"),
startProbs=c(0.6,0.4),
transProbs=P,
emissionProbs=R)
hmm
obs=c("Walk","Shop","Walk", "Clean")
print(posterior(hmm,obs))
print(viterbi(hmm, obs)) | What is the difference between the forward-backward and Viterbi algorithms? | @Yaroslav Bulatov had a precise answer. I would add one example of it to tell the differences between forward-backward and Viterbi algorithms.
Suppose we have an this HMM (from Wikipedia HMM page). No | What is the difference between the forward-backward and Viterbi algorithms?
@Yaroslav Bulatov had a precise answer. I would add an example to illustrate the difference between the forward-backward and Viterbi algorithms.
Suppose we have this HMM (from the Wikipedia HMM page). Note that the model is already given, so there is no learning-from-data task here.
Suppose our data is a length-4 sequence: (Walk, Shop, Walk, Clean). The two algorithms will give different things.
The forward-backward algorithm will give the probability of each hidden state. Here is an example. Note that each column in the table sums to $1$.
The Viterbi algorithm will give the most probable sequence of hidden states. Here is an example. Note that there is also a probability associated with this hidden state sequence, and this sequence has the maximum probability over all $2^4=16$ possible sequences (from all Sunny to all Rainy).
Here is some R code for the demo
library(HMM)
# in education setting,
# hidden state: Rainy and Sunny
# observation: Walk, Shop, Clean
# state transition
P <- as.matrix(rbind(c(0.7,0.3),
c(0.4,0.6)))
# emission prob
R <- as.matrix(rbind(c(0.1, 0.4, 0.5),
c(0.6,0.3, 0.1)))
hmm = initHMM(States=c("Rainy","Sunny"),
Symbols=c("Walk","Shop", "Clean"),
startProbs=c(0.6,0.4),
transProbs=P,
emissionProbs=R)
hmm
obs=c("Walk","Shop","Walk", "Clean")
print(posterior(hmm,obs))
print(viterbi(hmm, obs)) | What is the difference between the forward-backward and Viterbi algorithms?
@Yaroslav Bulatov had a precise answer. I would add one example of it to tell the differences between forward-backward and Viterbi algorithms.
Suppose we have an this HMM (from Wikipedia HMM page). No |
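Continuing the demo above (reusing the hmm and obs objects defined there), the two outputs can be placed side by side; the per-position argmax of the posteriors need not coincide with the Viterbi path in general.
post <- posterior(hmm, obs)              # forward-backward marginals
hmm$States[apply(post, 2, which.max)]    # most probable state at each position
viterbi(hmm, obs)                        # most probable sequence as a whole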
4,463 | Linear kernel and non-linear kernel for support vector machine? | Usually, the decision is whether to use linear or an RBF (aka Gaussian) kernel. There are two main factors to consider:
Solving the optimisation problem for a linear kernel is much faster, see e.g. LIBLINEAR.
Typically, the best possible predictive performance is better for a nonlinear kernel (or at least as good as the linear one).
It's been shown that the linear kernel is a degenerate version of RBF, hence the linear kernel is never more accurate than a properly tuned RBF kernel. Quoting the abstract from the paper I linked:
The analysis also indicates that if complete model selection using the Gaussian kernel has been conducted, there is no need to consider linear SVM.
A basic rule of thumb is briefly covered in NTU's practical guide to support vector classification (Appendix C).
If the number of features is large, one may not need to map data to a higher dimensional space. That is, the nonlinear mapping does not improve the performance.
Using the linear kernel is good enough, and one only searches for the parameter C.
Your conclusion is more or less right but you have the argument backwards. In practice, the linear kernel tends to perform very well when the number of features is large (e.g. there is no need to map to an even higher dimensional feature space). A typical example of this is document classification, with thousands of dimensions in input space.
In those cases, nonlinear kernels are not necessarily significantly more accurate than the linear one. This basically means nonlinear kernels lose their appeal: they require way more resources to train with little to no gain in predictive performance, so why bother.
TL;DR
Always try linear first since it is way faster to train (AND test). If the accuracy suffices, pat yourself on the back for a job well done and move on to the next problem. If not, try a nonlinear kernel. | Linear kernel and non-linear kernel for support vector machine? | Usually, the decision is whether to use linear or an RBF (aka Gaussian) kernel. There are two main factors to consider:
Solving the optimisation problem for a linear kernel is much faster, see e.g. L | Linear kernel and non-linear kernel for support vector machine?
Usually, the decision is whether to use linear or an RBF (aka Gaussian) kernel. There are two main factors to consider:
Solving the optimisation problem for a linear kernel is much faster, see e.g. LIBLINEAR.
Typically, the best possible predictive performance is better for a nonlinear kernel (or at least as good as the linear one).
It's been shown that the linear kernel is a degenerate version of RBF, hence the linear kernel is never more accurate than a properly tuned RBF kernel. Quoting the abstract from the paper I linked:
The analysis also indicates that if complete model selection using the Gaussian kernel has been conducted, there is no need to consider linear SVM.
A basic rule of thumb is briefly covered in NTU's practical guide to support vector classification (Appendix C).
If the number of features is large, one may not need to map data to a higher dimensional space. That is, the nonlinear mapping does not improve the performance.
Using the linear kernel is good enough, and one only searches for the parameter C.
Your conclusion is more or less right but you have the argument backwards. In practice, the linear kernel tends to perform very well when the number of features is large (e.g. there is no need to map to an even higher dimensional feature space). A typical example of this is document classification, with thousands of dimensions in input space.
In those cases, nonlinear kernels are not necessarily significantly more accurate than the linear one. This basically means nonlinear kernels lose their appeal: they require way more resources to train with little to no gain in predictive performance, so why bother.
TL;DR
Always try linear first since it is way faster to train (AND test). If the accuracy suffices, pat yourself on the back for a job well done and move on to the next problem. If not, try a nonlinear kernel. | Linear kernel and non-linear kernel for support vector machine?
Usually, the decision is whether to use linear or an RBF (aka Gaussian) kernel. There are two main factors to consider:
Solving the optimisation problem for a linear kernel is much faster, see e.g. L |
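As a minimal sketch of the advice in this answer (try a linear kernel first, then an RBF kernel), here is an R illustration using the e1071 package; the simulated data and the fixed cost/gamma values are only for illustration and would normally be tuned, e.g. with tune.svm().
# Sketch: linear vs. RBF kernel, compared by 10-fold cross-validated accuracy (e1071).
library(e1071)
set.seed(1)
n <- 200; p <- 10
X <- matrix(rnorm(n * p), n, p)
y <- factor(ifelse(X[, 1] + X[, 2]^2 + rnorm(n, sd = 0.5) > 1, "pos", "neg"))
dat <- data.frame(y = y, X)
linFit <- svm(y ~ ., data = dat, kernel = "linear", cost = 1, cross = 10)
rbfFit <- svm(y ~ ., data = dat, kernel = "radial", cost = 1, gamma = 0.1, cross = 10)
linFit$tot.accuracy   # cross-validated accuracy with the linear kernel
rbfFit$tot.accuracy   # cross-validated accuracy with the RBF kernel
If the linear figure is already acceptable, stop there; otherwise the RBF fit is the natural next candidate.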
4,464 | Linear kernel and non-linear kernel for support vector machine? | Andrew Ng gives a nice rule of thumb explanation in this video starting 14:46, though the whole video is worth watching.
Key Points
Use a linear kernel when the number of features is larger than the number of observations.
Use a Gaussian kernel when the number of observations is larger than the number of features.
If the number of observations is larger than 50,000, speed could be an issue when using a Gaussian kernel; hence, one might want to use a linear kernel. | Linear kernel and non-linear kernel for support vector machine? | Andrew Ng gives a nice rule of thumb explanation in this video starting 14:46, though the whole video is worth watching.
Key Points
Use linear kernel when number of features is larger than number of | Linear kernel and non-linear kernel for support vector machine?
Andrew Ng gives a nice rule of thumb explanation in this video starting 14:46, though the whole video is worth watching.
Key Points
Use a linear kernel when the number of features is larger than the number of observations.
Use a Gaussian kernel when the number of observations is larger than the number of features.
If the number of observations is larger than 50,000, speed could be an issue when using a Gaussian kernel; hence, one might want to use a linear kernel.
Andrew Ng gives a nice rule of thumb explanation in this video starting 14:46, though the whole video is worth watching.
Key Points
Use linear kernel when number of features is larger than number of |
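The heuristic above can be written out directly. A minimal sketch (the thresholds are simply the ones quoted in the key points):
# Sketch of the rule of thumb above as a helper function.
chooseKernel <- function(nObs, nFeatures) {
  if (nFeatures > nObs) return("linear")   # more features than observations
  if (nObs > 50000) return("linear")       # Gaussian kernel may be too slow here
  "radial"                                 # otherwise the Gaussian (RBF) kernel is worth trying
}
chooseKernel(nObs = 5000, nFeatures = 20000)   # "linear"
chooseKernel(nObs = 10000, nFeatures = 50)     # "radial"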
4,465 | Logistic Regression in R (Odds Ratio) | if you want to interpret the estimated effects as relative odds ratios, just do exp(coef(x)) (gives you $e^\beta$, the multiplicative change in the odds ratio for $y=1$ if the covariate associated with $\beta$ increases by 1). For profile likelihood intervals for this quantity, you can do
require(MASS)
exp(cbind(coef(x), confint(x)))
EDIT: @caracal was quicker... | Logistic Regression in R (Odds Ratio) | if you want to interpret the estimated effects as relative odds ratios, just do exp(coef(x)) (gives you $e^\beta$, the multiplicative change in the odds ratio for $y=1$ if the covariate associated wit | Logistic Regression in R (Odds Ratio)
if you want to interpret the estimated effects as relative odds ratios, just do exp(coef(x)) (gives you $e^\beta$, the multiplicative change in the odds ratio for $y=1$ if the covariate associated with $\beta$ increases by 1). For profile likelihood intervals for this quantity, you can do
require(MASS)
exp(cbind(coef(x), confint(x)))
EDIT: @caracal was quicker... | Logistic Regression in R (Odds Ratio)
if you want to interpret the estimated effects as relative odds ratios, just do exp(coef(x)) (gives you $e^\beta$, the multiplicative change in the odds ratio for $y=1$ if the covariate associated wit |
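For a concrete, runnable version of the same commands, here is a sketch on a built-in data set (the mtcars model is only illustrative):
# Odds ratios with profile-likelihood confidence intervals on a concrete fit.
require(MASS)
fit <- glm(am ~ wt + hp, data = mtcars, family = binomial)
exp(coef(fit))                             # odds ratios
exp(cbind(OR = coef(fit), confint(fit)))   # ORs with profile-likelihood CIs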
4,466 | Logistic Regression in R (Odds Ratio) | You are right that R's output usually contains only essential information, and more needs to be calculated separately.
N <- 100 # generate some data
X1 <- rnorm(N, 175, 7)
X2 <- rnorm(N, 30, 8)
X3 <- abs(rnorm(N, 60, 30))
Y <- 0.5*X1 - 0.3*X2 - 0.4*X3 + 10 + rnorm(N, 0, 12)
# dichotomize Y and do logistic regression
Yfac <- cut(Y, breaks=c(-Inf, median(Y), Inf), labels=c("lo", "hi"))
glmFit <- glm(Yfac ~ X1 + X2 + X3, family=binomial(link="logit"))
coefficients() gives you the estimated regression parameters $b_{j}$. It's easier to interpret $exp(b_{j})$ though (except for the intercept).
> exp(coefficients(glmFit))
(Intercept) X1 X2 X3
5.811655e-06 1.098665e+00 9.511785e-01 9.528930e-01
To get the odds ratio, we need the classification cross-table of the original dichotomous DV and the predicted classification according to some probability threshold that needs to be chosen first. You can also see function ClassLog() in package QuantPsyc (as chl mentioned in a related question).
# predicted probabilities or: predict(glmFit, type="response")
> Yhat <- fitted(glmFit)
> thresh <- 0.5 # threshold for dichotomizing according to predicted probability
> YhatFac <- cut(Yhat, breaks=c(-Inf, thresh, Inf), labels=c("lo", "hi"))
> cTab <- table(Yfac, YhatFac) # contingency table
> addmargins(cTab) # marginal sums
YhatFac
Yfac lo hi Sum
lo 41 9 50
hi 14 36 50
Sum 55 45 100
> sum(diag(cTab)) / sum(cTab) # percentage correct for training data
[1] 0.77
For the odds ratio, you can either use package vcd or do the calculation manually.
> library(vcd) # for oddsratio()
> (OR <- oddsratio(cTab, log=FALSE)) # odds ratio
[1] 11.71429
> (cTab[1, 1] / cTab[1, 2]) / (cTab[2, 1] / cTab[2, 2])
[1] 11.71429
> summary(glmFit) # test for regression parameters ...
# test for the full model against the 0-model
> glm0 <- glm(Yfac ~ 1, family=binomial(link="logit"))
> anova(glm0, glmFit, test="Chisq")
Analysis of Deviance Table
Model 1: Yfac ~ 1
Model 2: Yfac ~ X1 + X2 + X3
Resid. Df Resid. Dev Df Deviance P(>|Chi|)
1 99 138.63
2 96 110.58 3 28.045 3.554e-06 *** | Logistic Regression in R (Odds Ratio) | You are right that R's output usually contains only essential information, and more needs to be calculated separately.
N <- 100 # generate some data
X1 <- rnorm(N, 175, 7)
X2 <- rnorm(N | Logistic Regression in R (Odds Ratio)
You are right that R's output usually contains only essential information, and more needs to be calculated separately.
N <- 100 # generate some data
X1 <- rnorm(N, 175, 7)
X2 <- rnorm(N, 30, 8)
X3 <- abs(rnorm(N, 60, 30))
Y <- 0.5*X1 - 0.3*X2 - 0.4*X3 + 10 + rnorm(N, 0, 12)
# dichotomize Y and do logistic regression
Yfac <- cut(Y, breaks=c(-Inf, median(Y), Inf), labels=c("lo", "hi"))
glmFit <- glm(Yfac ~ X1 + X2 + X3, family=binomial(link="logit"))
coefficients() gives you the estimated regression parameters $b_{j}$. It's easier to interpret $exp(b_{j})$ though (except for the intercept).
> exp(coefficients(glmFit))
(Intercept) X1 X2 X3
5.811655e-06 1.098665e+00 9.511785e-01 9.528930e-01
To get the odds ratio, we need the classification cross-table of the original dichotomous DV and the predicted classification according to some probability threshold that needs to be chosen first. You can also see function ClassLog() in package QuantPsyc (as chl mentioned in a related question).
# predicted probabilities or: predict(glmFit, type="response")
> Yhat <- fitted(glmFit)
> thresh <- 0.5 # threshold for dichotomizing according to predicted probability
> YhatFac <- cut(Yhat, breaks=c(-Inf, thresh, Inf), labels=c("lo", "hi"))
> cTab <- table(Yfac, YhatFac) # contingency table
> addmargins(cTab) # marginal sums
YhatFac
Yfac lo hi Sum
lo 41 9 50
hi 14 36 50
Sum 55 45 100
> sum(diag(cTab)) / sum(cTab) # percentage correct for training data
[1] 0.77
For the odds ratio, you can either use package vcd or do the calculation manually.
> library(vcd) # for oddsratio()
> (OR <- oddsratio(cTab, log=FALSE)) # odds ratio
[1] 11.71429
> (cTab[1, 1] / cTab[1, 2]) / (cTab[2, 1] / cTab[2, 2])
[1] 11.71429
> summary(glmFit) # test for regression parameters ...
# test for the full model against the 0-model
> glm0 <- glm(Yfac ~ 1, family=binomial(link="logit"))
> anova(glm0, glmFit, test="Chisq")
Analysis of Deviance Table
Model 1: Yfac ~ 1
Model 2: Yfac ~ X1 + X2 + X3
Resid. Df Resid. Dev Df Deviance P(>|Chi|)
1 99 138.63
2 96 110.58 3 28.045 3.554e-06 *** | Logistic Regression in R (Odds Ratio)
You are right that R's output usually contains only essential information, and more needs to be calculated separately.
N <- 100 # generate some data
X1 <- rnorm(N, 175, 7)
X2 <- rnorm(N |
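One small addition to the worked example above (not part of the original answer): a large-sample Wald confidence interval for the table-based odds ratio can be computed by hand from cTab.
# Wald 95% CI for the odds ratio of the 2x2 table cTab from the example above.
logOR <- log((cTab[1, 1] * cTab[2, 2]) / (cTab[1, 2] * cTab[2, 1]))
seLogOR <- sqrt(sum(1 / cTab))             # standard error of the log odds ratio
exp(logOR + c(-1.96, 1.96) * seLogOR)      # approximate 95% CI for the OR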
4,467 | Logistic Regression in R (Odds Ratio) | The UCLA stats page has a nice walk-through of performing logistic regression in R. It includes a brief section on calculating odds ratios. | Logistic Regression in R (Odds Ratio) | The UCLA stats page has a nice walk-through of performing logistic regression in R. It includes a brief section on calculating odds ratios. | Logistic Regression in R (Odds Ratio)
The UCLA stats page has a nice walk-through of performing logistic regression in R. It includes a brief section on calculating odds ratios. | Logistic Regression in R (Odds Ratio)
The UCLA stats page has a nice walk-through of performing logistic regression in R. It includes a brief section on calculating odds ratios. |
4,468 | Logistic Regression in R (Odds Ratio) | The epiDisplay package does this very easily.
library(epiDisplay)
data(Wells, package="carData")
glm1 <- glm(switch~arsenic+distance+education+association,
family=binomial, data=Wells)
logistic.display(glm1)
Logistic regression predicting switch : yes vs no
crude OR(95%CI) adj. OR(95%CI) P(Wald's test) P(LR-test)
arsenic (cont. var.) 1.461 (1.355,1.576) 1.595 (1.47,1.731) < 0.001 < 0.001
distance (cont. var.) 0.9938 (0.9919,0.9957) 0.9911 (0.989,0.9931) < 0.001 < 0.001
education (cont. var.) 1.04 (1.021,1.059) 1.043 (1.024,1.063) < 0.001 < 0.001
association: yes vs no 0.863 (0.746,0.999) 0.883 (0.759,1.027) 0.1063 0.1064
Log-likelihood = -1953.91299
No. of observations = 3020
AIC value = 3917.82598 | Logistic Regression in R (Odds Ratio) | The epiDisplay package does this very easily.
library(epiDisplay)
data(Wells, package="carData")
glm1 <- glm(switch~arsenic+distance+education+association,
family=binomial, data=Wells)
lo | Logistic Regression in R (Odds Ratio)
The epiDisplay package does this very easily.
library(epiDisplay)
data(Wells, package="carData")
glm1 <- glm(switch~arsenic+distance+education+association,
family=binomial, data=Wells)
logistic.display(glm1)
Logistic regression predicting switch : yes vs no
crude OR(95%CI) adj. OR(95%CI) P(Wald's test) P(LR-test)
arsenic (cont. var.) 1.461 (1.355,1.576) 1.595 (1.47,1.731) < 0.001 < 0.001
distance (cont. var.) 0.9938 (0.9919,0.9957) 0.9911 (0.989,0.9931) < 0.001 < 0.001
education (cont. var.) 1.04 (1.021,1.059) 1.043 (1.024,1.063) < 0.001 < 0.001
association: yes vs no 0.863 (0.746,0.999) 0.883 (0.759,1.027) 0.1063 0.1064
Log-likelihood = -1953.91299
No. of observations = 3020
AIC value = 3917.82598 | Logistic Regression in R (Odds Ratio)
The epiDisplay package does this very easily.
library(epiDisplay)
data(Wells, package="carData")
glm1 <- glm(switch~arsenic+distance+education+association,
family=binomial, data=Wells)
lo |
4,469 | Logistic Regression in R (Odds Ratio) | I tried @fabians's answer. It gave different results compared to @lockedoff's and @Edward's answers when using a binary predictor. Please be careful when choosing the method.
For my own model, using @fabian's method, it gave Odds ratio 4.01 with confidence interval [1.183976, 25.038871] while @lockedoff's answer gave odds ratio 4.01 with confidence interval [0.94,17.05].
My model summary is as follows:
Call:
glm(formula = slicc_leuk ~ binf, family = binomial, data = kk)
Deviance Residuals:
Min 1Q Median 3Q Max
-2.6823 0.3482 0.4625 0.4625 0.4625
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 2.1812 0.1783 12.232 <2e-16 ***
binf1 0.5914 0.6213 0.952 0.3412
binf2 1.3883 0.7388 1.879 0.0602 .
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 273.43 on 468 degrees of freedom
Residual deviance: 267.65 on 466 degrees of freedom
AIC: 273.65
Number of Fisher Scoring iterations: 6
> exp(cbind(coef(fit1), confint(fit1)))
Waiting for profiling to be done...
2.5 % 97.5 %
(Intercept) 8.857143 6.340297 12.782462
binf1 1.806452 0.618633 7.697776
binf2 4.008065 1.183976 25.038871
> logistic.display(fit1)
Logistic regression predicting slicc_leuk : 1 vs 0
OR(95%CI) P(Wald's test) P(LR-test)
binf: ref.=0 0.056
1 1.81 (0.53,6.1) 0.341
2 4.01 (0.94,17.05) 0.06
Log-likelihood = -133.8248
No. of observations = 469
AIC value = 273.6496
``` | Logistic Regression in R (Odds Ratio) | I tried @fabians's answer. It gave different results compared to @lockedoff's and @Edward's answers when using a binary predictor. Please be careful when choosing the method.
For my own model, using @fa | Logistic Regression in R (Odds Ratio)
I tried @fabians's answer. It gave different results compared to @lockedoff's and @Edward's answers when using a binary predictor. Please be careful when choosing the method.
For my own model, using @fabian's method, it gave Odds ratio 4.01 with confidence interval [1.183976, 25.038871] while @lockedoff's answer gave odds ratio 4.01 with confidence interval [0.94,17.05].
My model summary is as follows:
Call:
glm(formula = slicc_leuk ~ binf, family = binomial, data = kk)
Deviance Residuals:
Min 1Q Median 3Q Max
-2.6823 0.3482 0.4625 0.4625 0.4625
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 2.1812 0.1783 12.232 <2e-16 ***
binf1 0.5914 0.6213 0.952 0.3412
binf2 1.3883 0.7388 1.879 0.0602 .
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 273.43 on 468 degrees of freedom
Residual deviance: 267.65 on 466 degrees of freedom
AIC: 273.65
Number of Fisher Scoring iterations: 6
> exp(cbind(coef(fit1), confint(fit1)))
Waiting for profiling to be done...
2.5 % 97.5 %
(Intercept) 8.857143 6.340297 12.782462
binf1 1.806452 0.618633 7.697776
binf2 4.008065 1.183976 25.038871
> logistic.display(fit1)
Logistic regression predicting slicc_leuk : 1 vs 0
OR(95%CI) P(Wald's test) P(LR-test)
binf: ref.=0 0.056
1 1.81 (0.53,6.1) 0.341
2 4.01 (0.94,17.05) 0.06
Log-likelihood = -133.8248
No. of observations = 469
AIC value = 273.6496
``` | Logistic Regression in R (Odds Ratio)
I tried @fabians's answer. It gave different results compared to @lockedoff's and @Edward's answers when using a binary predictor. Please be careful when choosing the method.
For my own model, using @fa |
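The kind of difference reported above is what one would expect if one set of intervals is profile-likelihood based and the other Wald-type; whether a particular display function uses Wald intervals is an assumption that can be checked directly on the same fit:
# Compare profile-likelihood and Wald intervals for the same model (fit1 as above).
exp(confint(fit1))           # profile-likelihood CIs, as used by exp(cbind(coef, confint))
exp(confint.default(fit1))   # Wald CIs: estimate +/- 1.96 * SE on the log-odds scale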
4,470 | Logistic Regression in R (Odds Ratio) | R has been mature with regard to odds ratio calculations for more than two decades. It's best to think about this in general terms. For example, what if x1 and x2 are continuous and have nonlinear effects and interact with each other? Here is example code where the inter-quartile-range effect of x1 is computed, adjusted to x2=1.5.
require(rms)
dd <- datadist(mydata); options(datadist='dd')
# restricted cubic spline in x1, quadratic in x2, interactions
f <- lrm(y ~ rcs(x1, 4) * pol(x2, 2), data=mydata)
summary(f, x2=1.5) # gives IQR odds ratio for x1
summary(f, x1=c(1, 3), x2=1.5) # OR for x1=1 vs. x1=3, still x2=1.5
You can see that dealing with individual coefficients is not the general solution. | Logistic Regression in R (Odds Ratio) | R has been mature with regard to odds ratio calculations for more than two decades. It's best to think about this in general terms. For example, what if x1 and x2 are continuous and have nonlinear e | Logistic Regression in R (Odds Ratio)
R has been mature with regard to odds ratio calculations for more than two decades. It's best to think about this in general terms. For example, what if x1 and x2 are continuous and have nonlinear effects and interact with each other? Here is example code where the inter-quartile-range effect of x1 is computed, adjusted to x2=1.5.
require(rms)
dd <- datadist(mydata); options(datadist='dd')
# restricted cubic spline in x1, quadratic in x2, interactions
f <- lrm(y ~ rcs(x1, 4) * pol(x2, 2), data=mydata)
summary(f, x2=1.5) # gives IQR odds ratio for x1
summary(f, x1=c(1, 3), x2=1.5) # OR for x1=1 vs. x1=3, still x2=1.5
You can see that dealing with individual coefficients is not the general solution. | Logistic Regression in R (Odds Ratio)
R has been mature with regard to odds ratio calculations for more than two decades. It's best to think about this in general terms. For example, what if x1 and x2 are continuous and have nonlinear e
4,471 | Logistic Regression in R (Odds Ratio) | Similar to the chosen answer, but there is a direct command to get the exp(coefficients) and the intervals in one line.
Currently chosen answer:
require(MASS)
exp(cbind(coef(x), confint(x)))
In one line:
library(jtools)  # summ() comes from the jtools package
summ(my_model, exp = T)
(I could not make a comment since I'm still a newbie and don't have enough reputation points on the website). | Logistic Regression in R (Odds Ratio) | Similar to the chosen answer, but there is a direct command to get the exp(coefficients) and the intervals in one line.
Currently chosen answer:
require(MASS)
exp(cbind(coef(x), confint(x)))
In one | Logistic Regression in R (Odds Ratio)
Similar to the chosen answer, but there is a direct command to get the exp(coefficients) and the intervals in one line.
Currently chosen answer:
require(MASS)
exp(cbind(coef(x), confint(x)))
In one line:
library(jtools)  # summ() comes from the jtools package
summ(my_model, exp = T)
(I could not make a comment since I'm still a newbie and don't have enough reputation points on the website).
Similar to the chosen answer, but there is a direct command to get the exp(coefficients) and the intervals in one line.
Currently chosen answer:
require(MASS)
exp(cbind(coef(x), confint(x)))
In one |
4,472 | Interpreting QQplot - Is there any rule of thumb to decide for non-normality? | Note that the Shapiro-Wilk is a powerful test of normality.
The best approach is really to have a good idea of how sensitive any procedure you want to use is to various kinds of non-normality (how badly non-normal does it have to be in that way for it to affect your inference more than you can accept).
An informal approach for looking at the plots would be to generate a number of data sets that are actually normal of the same sample size as the one you have - (for example, say 24 of them). Plot your real data among a grid of such plots (5x5 in the case of 24 random sets). If it's not especially unusual looking (the worst looking one, say), it's reasonably consistent with normality.
To my eye, data set "Z" in the center looks roughly on a par with "o" and "v" and maybe even "h", while "d" and "f" look slightly worse. "Z" is the real data. While I don't believe for a moment that it's actually normal, it's not particularly unusual-looking when you compare it with normal data.
[Edit: I just conducted a random poll --- well, I asked my daughter, but at a fairly random time -- and her choice for the least like a straight line was "d". So 100% of those surveyed thought "d" was the most-odd one.]
A more formal approach would be to do a Shapiro-Francia test (which is effectively based on the correlation in the QQ-plot), but (a) it's not even as powerful as the Shapiro-Wilk test, and (b) formal testing answers a question (sometimes) that you should already know the answer to anyway (the distribution your data were drawn from isn't exactly normal), instead of the question you need answered (how badly does that matter?).
As requested, code for the above display. Nothing fancy involved:
z = lm(dist~speed,cars)$residual
n = length(z)
xz = cbind(matrix(rnorm(12*n), nr=n), z,
matrix(rnorm(12*n), nr=n))
colnames(xz) = c(letters[1:12],"Z",letters[13:24])
opar = par()
par(mfrow=c(5,5));
par(mar=c(0.5,0.5,0.5,0.5))
par(oma=c(1,1,1,1));
ytpos = (apply(xz,2,min)+3*apply(xz,2,max))/4
cn = colnames(xz)
for(i in 1:25) {
qqnorm(xz[, i], axes=FALSE, ylab= colnames(xz)[i],
xlab="", main="")
qqline(xz[,i],col=2,lty=2)
box("figure", col="darkgreen")
text(-1.5,ytpos[i],cn[i])
}
par(opar)
Note that this was just for the purposes of illustration; I wanted a small data set that looked mildly non-normal which is why I used the residuals from a linear regression on the cars data (the model isn't quite appropriate). However, if I was actually generating such a display for a set of residuals for a regression, I'd regress all 25 data sets on the same $x$'s as in the model, and display QQ plots of their residuals, since residuals have some structure not present in normal random numbers.
(I've been making sets of plots like this since the mid-80s at least. How can you interpret plots if you are unfamiliar with how they behave when the assumptions hold --- and when they don't?)
See more:
Buja, A., Cook, D., Hofmann, H., Lawrence, M., Lee, E.-K., Swayne, D. F. and Wickham, H. (2009), "Statistical inference for exploratory data analysis and model diagnostics," Phil. Trans. R. Soc. A, 367, 4361-4383. doi: 10.1098/rsta.2009.0120
Edit: I mentioned this issue in my second paragraph but I want to emphasize the point again, in case it gets forgotten along the way. What usually matters is not whether you can tell something is not-actually-normal (whether by formal test or by looking at a plot) but rather how much it matters for what you would be using that model to do: How sensitive are the properties you care about to the amount and manner of lack of fit you might have between your model and the actual population?
The answer to the question "is the population I'm sampling actually normally distributed" is, essentially always, "no" (you don't need a test or a plot for that), but the question is rather "how much does it matter?". If the answer is "not much at all", the fact that the assumption is false is of little practical consequence. A plot can help some since it at least shows you something of the 'amount and manner' of deviation between the sample and the distributional model, so it's a starting point for considering whether it would matter. However, whether it does depends on the properties of what you are doing (consider a t-test vs a test of variance for example; the t-test can in general tolerate much more substantial deviations from the assumptions that are made in its derivation than an F-ratio test of equality variances can). | Interpreting QQplot - Is there any rule of thumb to decide for non-normality? | Note that the Shapiro-Wilk is a powerful test of normality.
The best approach is really to have a good idea of how sensitive any procedure you want to use is to various kinds of non-normality (how bad | Interpreting QQplot - Is there any rule of thumb to decide for non-normality?
Note that the Shapiro-Wilk is a powerful test of normality.
The best approach is really to have a good idea of how sensitive any procedure you want to use is to various kinds of non-normality (how badly non-normal does it have to be in that way for it to affect your inference more than you can accept).
An informal approach for looking at the plots would be to generate a number of data sets that are actually normal of the same sample size as the one you have - (for example, say 24 of them). Plot your real data among a grid of such plots (5x5 in the case of 24 random sets). If it's not especially unusual looking (the worst looking one, say), it's reasonably consistent with normality.
To my eye, data set "Z" in the center looks roughly on a par with "o" and "v" and maybe even "h", while "d" and "f" look slightly worse. "Z" is the real data. While I don't believe for a moment that it's actually normal, it's not particularly unusual-looking when you compare it with normal data.
[Edit: I just conducted a random poll --- well, I asked my daughter, but at a fairly random time -- and her choice for the least like a straight line was "d". So 100% of those surveyed thought "d" was the most-odd one.]
A more formal approach would be to do a Shapiro-Francia test (which is effectively based on the correlation in the QQ-plot), but (a) it's not even as powerful as the Shapiro-Wilk test, and (b) formal testing answers a question (sometimes) that you should already know the answer to anyway (the distribution your data were drawn from isn't exactly normal), instead of the question you need answered (how badly does that matter?).
As requested, code for the above display. Nothing fancy involved:
z = lm(dist~speed,cars)$residual
n = length(z)
xz = cbind(matrix(rnorm(12*n), nr=n), z,
matrix(rnorm(12*n), nr=n))
colnames(xz) = c(letters[1:12],"Z",letters[13:24])
opar = par()
par(mfrow=c(5,5));
par(mar=c(0.5,0.5,0.5,0.5))
par(oma=c(1,1,1,1));
ytpos = (apply(xz,2,min)+3*apply(xz,2,max))/4
cn = colnames(xz)
for(i in 1:25) {
qqnorm(xz[, i], axes=FALSE, ylab= colnames(xz)[i],
xlab="", main="")
qqline(xz[,i],col=2,lty=2)
box("figure", col="darkgreen")
text(-1.5,ytpos[i],cn[i])
}
par(opar)
Note that this was just for the purposes of illustration; I wanted a small data set that looked mildly non-normal which is why I used the residuals from a linear regression on the cars data (the model isn't quite appropriate). However, if I was actually generating such a display for a set of residuals for a regression, I'd regress all 25 data sets on the same $x$'s as in the model, and display QQ plots of their residuals, since residuals have some structure not present in normal random numbers.
(I've been making sets of plots like this since the mid-80s at least. How can you interpret plots if you are unfamiliar with how they behave when the assumptions hold --- and when they don't?)
See more:
Buja, A., Cook, D., Hofmann, H., Lawrence, M., Lee, E.-K., Swayne, D. F. and Wickham, H. (2009), "Statistical inference for exploratory data analysis and model diagnostics," Phil. Trans. R. Soc. A, 367, 4361-4383. doi: 10.1098/rsta.2009.0120
Edit: I mentioned this issue in my second paragraph but I want to emphasize the point again, in case it gets forgotten along the way. What usually matters is not whether you can tell something is not-actually-normal (whether by formal test or by looking at a plot) but rather how much it matters for what you would be using that model to do: How sensitive are the properties you care about to the amount and manner of lack of fit you might have between your model and the actual population?
The answer to the question "is the population I'm sampling actually normally distributed" is, essentially always, "no" (you don't need a test or a plot for that), but the question is rather "how much does it matter?". If the answer is "not much at all", the fact that the assumption is false is of little practical consequence. A plot can help some since it at least shows you something of the 'amount and manner' of deviation between the sample and the distributional model, so it's a starting point for considering whether it would matter. However, whether it does depends on the properties of what you are doing (consider a t-test vs a test of variance for example; the t-test can in general tolerate much more substantial deviations from the assumptions that are made in its derivation than an F-ratio test of equality variances can). | Interpreting QQplot - Is there any rule of thumb to decide for non-normality?
Note that the Shapiro-Wilk is a powerful test of normality.
The best approach is really to have a good idea of how sensitive any procedure you want to use is to various kinds of non-normality (how bad |
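The refinement mentioned in that answer, regressing each simulated data set on the same x's and plotting the residuals, can be sketched as follows (this is a sketch of that idea, not code from the answer):
# Compare the real residuals with residuals of refits to data simulated from the fitted model.
fit <- lm(dist ~ speed, data = cars)
z <- residuals(fit)
simres <- replicate(24, {
  ystar <- simulate(fit)[[1]]                  # new responses drawn from the fitted model
  residuals(lm(ystar ~ speed, data = cars))    # refit and keep the residuals
})
xz2 <- cbind(simres[, 1:12], Z = z, simres[, 13:24])
par(mfrow = c(5, 5), mar = c(0.5, 0.5, 0.5, 0.5))
for (i in 1:25) {
  qqnorm(xz2[, i], axes = FALSE, xlab = "", ylab = "", main = "")
  qqline(xz2[, i], col = 2, lty = 2)
}
par(mfrow = c(1, 1))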
4,473 | Interpreting QQplot - Is there any rule of thumb to decide for non-normality? | Without contradicting any of the excellent answers here, I have one rule of thumb which is often (but not always) decisive. (A passing comment in the answer by @Dante seems pertinent too.)
It sometimes seems too obvious to state, but here you are.
I am happy to call a distribution non-normal if I think I can offer a different description that is clearly more appropriate.
So, if there is minor curvature and/or irregularity in the tails of a normal quantile-quantile plot, but approximate straightness on a gamma quantile-quantile plot, I can say "That's not well characterised as a normal; it's more like a gamma".
It's no accident that this echoes a standard argument in history and philosophy of science, not to mention general scientific practice, that a hypothesis is most clearly and effectively refuted when you have a better one to put in its place. (Cue: allusions to Karl Popper, Thomas S. Kuhn, and so forth.)
It is true that for beginners, and indeed for everyone, there is a smooth gradation between "That is normal, except for minor irregularities which we always expect" and "That is very different from normal, except for some rough similarity which we often get".
Confidence(-like) envelopes and multiple simulated samples can help mightily, and I use and recommend both, but this can be helpful too. (Incidentally, comparing with a portfolio of simulations is a repeated recent re-invention, but goes back at least as far as Shewhart in 1931.)
I'll echo my top line. Sometimes no brand-name distribution appears to fit at all, and you have to move forward as best you can. | Interpreting QQplot - Is there any rule of thumb to decide for non-normality? | Without contradicting any of the excellent answers here, I have one rule of thumb which is often (but not always) decisive. (A passing comment in the answer by @Dante seems pertinent too.)
It someti | Interpreting QQplot - Is there any rule of thumb to decide for non-normality?
Without contradicting any of the excellent answers here, I have one rule of thumb which is often (but not always) decisive. (A passing comment in the answer by @Dante seems pertinent too.)
It sometimes seems too obvious to state, but here you are.
I am happy to call a distribution non-normal if I think I can offer a different description that is clearly more appropriate.
So, if there is minor curvature and/or irregularity in the tails of a normal quantile-quantile plot, but approximate straightness on a gamma quantile-quantile plot, I can say "That's not well characterised as a normal; it's more like a gamma".
It's no accident that this echoes a standard argument in history and philosophy of science, not to mention general scientific practice, that a hypothesis is most clearly and effectively refuted when you have a better one to put in its place. (Cue: allusions to Karl Popper, Thomas S. Kuhn, and so forth.)
It is true that for beginners, and indeed for everyone, there is a smooth gradation between "That is normal, except for minor irregularities which we always expect" and "That is very different from normal, except for some rough similarity which we often get".
Confidence(-like) envelopes and multiple simulated samples can help mightily, and I use and recommend both, but this can be helpful too. (Incidentally, comparing with a portfolio of simulations is a repeated recent re-invention, but goes back at least as far as Shewhart in 1931.)
I'll echo my top line. Sometimes no brand-name distribution appears to fit at all, and you have to move forward as best you can. | Interpreting QQplot - Is there any rule of thumb to decide for non-normality?
Without contradicting any of the excellent answers here, I have one rule of thumb which is often (but not always) decisive. (A passing comment in the answer by @Dante seems pertinent too.)
It someti |
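A sketch of the comparison described above, putting a gamma quantile-quantile plot next to the normal one (the simulated data and the use of MASS::fitdistr are just one way to do it):
# Normal Q-Q plot versus gamma Q-Q plot for the same (skewed) sample.
library(MASS)                          # for fitdistr()
set.seed(1)
x <- rgamma(100, shape = 2, rate = 1)  # skewed data, for illustration
fitG <- fitdistr(x, "gamma")           # maximum-likelihood gamma fit
p <- ppoints(length(x))
par(mfrow = c(1, 2))
qqnorm(x); qqline(x)
qqplot(qgamma(p, shape = fitG$estimate["shape"], rate = fitG$estimate["rate"]), x,
       xlab = "Gamma quantiles", ylab = "Sample quantiles", main = "Gamma Q-Q Plot")
abline(0, 1, col = 2, lty = 2)
par(mfrow = c(1, 1))
Approximate straightness in the right panel but curvature in the left one is the situation the rule of thumb describes.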
4,474 | Interpreting QQplot - Is there any rule of thumb to decide for non-normality? | Like @Glen_b said, you can compare your data with the data you're sure is normal - the data you generated yourself, and then rely on your gut feeling :)
The following is an example from OpenIntro Statistics textbook
Let's have a look at this Q-Q Plot:
Is it normal? Let's compare it with normally distributed data:
This one looks better than our data, so our data doesn't seem normal. Let's make sure by simulating it several times and plotting side-by-side
So our gut feeling tells us that the sample is not likely to be distributed normally.
Here's the R code to do this
load(url("http://www.openintro.org/stat/data/bdims.RData"))
fdims = subset(bdims, bdims$sex == 0)
qqnorm(fdims$wgt, col=adjustcolor("orange", 0.4), pch=19)
qqline(fdims$wgt)
qqnormsim = function(dat, dim=c(2,2)) {
par(mfrow=dim)
qqnorm(dat, col=adjustcolor("orange", 0.4),
pch=19, cex=0.7, main="Normal QQ Plot (Data)")
qqline(dat)
for (i in 1:(prod(dim) - 1)) {
simnorm = rnorm(n=length(dat), mean=mean(dat), sd=sd(dat))
qqnorm(simnorm, col=adjustcolor("orange", 0.4),
pch=19, cex=0.7,
main="Normal QQ Plot (Sim)")
qqline(simnorm)
}
par(mfrow=c(1, 1))
}
qqnormsim(fdims$wgt) | Interpreting QQplot - Is there any rule of thumb to decide for non-normality? | Like @Glen_b said, you can compare your data with the data you're sure is normal - the data you generated yourself, and then rely on your gut feeling :)
The following is an example from OpenIntro Stat | Interpreting QQplot - Is there any rule of thumb to decide for non-normality?
Like @Glen_b said, you can compare your data with the data you're sure is normal - the data you generated yourself, and then rely on your gut feeling :)
The following is an example from OpenIntro Statistics textbook
Let's have a look at this Q-Q Plot:
Is it normal? Let's compare it with normally distributed data:
This one looks better than our data, so our data doesn't seem normal. Let's make sure by simulating it several times and plotting side-by-side
So our gut feeling tells us that the sample is not likely to be distributed normally.
Here's the R code to do this
load(url("http://www.openintro.org/stat/data/bdims.RData"))
fdims = subset(bdims, bdims$sex == 0)
qqnorm(fdims$wgt, col=adjustcolor("orange", 0.4), pch=19)
qqline(fdims$wgt)
qqnormsim = function(dat, dim=c(2,2)) {
par(mfrow=dim)
qqnorm(dat, col=adjustcolor("orange", 0.4),
pch=19, cex=0.7, main="Normal QQ Plot (Data)")
qqline(dat)
for (i in 1:(prod(dim) - 1)) {
simnorm = rnorm(n=length(dat), mean=mean(dat), sd=sd(dat))
qqnorm(simnorm, col=adjustcolor("orange", 0.4),
pch=19, cex=0.7,
main="Normal QQ Plot (Sim)")
qqline(simnorm)
}
par(mfrow=c(1, 1))
}
qqnormsim(fdims$wgt) | Interpreting QQplot - Is there any rule of thumb to decide for non-normality?
Like @Glen_b said, you can compare your data with the data you're sure is normal - the data you generated yourself, and then rely on your gut feeling :)
The following is an example from OpenIntro Stat |
4,475 | Interpreting QQplot - Is there any rule of thumb to decide for non-normality? | There are many tests of normality. One usually focuses on the null hypothesis, namely, "$H_0: F=Normal$". However, little attention is paid to the alternative hypothesis: "against what"?
Typically, tests that consider any other distribution as the alternative hypothesis have low power when compared against tests with the right alternative hypothesis (see, for instance, 1 and 2).
There is an interesting R package with the implementation of several nonparametric normality tests ('nortest', http://cran.r-project.org/web/packages/nortest/index.html). As mentioned in the papers above, the likelihood ratio test, with appropriate alternative hypothesis, is more powerful than these tests.
The idea mentioned by @Glen_b about comparing your sample against random samples from your (fitted) model is mentioned in my second reference. They are called "QQ-Envelopes" or "QQ-Fans". This implicitly requires having a model to generate the data from and, consequently, an alternative hypothesis. | Interpreting QQplot - Is there any rule of thumb to decide for non-normality? | There are many tests of normality. One usually focuses on the null hypothesis, namely, "$H_0: F=Normal$". However, little attention is paid to the alternative hypothesis: "against what"?
Typically, te | Interpreting QQplot - Is there any rule of thumb to decide for non-normality?
There are many tests of normality. One usually focuses on the null hypothesis, namely, "$H_0: F=Normal$". However, little attention is paid to the alternative hypothesis: "against what"?
Typically, tests that consider any other distribution as the alternative hypothesis have low power when compared against tests with the right alternative hypothesis (see, for instance, 1 and 2).
There is an interesting R package with the implementation of several nonparametric normality tests ('nortest', http://cran.r-project.org/web/packages/nortest/index.html). As mentioned in the papers above, the likelihood ratio test, with appropriate alternative hypothesis, is more powerful than these tests.
The idea mentioned by @Glen_b about comparing your sample against random samples from your (fitted) model is mentioned in my second reference. They are called "QQ-Envelopes" or "QQ-Fans". This implicitly requires having a model to generate the data from and, consequently, an alternative hypothesis. | Interpreting QQplot - Is there any rule of thumb to decide for non-normality?
There are many tests of normality. One usually focuses on the null hypothesis, namely, "$H_0: F=Normal$". However, little attention is paid to the alternative hypothesis: "against what"?
Typically, te |
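For reference, a minimal sketch of calling the tests in that package (the choice of tests and the simulated data are only illustrative):
# Normality tests from 'nortest', plus base R's Shapiro-Wilk for comparison.
library(nortest)
set.seed(1)
x <- rt(100, df = 5)     # heavier-tailed than normal
lillie.test(x)           # Lilliefors (Kolmogorov-Smirnov) test
ad.test(x)               # Anderson-Darling test
sf.test(x)               # Shapiro-Francia test
shapiro.test(x)          # Shapiro-Wilk test (base R)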
4,476 | Interpreting QQplot - Is there any rule of thumb to decide for non-normality? | When teaching my regression modeling strategies course, this topic always troubles my students and me. I tell them that our graphical assessments are always subjective, and I tend to worry about the graphs more early in the day than later when I'm tired. Adding formal statistical tests doesn't help enough: tests can pick up trivial non-normality for very large sample sizes and miss important non-normality for small $n$. I prefer using methods that do not assume normality that are efficient, e.g., ordinal regression for continuous $Y$. | Interpreting QQplot - Is there any rule of thumb to decide for non-normality? | When teaching my regression modeling strategies course, this topic always troubles my students and me. I tell them that our graphical assessments are always subjective, and I tend to worry about the | Interpreting QQplot - Is there any rule of thumb to decide for non-normality?
When teaching my regression modeling strategies course, this topic always troubles my students and me. I tell them that our graphical assessments are always subjective, and I tend to worry about the graphs more early in the day than later when I'm tired. Adding formal statistical tests doesn't help enough: tests can pick up trivial non-normality for very large sample sizes and miss important non-normality for small $n$. I prefer using methods that do not assume normality that are efficient, e.g., ordinal regression for continuous $Y$. | Interpreting QQplot - Is there any rule of thumb to decide for non-normality?
When teaching my regression modeling strategies course, this topic always troubles my students and me. I tell them that our graphical assessments are always subjective, and I tend to worry about the |
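A minimal sketch of the kind of model alluded to at the end, ordinal regression for a continuous response, using the rms package (the simulated data are only illustrative):
# Semiparametric ordinal regression for a continuous Y; no normality assumption on Y.
library(rms)
set.seed(1)
x <- rnorm(200)
y <- exp(1 + 0.5 * x + rnorm(200))   # skewed, clearly non-normal response
orm(y ~ x)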
4,477 | Consider the sum of $n$ uniform distributions on $[0,1]$, or $Z_n$. Why does the cusp in the PDF of $Z_n$ disappear for $n \geq 3$? | We can take various approaches to this, any of which may seem intuitive to some people and less than intuitive to others. To accommodate such variation, this answer surveys several such approaches, covering the major divisions of mathematical thought--analysis (the infinite and the infinitesimal), geometry/topology (spatial relationships), and algebra (formal patterns of symbolic manipulation)--as well as probability itself. It culminates in an observation that unifies all four approaches, demonstrates there is a genuine question to be answered here, and shows exactly what the issue is. Each approach provides, in its own way, deeper insight into the nature of the shapes of the probability distribution functions of sums of independent uniform variables.
Background
The Uniform $[0,1]$ distribution has several basic descriptions. When $X$ has such a distribution,
The chance that $X$ lies in a measurable set $A$ is just the measure (length) of $A \cap [0,1]$, written $|A \cap [0,1]|$.
From this it is immediate that the cumulative distribution function (CDF) is
$$F_X(x) = \Pr(X \le x) = |(-\infty, x] \cap [0,1]| = |[0,\min(x,1)]| = \begin{cases}
0 & x\lt 0 \\
x & 0\leq x\leq 1 \\
1 & x\gt 1.
\end{cases}$$
The probability density function (PDF), which is the derivative of the CDF, is $f_X(x) = 1$ for $0 \le x \le 1$ and $f_X(x)=0$ otherwise. (It is undefined at $0$ and $1$.)
Intuition from Characteristic Functions (Analysis)
The characteristic function (CF) of any random variable $X$ is the expectation of $\exp(i t X)$ (where $i$ is the imaginary unit, $i^2=-1$). Using the PDF of a uniform distribution we can compute
$$\phi_X(t) = \int_{-\infty}^\infty \exp(i t x) f_X(x) dx = \int_0^1 \exp(i t x) dx = \left. \frac{\exp(itx)}{it} \right|_{x=0}^{x=1} = \frac{\exp(it)-1}{it}.$$
The CF is a (version of the) Fourier transform of the PDF, $\phi(t) = \hat{f}(t)$. The most basic theorems about Fourier transforms are:
The CF of a sum of independent variables $X+Y$ is the product of their CFs.
When the original PDF $f$ is continuous and $X$ is bounded, $f$ can be recovered from the CF $\phi$ by a closely related version of the Fourier transform,
$$f(x) = \check{\phi}(x) = \frac{1}{2\pi} \int_{-\infty}^\infty \exp(-i x t) \phi(t) dt.$$
When $f$ is differentiable, its derivative can be computed under the integral sign:
$$f'(x) = \frac{d}{dx} \frac{1}{2\pi} \int_{-\infty}^\infty \exp(-i x t) \phi(t) dt = \frac{-i}{2\pi} \int_{-\infty}^\infty t \exp(-i x t) \phi(t) dt.$$
For this to be well-defined, the last integral must converge absolutely; that is,
$$\int_{-\infty}^\infty |t \exp(-i x t) \phi(t)| dt = \int_{-\infty}^\infty |t| |\phi(t)| dt$$
must converge to a finite value. Conversely, when it does converge, the derivative exists everywhere by virtue of these inversion formulas.
It is now clear exactly how differentiable the PDF for a sum of $n$ uniform variables is: from the first bullet, the CF of the sum of iid variables is the CF of one of them raised to the $n^\text{th}$ power, here equal to $(\exp(i t) - 1)^n / (i t)^n$. The numerator is bounded (it consists of sine waves) while the denominator is $O(t^{n})$. We can multiply such an integrand by $t^{s}$ and it will still converge absolutely when $s \lt n-1$ and converge conditionally when $s = n-1$. Thus, repeated application of the third bullet shows that the PDF for the sum of $n$ uniform variates will be continuously $n-2$ times differentiable and, in most places, it will be $n-1$ times differentiable.
The blue shaded curve is a log-log plot of the absolute value of the real part of the CF of the sum of $n=10$ iid uniform variates. The dashed red line is an asymptote; its slope is $-10$, showing that the PDF is $10 - 2 = 8$ times differentiable. For reference, the gray curve plots the real part of the CF for a similarly shaped Gaussian function (a normal PDF).
Intuition from Probability
Let $Y$ and $X$ be independent random variables where $X$ has a Uniform $[0,1]$ distribution. Consider a narrow interval $(t, t+dt]$. We decompose the chance that $X+Y \in (t, t+dt]$ into the chance that $Y$ is sufficiently close to this interval times the chance that $X$ is just the right size to place $X+Y$ in this interval, given that $Y$ is close enough:
$$\eqalign{
f_{X+Y}(t) dt = &\Pr(X+Y\in (t,t+dt])\\
& = \Pr(X+Y\in (t,t+dt] | Y \in (t-1, t+dt]) \Pr(Y \in (t-1, t+dt]) \\
& = \Pr(X \in (t-Y, t-Y+dt] | Y \in (t-1, t+dt]) \left(F_Y(t+dt) - F_Y(t-1)\right) \\
& = 1 dt \left(F_Y(t+dt) - F_Y(t-1)\right).
}$$
The final equality comes from the expression for the PDF of $X$. Dividing both sides by $dt$ and taking the limit as $dt\to 0$ gives
$$f_{X+Y}(t) = F_Y(t) - F_Y(t-1).$$
In other words, adding a Uniform $[0,1]$ variable $X$ to any variable $Y$ changes the pdf $f_Y$ into a differenced CDF $F_Y(t) - F_Y(t-1)$. Because the PDF is the derivative of the CDF, this implies that each time we add an independent uniform variable to $Y$, the resulting PDF is one time more differentiable than before.
Let's apply this insight, starting with a uniform variable $Y$. The original PDF is not differentiable at $0$ or $1$: it is discontinuous there. The PDF of $Y+X$ is not differentiable at $0$, $1$, or $2$, but it must be continuous at those points, because it is the difference of integrals of the PDF of $Y$. Add another independent uniform variable $X_2$: the PDF of $Y+X+X_2$ is differentiable at $0$,$1$,$2$, and $3$--but it does not necessarily have second derivatives at those points. And so on.
Intuition from Geometry
The CDF at $t$ of a sum of $n$ iid uniform variates equals the volume of the unit hypercube $[0,1]^n$ lying within the half-space $x_1+x_2+\cdots+x_n \le t$. The situation for $n=3$ variates is shown here, with $t$ set at $1/2$, $3/2$, and then $5/2$.
As $t$ progresses from $0$ through $n$, the hyperplane $H_n(t): x_1+x_2+\cdots+x_n=t$ crosses vertices at $t=0$, $t=1, \ldots, t=n$. At each time the shape of the cross section changes: in the figure it first is a triangle (a $2$-simplex), then a hexagon, then a triangle again. Why doesn't the PDF have sharp bends at these values of $t$?
To understand this, first consider small values of $t$. Here, the hyperplane $H_n(t)$ cuts off an $n-1$-simplex. All $n-1$ dimensions of the simplex are directly proportional to $t$, whence its "area" is proportional to $t^{n-1}$. Some notation for this will come in handy later. Let $\theta$ be the "unit step function,"
$$\theta(x) = \begin{cases}
0 & x \lt 0 \\
1 & x\ge 0.
\end{cases}$$
If it were not for the presence of the other corners of the hypercube, this scaling would continue indefinitely. A plot of the area of the $n-1$-simplex would look like the solid blue curve below: it is zero at negative values and equals $t^{n-1}/(n-1)!$ at the positive one, conveniently written $\theta(t) t^{n-1}/(n-1)!$. It has a "kink" of order $n-2$ at the origin, in the sense that all derivatives through order $n-3$ exist and are continuous, but that left and right derivatives of order $n-2$ exist but do not agree at the origin.
(The other curves shown in this figure are $-3\theta(t-1) (t-1)^{2}/2!$ (red), $3\theta(t-2) (t-2)^{2}/2!$ (gold), and $-\theta(t-3) (t-3)^{2}/2!$ (black). Their roles in the case $n=3$ are discussed further below.)
To understand what happens when $t$ crosses $1$, let's examine in detail the case $n=2$, where all the geometry happens in a plane. We may view the unit "cube" (now just a square) as a linear combination of quadrants, as shown here:
The first quadrant appears in the lower left panel, in gray. The value of $t$ is $1.5$, determining the diagonal line shown in all five panels. The CDF equals the yellow area shown at right. This yellow area is comprised of:
The triangular gray area in the lower left panel,
minus the triangular green area in the upper left panel,
minus the triangular red area in the low middle panel,
plus any blue area in the upper middle panel (but there isn't any such area, nor will there be until $t$ exceeds $2$).
Every one of these $2^n=4$ areas is the area of a triangle. The first one scales like $t^n=t^2$, the next two are zero for $t\lt 1$ and otherwise scale like $(t-1)^n = (t-1)^2$, and the last is zero for $t\lt 2$ and otherwise scales like $(t-2)^n$. This geometric analysis has established that the CDF is proportional to $\theta(t)t^2 - \theta(t-1)(t-1)^2 - \theta(t-1)(t-1)^2 + \theta(t-2)(t-2)^2$ = $\theta(t)t^2 - 2 \theta(t-1)(t-1)^2 + \theta(t-2)(t-2)^2$; equivalently, the PDF is proportional to the sum of the three functions $\theta(t)t$, $-2\theta(t-1)(t-1)$, and $\theta(t-2)(t-2)$ (each of them scaling linearly when $n=2$). The left panel of this figure shows their graphs: evidently, they are all versions of the original graph $\theta(t)t$, but (a) shifted by $0$, $1$, and $2$ units to the right and (b) rescaled by $1$, $-2$, and $1$, respectively.
The right panel shows the sum of these graphs (the solid black curve), normalized to have unit area: this is precisely the angular-looking PDF shown in the original question.
Now we can understand the nature of the "kinks" in the PDF of any sum of iid uniform variables. They are all exactly like the "kink" that occurs at $0$ in the function $\theta(t)t^{n-1}$, possibly rescaled, and shifted to the integers $1,2,\ldots, n$ corresponding to where the hyperplane $H_n(t)$ crosses the vertices of the hypercube. For $n=2$, this is a visible change in direction: the right derivative of $\theta(t)t$ at $0$ is $0$ while its left derivative is $1$. For $n=3$, this is a continuous change in direction, but a sudden (discontinuous) change in second derivative. For general $n$, there will be continuous derivatives through order $n-2$ but a discontinuity in the $n-1^\text{st}$ derivative.
Intuition from Algebraic Manipulation
The integration to compute the CF, the form of the conditional probability in the probabilistic analysis, and the synthesis of a hypercube as a linear combination of quadrants all suggest returning to the original uniform distribution and re-expressing it as a linear combination of simpler things. Indeed, its PDF can be written
$$f_X(x) = \theta(x) - \theta(x-1).$$
Let us introduce the shift operator $\Delta$: it acts on any function $f$ by shifting its graph one unit to the right:
$$(\Delta f)(x) = f(x-1).$$
Formally, then, for the PDF of a uniform variable $X$ we may write
$$f_X = (1 - \Delta)\theta.$$
The PDF of a sum of $n$ iid uniforms is the convolution of $f_X$ with itself $n$ times. This follows from the definition of a sum of random variables: the convolution of two functions $f$ and $g$ is the function
$$(f \star g)(x) = \int_{-\infty}^{\infty} f(x-y)g(y) dy.$$
It is easy to verify that convolution commutes with $\Delta$. Just change the variable of integration from $y$ to $y+1$:
$$\eqalign{
(f \star (\Delta g)) &= \int_{-\infty}^{\infty} f(x-y)(\Delta g)(y) dy \\
&= \int_{-\infty}^{\infty} f(x-y)g(y-1) dy \\
&= \int_{-\infty}^{\infty} f((x-1)-y)g(y) dy \\
&= (\Delta (f \star g))(x).
}$$
For the PDF of the sum of $n$ iid uniforms, we may now proceed algebraically to write
$$f = f_X^{\star n} = ((1 - \Delta)\theta)^{\star n} = (1-\Delta)^n \theta^{\star n}$$
(where the $\star n$ "power" denotes repeated convolution, not pointwise multiplication!). Now $\theta^{\star n}$ is a direct, elementary integration, giving
$$\theta^{\star n}(x) = \theta(x) \frac{x^{n-1}}{(n-1)!}.$$
The rest is algebra, because the Binomial Theorem applies (as it does in any commutative algebra over the reals):
$$f = (1-\Delta)^n \theta^{\star n} = \sum_{i=0}^{n} (-1)^i \binom{n}{i} \Delta^i \theta^{\star n}.$$
Because $\Delta^i$ merely shifts its argument by $i$, this exhibits the PDF $f$ as a linear combination of shifted versions of $\theta(x) x^{n-1}$, exactly as we deduced geometrically:
$$f(x) = \frac{1}{(n-1)!}\sum_{i=0}^{n} (-1)^i \binom{n}{i} (x-i)^{n-1}\theta(x-i).$$
(John Cook quotes this formula later in his blog post, using the notation $(x-i)^{n-1}_+$ for $(x-i)^{n-1}\theta(x-i)$.)
Accordingly, because $x^{n-1}$ is a smooth function everywhere, any singular behavior of the PDF will occur only at places where $\theta(x)$ is singular (obviously just $0$) and at those places shifted to the right by $1, 2, \ldots, n$. The nature of that singular behavior--the degree of smoothness--will therefore be the same at all $n+1$ locations.
Illustrating this is the picture for $n=8$, showing (in the left panel) the individual terms in the sum and (in the right panel) the partial sums, culminating in the sum itself (solid black curve):
Closing Comments
It is useful to note that this last approach has finally yielded a compact, practical expression for computing the PDF of a sum of $n$ iid uniform variables. (A formula for the CDF is similarly obtained.)
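That closed form is easy to evaluate numerically; here is a small R sketch of the formula above (the function name fsum is only for illustration; pmax(t - i, 0)^(n - 1) implements $(x-i)^{n-1}\theta(x-i)$ and so assumes $n \ge 2$), together with a simulation check:
# Closed-form PDF of a sum of n iid Uniform[0,1] variables, checked by simulation.
fsum <- function(x, n) {
  i <- 0:n
  sapply(x, function(t)
    sum((-1)^i * choose(n, i) * pmax(t - i, 0)^(n - 1)) / factorial(n - 1))
}
n <- 3
curve(fsum(x, n), from = 0, to = n, ylab = "density", main = "Sum of 3 uniforms")
z <- rowSums(matrix(runif(1e5 * n), ncol = n))
lines(density(z), col = 2, lty = 2)   # kernel density of simulated sums, for comparison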
The Central Limit Theorem has little to say here. After all, a sum of iid Binomial variables converges to a Normal distribution, but that sum is always discrete: it never even has a PDF at all! We should not hope for any intuition about "kinks" or other measures of differentiability of a PDF to come from the CLT. | Consider the sum of $n$ uniform distributions on $[0,1]$, or $Z_n$. Why does the cusp in the PDF of | We can take various approaches to this, any of which may seem intuitive to some people and less than intuitive to others. To accommodate such variation, this answer surveys several such approaches, c | Consider the sum of $n$ uniform distributions on $[0,1]$, or $Z_n$. Why does the cusp in the PDF of $Z_n$ disappear for $n \geq 3$?
We can take various approaches to this, any of which may seem intuitive to some people and less than intuitive to others. To accommodate such variation, this answer surveys several such approaches, covering the major divisions of mathematical thought--analysis (the infinite and the infinitesimal), geometry/topology (spatial relationships), and algebra (formal patterns of symbolic manipulation)--as well as probability itself. It culminates in an observation that unifies all four approaches, demonstrates there is a genuine question to be answered here, and shows exactly what the issue is. Each approach provides, in its own way, deeper insight into the nature of the shapes of the probability distribution functions of sums of independent uniform variables.
Background
The Uniform $[0,1]$ distribution has several basic descriptions. When $X$ has such a distribution,
The chance that $X$ lies in a measurable set $A$ is just the measure (length) of $A \cap [0,1]$, written $|A \cap [0,1]|$.
From this it is immediate that the cumulative distribution function (CDF) is
$$F_X(x) = \Pr(X \le x) = |(-\infty, x] \cap [0,1]| = |[0,\min(x,1)]| = \begin{cases}
0 & x\lt 0 \\
x & 0\leq x\leq 1 \\
1 & x\gt 1.
\end{cases}$$
The probability density function (PDF), which is the derivative of the CDF, is $f_X(x) = 1$ for $0 \le x \le 1$ and $f_X(x)=0$ otherwise. (It is undefined at $0$ and $1$.)
Intuition from Characteristic Functions (Analysis)
The characteristic function (CF) of any random variable $X$ is the expectation of $\exp(i t X)$ (where $i$ is the imaginary unit, $i^2=-1$). Using the PDF of a uniform distribution we can compute
$$\phi_X(t) = \int_{-\infty}^\infty \exp(i t x) f_X(x) dx = \int_0^1 \exp(i t x) dx = \left. \frac{\exp(itx)}{it} \right|_{x=0}^{x=1} = \frac{\exp(it)-1}{it}.$$
The CF is a (version of the) Fourier transform of the PDF, $\phi(t) = \hat{f}(t)$. The most basic theorems about Fourier transforms are:
The CF of a sum of independent variables $X+Y$ is the product of their CFs.
When the original PDF $f$ is continuous and $X$ is bounded, $f$ can be recovered from the CF $\phi$ by a closely related version of the Fourier transform,
$$f(x) = \check{\phi}(x) = \frac{1}{2\pi} \int_{-\infty}^\infty \exp(-i x t) \phi(t) dt.$$
When $f$ is differentiable, its derivative can be computed under the integral sign:
$$f'(x) = \frac{d}{dx} \frac{1}{2\pi} \int_{-\infty}^\infty \exp(-i x t) \phi(t) dt = \frac{-i}{2\pi} \int_{-\infty}^\infty t \exp(-i x t) \phi(t) dt.$$
For this to be well-defined, the last integral must converge absolutely; that is,
$$\int_{-\infty}^\infty |t \exp(-i x t) \phi(t)| dt = \int_{-\infty}^\infty |t| |\phi(t)| dt$$
must converge to a finite value. Conversely, when it does converge, the derivative exists everywhere by virtue of these inversion formulas.
It is now clear exactly how differentiable the PDF for a sum of $n$ uniform variables is: from the first bullet, the CF of the sum of iid variables is the CF of one of them raised to the $n^\text{th}$ power, here equal to $(\exp(i t) - 1)^n / (i t)^n$. The numerator is bounded (it consists of sine waves) while the denominator is $O(t^{n})$. We can multiply such an integrand by $t^{s}$ and it will still converge absolutely when $s \lt n-1$ and converge conditionally when $s = n-1$. Thus, repeated application of the third bullet shows that the PDF for the sum of $n$ uniform variates will be continuously $n-2$ times differentiable and, in most places, it will be $n-1$ times differentiable.
The blue shaded curve is a log-log plot of the absolute value of the real part of the CF of the sum of $n=10$ iid uniform variates. The dashed red line is an asymptote; its slope is $-10$, showing that the PDF is $10 - 2 = 8$ times differentiable. For reference, the gray curve plots the real part of the CF for a similarly shaped Gaussian function (a normal PDF).
Intuition from Probability
Let $Y$ and $X$ be independent random variables where $X$ has a Uniform $[0,1]$ distribution. Consider a narrow interval $(t, t+dt]$. We decompose the chance that $X+Y \in (t, t+dt]$ into the chance that $Y$ is sufficiently close to this interval times the chance that $X$ is just the right size to place $X+Y$ in this interval, given that $Y$ is close enough:
$$\eqalign{
f_{X+Y}(t) dt = &\Pr(X+Y\in (t,t+dt])\\
& = \Pr(X+Y\in (t,t+dt] | Y \in (t-1, t+dt]) \Pr(Y \in (t-1, t+dt]) \\
& = \Pr(X \in (t-Y, t-Y+dt] | Y \in (t-1, t+dt]) \left(F_Y(t+dt) - F_Y(t-1)\right) \\
& = 1 dt \left(F_Y(t+dt) - F_Y(t-1)\right).
}$$
The final equality comes from the expression for the PDF of $X$. Dividing both sides by $dt$ and taking the limit as $dt\to 0$ gives
$$f_{X+Y}(t) = F_Y(t) - F_Y(t-1).$$
In other words, adding a Uniform $[0,1]$ variable $X$ to any variable $Y$ changes the pdf $f_Y$ into a differenced CDF $F_Y(t) - F_Y(t-1)$. Because the PDF is the derivative of the CDF, this implies that each time we add an independent uniform variable to $Y$, the resulting PDF is one time more differentiable than before.
Let's apply this insight, starting with a uniform variable $Y$. The original PDF is not differentiable at $0$ or $1$: it is discontinuous there. The PDF of $Y+X$ is not differentiable at $0$, $1$, or $2$, but it must be continuous at those points, because it is the difference of integrals of the PDF of $Y$. Add another independent uniform variable $X_2$: the PDF of $Y+X+X_2$ is differentiable at $0$,$1$,$2$, and $3$--but it does not necessarily have second derivatives at those points. And so on.
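A small Monte Carlo check of the differenced-CDF relation, taking both variables uniform (an R sketch; the point t0 and the interval width dt are arbitrary):
set.seed(1)
x <- runif(1e6)                 # X ~ Uniform[0, 1]
y <- runif(1e6)                 # an independent Y, here also Uniform[0, 1]
t0 <- 0.7; dt <- 0.01
mean(x + y > t0 & x + y <= t0 + dt) / dt   # empirical density of X + Y near t0
punif(t0) - punif(t0 - 1)                  # F_Y(t0) - F_Y(t0 - 1)
Both numbers come out near 0.7, as the relation predicts for the triangular density at that point.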
Intuition from Geometry
The CDF at $t$ of a sum of $n$ iid uniform variates equals the volume of the unit hypercube $[0,1]^n$ lying within the half-space $x_1+x_2+\cdots+x_n \le t$. The situation for $n=3$ variates is shown here, with $t$ set at $1/2$, $3/2$, and then $5/2$.
As $t$ progresses from $0$ through $n$, the hyperplane $H_n(t): x_1+x_2+\cdots+x_n=t$ crosses vertices at $t=0$, $t=1, \ldots, t=n$. At each time the shape of the cross section changes: in the figure it first is a triangle (a $2$-simplex), then a hexagon, then a triangle again. Why doesn't the PDF have sharp bends at these values of $t$?
To understand this, first consider small values of $t$. Here, the hyperplane $H_n(t)$ cuts off an $n-1$-simplex. All $n-1$ dimensions of the simplex are directly proportional to $t$, whence its "area" is proportional to $t^{n-1}$. Some notation for this will come in handy later. Let $\theta$ be the "unit step function,"
$$\theta(x) = \left\{
\begin{array}{ll}
0 & x \lt 0 \\
1 & x\ge 0.
\end{array}\right. $$
If it were not for the presence of the other corners of the hypercube, this scaling would continue indefinitely. A plot of the area of the $n-1$-simplex would look like the solid blue curve below: it is zero at negative values and equals $t^{n-1}/(n-1)!$ at positive values, conveniently written $\theta(t) t^{n-1}/(n-1)!$. It has a "kink" of order $n-2$ at the origin, in the sense that all derivatives through order $n-3$ exist and are continuous, but that left and right derivatives of order $n-2$ exist but do not agree at the origin.
(The other curves shown in this figure are $-3\theta(t-1) (t-1)^{2}/2!$ (red), $3\theta(t-2) (t-2)^{2}/2!$ (gold), and $-\theta(t-3) (t-3)^{2}/2!$ (black). Their roles in the case $n=3$ are discussed further below.)
To understand what happens when $t$ crosses $1$, let's examine in detail the case $n=2$, where all the geometry happens in a plane. We may view the unit "cube" (now just a square) as a linear combination of quadrants, as shown here:
The first quadrant appears in the lower left panel, in gray. The value of $t$ is $1.5$, determining the diagonal line shown in all five panels. The CDF equals the yellow area shown at right. This yellow area is comprised of:
The triangular gray area in the lower left panel,
minus the triangular green area in the upper left panel,
minus the triangular red area in the low middle panel,
plus any blue area in the upper middle panel (but there isn't any such area, nor will there be until $t$ exceeds $2$).
Every one of these $2^n=4$ areas is the area of a triangle. The first one scales like $t^n=t^2$, the next two are zero for $t\lt 1$ and otherwise scale like $(t-1)^n = (t-1)^2$, and the last is zero for $t\lt 2$ and otherwise scales like $(t-2)^n$. This geometric analysis has established that the CDF is proportional to $\theta(t)t^2 - \theta(t-1)(t-1)^2 - \theta(t-1)(t-1)^2 + \theta(t-2)(t-2)^2$ = $\theta(t)t^2 - 2 \theta(t-1)(t-1)^2 + \theta(t-2)(t-2)^2$; equivalently, the PDF is proportional to the sum of the three functions $\theta(t)t$, $-2\theta(t-1)(t-1)$, and $\theta(t-2)(t-2)$ (each of them scaling linearly when $n=2$). The left panel of this figure shows their graphs: evidently, they are all versions of the original graph $\theta(t)t$, but (a) shifted by $0$, $1$, and $2$ units to the right and (b) rescaled by $1$, $-2$, and $1$, respectively.
The right panel shows the sum of these graphs (the solid black curve, normalized to have unit area): this is precisely the angular-looking PDF shown in the original question.
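A tiny R sketch (the helper names theta and f2 are mine) confirms that this three-term combination reproduces the familiar triangular density of the sum of two uniforms:
theta <- function(x) as.numeric(x >= 0)   # unit step function
f2 <- function(t) theta(t) * t - 2 * theta(t - 1) * (t - 1) + theta(t - 2) * (t - 2)
t_values <- c(0.25, 0.9, 1.5, 1.75)
f2(t_values)                    # the shifted-and-rescaled sum
pmin(t_values, 2 - t_values)    # triangular density of the sum of two uniforms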
Now we can understand the nature of the "kinks" in the PDF of any sum of iid uniform variables. They are all exactly like the "kink" that occurs at $0$ in the function $\theta(t)t^{n-1}$, possibly rescaled, and shifted to the integers $1,2,\ldots, n$ corresponding to where the hyperplane $H_n(t)$ crosses the vertices of the hypercube. For $n=2$, this is a visible change in direction: the right derivative of $\theta(t)t$ at $0$ is $0$ while its left derivative is $1$. For $n=3$, this is a continuous change in direction, but a sudden (discontinuous) change in second derivative. For general $n$, there will be continuous derivatives through order $n-2$ but a discontinuity in the $n-1^\text{st}$ derivative.
Intuition from Algebraic Manipulation
The integration to compute the CF, the form of the conditional probability in the probabilistic analysis, and the synthesis of a hypercube as a linear combination of quadrants all suggest returning to the original uniform distribution and re-expressing it as a linear combination of simpler things. Indeed, its PDF can be written
$$f_X(x) = \theta(x) - \theta(x-1).$$
Let us introduce the shift operator $\Delta$: it acts on any function $f$ by shifting its graph one unit to the right:
$$(\Delta f)(x) = f(x-1).$$
Formally, then, for the PDF of a uniform variable $X$ we may write
$$f_X = (1 - \Delta)\theta.$$
The PDF of a sum of $n$ iid uniforms is the convolution of $f_X$ with itself $n$ times. This follows from the definition of a sum of random variables: the convolution of two functions $f$ and $g$ is the function
$$(f \star g)(x) = \int_{-\infty}^{\infty} f(x-y)g(y) dy.$$
It is easy to verify that convolution commutes with $\Delta$. Just change the variable of integration from $y$ to $y+1$:
$$\eqalign{
(f \star (\Delta g)) &= \int_{-\infty}^{\infty} f(x-y)(\Delta g)(y) dy \\
&= \int_{-\infty}^{\infty} f(x-y)g(y-1) dy \\
&= \int_{-\infty}^{\infty} f((x-1)-y)g(y) dy \\
&= (\Delta (f \star g))(x).
}$$
For the PDF of the sum of $n$ iid uniforms, we may now proceed algebraically to write
$$f = f_X^{\star n} = ((1 - \Delta)\theta)^{\star n} = (1-\Delta)^n \theta^{\star n}$$
(where the $\star n$ "power" denotes repeated convolution, not pointwise multiplication!). Now $\theta^{\star n}$ is a direct, elementary integration, giving
$$\theta^{\star n}(x) = \theta(x) \frac{x^{n-1}}{(n-1)!}.$$
The rest is algebra, because the Binomial Theorem applies (as it does in any commutative algebra over the reals):
$$f = (1-\Delta)^n \theta^{\star n} = \sum_{i=0}^{n} (-1)^i \binom{n}{i} \Delta^i \theta^{\star n}.$$
Because $\Delta^i$ merely shifts its argument by $i$, this exhibits the PDF $f$ as a linear combination of shifted versions of $\theta(x) x^{n-1}$, exactly as we deduced geometrically:
$$f(x) = \frac{1}{(n-1)!}\sum_{i=0}^{n} (-1)^i \binom{n}{i} (x-i)^{n-1}\theta(x-i).$$
(John Cook quotes this formula later in his blog post, using the notation $(x-i)^{n-1}_+$ for $(x-i)^{n-1}\theta(x-i)$.)
Accordingly, because $x^{n-1}$ is a smooth function everywhere, any singular behavior of the PDF will occur only at places where $\theta(x)$ is singular (obviously just $0$) and at those places shifted to the right by $1, 2, \ldots, n$. The nature of that singular behavior--the degree of smoothness--will therefore be the same at all $n+1$ locations.
Illustrating this is the picture for $n=8$, showing (in the left panel) the individual terms in the sum and (in the right panel) the partial sums, culminating in the sum itself (solid black curve):
Closing Comments
It is useful to note that this last approach has finally yielded a compact, practical expression for computing the PDF of a sum of $n$ iid uniform variables. (A formula for the CDF is similarly obtained.)
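For instance, a short R sketch of that expression (the function name f_sum_unif is mine), checked against a simulated histogram:
theta <- function(x) as.numeric(x >= 0)   # unit step function
f_sum_unif <- function(x, n) {
  # f(x) = (1/(n-1)!) * sum_i (-1)^i * C(n, i) * (x - i)^(n-1) * theta(x - i)
  i <- 0:n
  sapply(x, function(xx)
    sum((-1)^i * choose(n, i) * (xx - i)^(n - 1) * theta(xx - i)) / factorial(n - 1))
}
n <- 8                                        # the same n as in the figure above
sims <- replicate(1e5, sum(runif(n)))
hist(sims, breaks = 100, freq = FALSE)        # simulated sums of n uniforms
curve(f_sum_unif(x, n), add = TRUE, lwd = 2)  # closed-form PDF overlaid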
The Central Limit Theorem has little to say here. After all, a sum of iid Binomial variables converges to a Normal distribution, but that sum is always discrete: it never even has a PDF at all! We should not hope for any intuition about "kinks" or other measures of differentiability of a PDF to come from the CLT. | Consider the sum of $n$ uniform distributions on $[0,1]$, or $Z_n$. Why does the cusp in the PDF of
We can take various approaches to this, any of which may seem intuitive to some people and less than intuitive to others. To accommodate such variation, this answer surveys several such approaches, c |
4,478 | Consider the sum of $n$ uniform distributions on $[0,1]$, or $Z_n$. Why does the cusp in the PDF of $Z_n$ disappear for $n \geq 3$? | I think the more surprising thing is that you get the sharp peak for $n=2$.
The Central Limit Theorem says that for large enough sample sizes the distribution of the mean (and the sum is just the mean times $n$, a fixed constant for each graph) will be approximately normal. It turns out that the uniform distribution is really well behaved with respect to the CLT (symmetric, no heavy tails (well not much of any tails), no possibility of outliers), so for the uniform the sample size needed to be "large enough" is not very big (around 5 or 6 for a good approximation), you are already seeing the OK approximation at $n=3$. | Consider the sum of $n$ uniform distributions on $[0,1]$, or $Z_n$. Why does the cusp in the PDF of | I think the more surprising thing is that you get the sharp peak for $n=2$.
The Central Limit Theorem says that for large enough sample sizes the distribution of the mean (and the sum is just the me | Consider the sum of $n$ uniform distributions on $[0,1]$, or $Z_n$. Why does the cusp in the PDF of $Z_n$ disappear for $n \geq 3$?
I think the more surprising thing is that you get the sharp peak for $n=2$.
The Central Limit Theorem says that for large enough sample sizes the distribution of the mean (and the sum is just the mean times $n$, a fixed constant for each graph) will be approximately normal. It turns out that the uniform distribution is really well behaved with respect to the CLT (symmetric, no heavy tails (well not much of any tails), no possibility of outliers), so for the uniform the sample size needed to be "large enough" is not very big (around 5 or 6 for a good approximation), you are already seeing the OK approximation at $n=3$. | Consider the sum of $n$ uniform distributions on $[0,1]$, or $Z_n$. Why does the cusp in the PDF of
I think the more surprising thing is that you get the sharp peak for $n=2$.
The Central Limit Theorem says that for large enough sample sizes the distribution of the mean (and the sum is just the me |
4,479 | Consider the sum of $n$ uniform distributions on $[0,1]$, or $Z_n$. Why does the cusp in the PDF of $Z_n$ disappear for $n \geq 3$? | You could argue that the probability density function of a uniform random variable is finite,
so its integral the cumulative density function of a uniform random variable is continuous,
so the probability density function of the sum of two uniform random variables is continuous,
so its integral the cumulative density function of the sum of two uniform random variables is smooth (continuously differentiable),
so the probability density function of the sum of three uniform random variables is smooth. | Consider the sum of $n$ uniform distributions on $[0,1]$, or $Z_n$. Why does the cusp in the PDF of | You could argue that the probability density function of a uniform random variable is finite,
so its integral the cumulative density function of a uniform random variable is continuous,
so the proba | Consider the sum of $n$ uniform distributions on $[0,1]$, or $Z_n$. Why does the cusp in the PDF of $Z_n$ disappear for $n \geq 3$?
You could argue that the probability density function of a uniform random variable is finite,
so its integral the cumulative density function of a uniform random variable is continuous,
so the probability density function of the sum of two uniform random variables is continuous,
so its integral the cumulative density function of the sum of two uniform random variables is smooth (continuously differentiable),
so the probability density function of the sum of three uniform random variables is smooth. | Consider the sum of $n$ uniform distributions on $[0,1]$, or $Z_n$. Why does the cusp in the PDF of
You could argue that the probability density function of a uniform random variable is finite,
so its integral the cumulative density function of a uniform random variable is continuous,
so the proba |
4,480 | How do R and Python complement each other in data science? | They are complementary. It is true that both can do the same things, yet this can be said of most languages. Each has its strengths and weaknesses. The common outlook seems to be that Python is best for data gathering and preparation, as well as for textual analysis. R is considered best for the data analysis, as it is a statistical language first and foremost.
R has a smorgasbord of packages for anything you can think of, but its staple is statistical analysis - from basic chi-square to factor analysis and hazard models, it is easy and robust. Some of the biggest names in statistics create R packages, and it has a lively community to help with your every need. ggplot2 is a standard in data visualization (graphs etc..). R is a vectorized language and built to loop through data efficiently. It also stores all data in the RAM, which is a double-edged sword - it is snappy on smaller data sets (although some might argue with me), but it can't handle big data well (although it has packages to bypass this, such as ff).
Python is considerably easier to learn than R - especially for those who have previous programming experience. R is just... weird. Python is great at data retrieval, and is the language to use for web scraping (with the amazing beautifulsoup). Python is known for its strength in string parsing and text manipulation. pandas is a great library for data manipulation, merging, transforming, etc., and is fast (and probably inspired by R).
Python is great when you need to do some programming. This is not surprising as it is a general-purpose language. R, however, with all its extensions, was built by statisticians for statisticians. So while Python may be easier and better and faster at many applications, R would be the go-to platform for statistical analysis. | How do R and Python complement each other in data science? | They are complementary. It is true that both can do the same things, yet this can be said of most languages. Each has its strengths and weaknesses. The common outlook seems to be that Python is best f | How do R and Python complement each other in data science?
They are complementary. It is true that both can do the same things, yet this can be said of most languages. Each has its strengths and weaknesses. The common outlook seems to be that Python is best for data gathering and preparation, as well as for textual analysis. R is considered best for the data analysis, as it is a statistical language first and foremost.
R has a smorgasbord of packages for anything you can think of, but its staple is statistical analysis - from basic chi-square to factor analysis and hazard models, it is easy and robust. Some of the biggest names in statistics create R packages, and it has a lively community to help with your every need. ggplot2 is a standard in data visualization (graphs etc..). R is a vectorized language and built to loop through data efficiently. It also stores all data in the RAM, which is a double-edged sword - it is snappy on smaller data sets (although some might argue with me), but it can't handle big data well (although it has packages to bypass this, such as ff).
Python is considerably easier to learn than R - especially for those who have previous programming experience. R is just... weird. Python is great at data retrieval, and is the language to use for web scraping (with the amazing beautifulsoup). Python is known for its strength in string parsing and text manipulation. pandas is a great library for data manipulation, merging, transforming, etc., and is fast (and probably inspired by R).
Python is great when you need to do some programming. This is not surprising as it is a general-purpose language. R, however, with all its extensions, was built by statisticians for statisticians. So while Python may be easier and better and faster at many applications, R would be the go-to platform for statistical analysis. | How do R and Python complement each other in data science?
They are complementary. It is true that both can do the same things, yet this can be said of most languages. Each has its strengths and weaknesses. The common outlook seems to be that Python is best f |
4,481 | How do R and Python complement each other in data science? | I will try to formulate an answer touching the main points where the two languages come into play for data science / statistics / data analysis and the like, as someone who uses both.
The workflow in data analysis generally consists of the following steps:
Fetching the data from some sort of source (most likely a SQL/noSQL database or .csv files).
Parsing the data in a decent and reasonable format (data frame) so that one can do operations and think thereupon.
Applying some functions to the data (grouping, deleting, merging, renaming).
Applying some sort of model to the data (regression, clustering, a neural network or any other more or less complicated theory).
Deploying / presenting your results to a more-or-less technical audience.
Fetching data
99% of the time, the process of fetching the data comes down to querying some sort of SQL or Impala database: both Python and R have specific clients or libraries that do the job in no time and equally well (RImpala, RmySQL for R and MySQLdb for Python work smoothly, not really much to add). When it comes to reading external .csv files, the data.table package for R provides the function fread that reads in huge and complicated .csv files with any custom parsing option in no time, and transforms the result directly into data frames with column names and row numbers.
Organising the data frames
We want the data to be stored in some sort of table so that we can access any single entry, row or column with ease.
The R package data.table provides unbeatable ways to label, rename, delete and access the data. The standard syntax is very much SQL-like as dt[i, j, fun_by], where that is intended to be dt[where_condition, select_column, grouped_by (or the like)]; custom user-defined functions can be put in there as well as in the j clause, so that you are completely free to manipulate the data and apply any complicated or fancy function on groups or subsets (like take the i-th row, k-th element and sum it to the (k-2)-th element of the (i-1)-th row if and only if the standard deviation of the entire column is what-it-is, grouped by the last column altogether). Have a look at the benchmarks and at this other amazing question on SO. Sorting, deleting and re-naming of columns and rows do what they have to do, and the standard vectorised R methods apply, sapply, lapply, ifelse perform vectorised operations on columns and data frames altogether, without looping through each element (remember that whenever you are using loops in R you are doing it badly wrong).
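A minimal sketch of the dt[i, j, by] pattern just described (the column names are made up):
library(data.table)
dt <- data.table(group = rep(c("a", "b"), each = 5), value = rnorm(10))
# "where" condition in i, computed summaries in j, grouping in by:
dt[value > -1, .(mean_value = mean(value), n_obs = .N), by = group]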
Python's counterweapon is the pandas library. It finally provides the structure pd.DataFrame (that standard Python lacks, for some reason still unknown to me) that treats the data for what they are, namely frames of data (instead of some numpy array, numpy list, numpy matrix or whatever). Operations like grouping, re-naming, sorting and the like can be easily achieved and here, too, the user can apply any custom function to a grouped dataset or subset of the frame using Python apply or lambda. I personally dislike the grammar df[df.iloc(...)] to access the entries, but that is just personal taste and no problem at all. Benchmarks for grouping operations are still slightly worse than R data.table but unless you want to save 0.02 seconds for compilation there is no big difference in performance.
Strings
The R way to treat strings is to use the stringr package that allows any text manipulation, anagram, regular expression, trailing white spaces or similar with ease. It can also be used in combination with JSON libraries that unpack JSON dictionaries and unlist their elements, so that one has a final data frame where the column names and the elements are what they have to be, without any non-UTF8 character or white space in there.
Python's Pandas .str. does the same job of playing with regular expressions, trailing or else as good as its competitor, so even here no big difference in taste.
Applying models
Here is where, in my opinion, differences between the two languages arise.
R has, as of today, an unbeatable set of libraries that allow the user to essentially do anything they want in one to two lines of code. Standard functional or polynomial regressions are performed in one-liners and produce outputs whose coefficients are easily readable, accompanied by their corresponding confidence intervals and p-values distributions. Likewise for clustering, likewise for random forest models, likewise for dendograms, principal component analysis, singular value decompositions, logistic fits and many more. The output for each of the above most likely comes with a specific plotting class that generates visualisations of what you have just done, with colours and bubbles for coefficients and parameters. Hypotheses tests, statistical tests, Shapiro, Kruskal-Wallis or the like can be performed in one line of code by means of appropriate libraries.
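As a small illustration of the kind of one-liner meant here (a sketch with simulated data):
set.seed(1)
df <- data.frame(x = runif(50))
df$y <- 1 + 2 * df$x + 0.5 * df$x^2 + rnorm(50, sd = 0.1)
# quadratic regression with readable estimates, standard errors and p-values
summary(lm(y ~ poly(x, 2, raw = TRUE), data = df))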
Python is trying to keep up with SciPy and scikit-learn. Most of the standard analysis and models are available as well, but they are slightly longer to code and less intuitive to read (in my opinion). More complicated machinery is missing, although some of it can be assembled from combinations of the already existing libraries. One thing that I prefer doing in Python rather than in R is bag-of-words text analysis with bi-grams, tri-grams and higher orders.
Presenting the results
Both languages have beautiful plotting tools, R ggplot2 above all and the corresponding Python equivalent. Not really much to compete, they do the job safe and sound, although I believe that if you are presenting the results you may have to use other tools—there are fancy colourful design tools out there and neither Python nor R are meant to astonish the audience with fancy red-and-green drag and drops. R has lately published a lot of improvements on its shiny app features, that basically allow it to produce interactive outputs. I never wanted to learn it, but I know it is there and people use it well.
Side note
As a side note I would like to emphasise that the major difference between the two languages is that Python is a general-purpose programming language, made by and for computer science, portability, deployments and so on and so forth. It is awesome at what it does and is straightforward to learn; there is nobody who does not like Python. But it is a programming language to do programming.
R, on the other hand, was invented by and for mathematicians, physicists, statisticians and data scientists. If you come from that background everything makes perfect sense because it perfectly mirrors and reproduces the concepts used in statistics and mathematics. But if, instead, you come from a computer science background and want to simulate Java or C in R you are going to be disappointed; it does not have "objects" in the standard sense (well, it does, but not what one typically thinks they are...), it does not have classes in the standard sense (well, it does, but not what one typically thinks they are...), it does not have "pointers" or all other computer science structures - but just because it does not need them. Last but not the least: documentation and packages are straightforward to create and read (if you are using Rstudio); there is a large and passionate community out there, and it takes literally five seconds to Google "how to do insert-random-problem in R" whose first entry redirects you to a solution to the problem (done by someone else) with corresponding code, in no time.
Most industrial companies have their infrastructure built in Python (or a Python-friendly environment) that allows easy integration of Python code (just import myAnalysis anywhere and you are basically done). However, any modern technology or server or platform easily runs background R code without any problem as well. | How do R and Python complement each other in data science? | I will try to formulate an answer touching the main points where the two languages come into play for data science / statistics / data analysis and the like, as someone who uses both.
The workflow in | How do R and Python complement each other in data science?
I will try to formulate an answer touching the main points where the two languages come into play for data science / statistics / data analysis and the like, as someone who uses both.
The workflow in data analysis generally consists of the following steps:
Fetching the data from some sort of source (most likely a SQL/noSQL database or .csv files).
Parsing the data in a decent and reasonable format (data frame) so that one can do operations and think thereupon.
Applying some functions to the data (grouping, deleting, merging, renaming).
Applying some sort of model to the data (regression, clustering, a neural network or any other more or less complicated theory).
Deploying / presenting your results to a more-or-less technical audience.
Fetching data
99% of the time, the process of fetching the data comes down to querying some sort of SQL or Impala database: both Python and R have specific clients or libraries that do the job in no time and equally well (RImpala, RmySQL for R and MySQLdb for Python work smoothly, not really much to add). When it comes to reading external .csv files, the data.table package for R provides the function fread that reads in huge and complicated .csv files with any custom parsing option in no time, and transforms the result directly into data frames with column names and row numbers.
Organising the data frames
We want the data to be stored in some sort of table so that we can access any single entry, row or column with ease.
The R package data.table provides unbeatable ways to label, rename, delete and access the data. The standard syntax is very much SQL-like as dt[i, j, fun_by], where that is intended to be dt[where_condition, select_column, grouped_by (or the like)]; custom user-defined functions can be put in there as well as in the j clause, so that you are completely free to manipulate the data and apply any complicated or fancy function on groups or subsets (like take the i-th row, k-th element and sum it to the (k-2)-th element of the (i-1)-th row if and only if the standard deviation of the entire column is what-it-is, grouped by the last column altogether). Have a look at the benchmarks and at this other amazing question on SO. Sorting, deleting and re-naming of columns and rows do what they have to do, and the standard vectorised R methods apply, sapply, lapply, ifelse perform vectorised operations on columns and data frames altogether, without looping through each element (remember that whenever you are using loops in R you are doing it badly wrong).
Python's counterweapon is the pandas library. It finally provides the structure pd.DataFrame (that standard Python lacks, for some reason still unknown to me) that treats the data for what they are, namely frames of data (instead of some numpy array, numpy list, numpy matrix or whatever). Operations like grouping, re-naming, sorting and the like can be easily achieved and here, too, the user can apply any custom function to a grouped dataset or subset of the frame using Python apply or lambda. I personally dislike the grammar df[df.iloc(...)] to access the entries, but that is just personal taste and no problem at all. Benchmarks for grouping operations are still slightly worse than R data.table but unless you want to save 0.02 seconds for compilation there is no big difference in performance.
Strings
The R way to treat strings is to use the stringr package that allows any text manipulation, anagram, regular expression, trailing white spaces or similar with ease. It can also be used in combination with JSON libraries that unpack JSON dictionaries and unlist their elements, so that one has a final data frame where the column names and the elements are what they have to be, without any non-UTF8 character or white space in there.
Python's Pandas .str. does the same job of playing with regular expressions, trailing or else as good as its competitor, so even here no big difference in taste.
Applying models
Here is where, in my opinion, differences between the two languages arise.
R has, as of today, an unbeatable set of libraries that allow the user to essentially do anything they want in one to two lines of code. Standard functional or polynomial regressions are performed in one-liners and produce outputs whose coefficients are easily readable, accompanied by their corresponding confidence intervals and p-values distributions. Likewise for clustering, likewise for random forest models, likewise for dendograms, principal component analysis, singular value decompositions, logistic fits and many more. The output for each of the above most likely comes with a specific plotting class that generates visualisations of what you have just done, with colours and bubbles for coefficients and parameters. Hypotheses tests, statistical tests, Shapiro, Kruskal-Wallis or the like can be performed in one line of code by means of appropriate libraries.
Python is trying to keep up with SciPy and scikit-learn. Most of the standard analysis and models are available as well, but they are slightly longer to code and less-intuitive to read (in my opinion). More complicated machineries are missing, although some can be traced back to some combinations of the already existing libraries. One thing that I prefer doing in Python rather than in R is bag-of-word text analysis with bi-grams, tri-grams and higher orders.
Presenting the results
Both languages have beautiful plotting tools, R ggplot2 above all and the corresponding Python equivalent. Not really much to compete, they do the job safe and sound, although I believe that if you are presenting the results you may have to use other tools—there are fancy colourful design tools out there and neither Python nor R are meant to astonish the audience with fancy red-and-green drag and drops. R has lately published a lot of improvements on its shiny app features, that basically allow it to produce interactive outputs. I never wanted to learn it, but I know it is there and people use it well.
Side note
As a side note I would like to emphasise that the major difference between the two languages is that Python is a general-purpose programming language, made by and for computer science, portability, deployments and so on and so forth. It is awesome at what it does and is straightforward to learn; there is nobody who does not like Python. But it is a programming language to do programming.
R, on the other hand, was invented by and for mathematicians, physicists, statisticians and data scientists. If you come from that background everything makes perfect sense because it perfectly mirrors and reproduces the concepts used in statistics and mathematics. But if, instead, you come from a computer science background and want to simulate Java or C in R you are going to be disappointed; it does not have "objects" in the standard sense (well, it does, but not what one typically thinks they are...), it does not have classes in the standard sense (well, it does, but not what one typically thinks they are...), it does not have "pointers" or all other computer science structures - but just because it does not need them. Last but not the least: documentation and packages are straightforward to create and read (if you are using Rstudio); there is a large and passionate community out there, and it takes literally five seconds to Google "how to do insert-random-problem in R" whose first entry redirects you to a solution to the problem (done by someone else) with corresponding code, in no time.
Most industrial companies have their infrastructure built in Python (or a Python-friendly environment) that allows easy integration of Python code (just import myAnalysis anywhere and you are basically done). However, any modern technology or server or platform easily runs background R code without any problem as well. | How do R and Python complement each other in data science?
I will try to formulate an answer touching the main points where the two languages come into play for data science / statistics / data analysis and the like, as someone who uses both.
The workflow in |
4,482 | How do R and Python complement each other in data science? | Python is a general programming language: therefore, it is good for doing many other tasks in addition to data analysis. For example, if we want to automate our model execution in production server, then python is a really good choice. Other examples include connecting to hardware/sensors to read data, interacting with databases (relational or non-structured data like JSON), parsing data, network programming (TCP/IP), graphical user interface, interacting with shell, etc. (Well, why would a data scientist want to do so many of these kinds of task, which have little to do with predictive models? I think people have different definitions What is a data scientist? In some organizations, parsing the data and doing the descriptive analysis with dashboard is good enough for business and the data is not mature enough for doing predictive models. On the other hand, in many small companies, people may expect data scientists to do lots of software engineering. Knowing python will make you independent of other software engineers.)
R has a lot of statistical packages that are much better than python or MATLAB. By using R, one can really think in model level instead of implementation detail level. This is a huge advantage in developing statistical models. For example, many people are manually implementing neural networks in python; doing such work may not help to understand why neural networks work, but just following the recipe to duplicate others' work to check if it works. If we are working in R, we can easily focus on the math behind the model, instead of implementation details.
In many cases, people use them together. Building software is easy to do in python, and building models is better in R. If we want to deliver a model in production but not a paper, we may need both. If your company has a lot of software engineers, you may need more R. And if your company has a lot of research scientists, you may need more python. | How do R and Python complement each other in data science? | Python is a general programming language: therefore, it is good for doing many other tasks in addition to data analysis. For example, if we want to automate our model execution in production server, t | How do R and Python complement each other in data science?
Python is a general programming language: therefore, it is good for doing many other tasks in addition to data analysis. For example, if we want to automate our model execution in production server, then python is a really good choice. Other examples include connecting to hardware/sensors to read data, interacting with databases (relational or non-structured data like JSON), parsing data, network programming (TCP/IP), graphical user interface, interacting with shell, etc. (Well, why would a data scientist want to do so many of these kinds of task, which have little to do with predictive models? I think people have different definitions What is a data scientist? In some organizations, parsing the data and doing the descriptive analysis with dashboard is good enough for business and the data is not mature enough for doing predictive models. On the other hand, in many small companies, people may expect data scientists to do lots of software engineering. Knowing python will make you independent of other software engineers.)
R has a lot of statistical packages that are much better than python or MATLAB. By using R, one can really think in model level instead of implementation detail level. This is a huge advantage in developing statistical models. For example, many people are manually implementing neural networks in python; doing such work may not help to understand why neural networks work, but just following the recipe to duplicate others' work to check if it works. If we are working in R, we can easily focus on the math behind the model, instead of implementation details.
In many cases, people use them together. Building software is easy to do in python, and building models is better in R. If we want to deliver a model in production but not a paper, we may need both. If your company has a lot of software engineers, you may need more R. And if your company has a lot of research scientists, you may need more python. | How do R and Python complement each other in data science?
Python is a general programming language: therefore, it is good for doing many other tasks in addition to data analysis. For example, if we want to automate our model execution in production server, t |
4,483 | How do R and Python complement each other in data science? | Programmers of all stripes underestimate how much language choices are cultural. Web developers like Node.js. Scientists like Python. As a polyglot software engineer who can handle Javascript's fluidity and Java's rigidity all the same, I've realized there is not any intrinsic reason these languages are bad at each other's jobs -- just the enormous amount of packages, documentation, communities, books, etc. surrounding them.
(For intrinsic reasons one random language is better than some other language, see the forthcoming comments to this answer.)
My personal prediction is that Python is the way of the future because it can do everything R can - or rather, enough of what R can that dedicated programmers are working to fill in the gaps - and is a far better software engineering language. Software engineering is a discipline that deals with:
trusting your code's reliability enough to put it in production (so any machine learning model that serves users in real time)
ensuring your code can continue working as it undergoes modification and reuse (unit testing frameworks, for instance)
a focus on readability, for the benefit of others, and of yourself in as little as 6 months
a deep emphasis on code organization, for ease of versioning, backouts to previous working versions, and concurrent development by multiple parties
preferring tools and technologies with better documentation, and ideally with the property that they won't work at all unless you use them right (this was my biggest gripe with Matlab -- I google a question and I have to read through their rather terrible forums searching for an answer)
Plus frankly Python is easier to learn.
Scientists and statisticians will realize they are stakeholders to good software engineering practice, not an independent and unbothered profession. Just my opinion, but papers proving the brittleness of academic code will support this.
This answer is all my opinion - but you asked a very opinionated question, and since it's well-received so far I felt you deserved an unpretentious, reasonably informed (I hope!) opinion in response. There's a serious argument for Python over R across the board and I would be remiss to try to post nonpartisan answer when reality may itself be partisan. | How do R and Python complement each other in data science? | Programmers of all stripes underestimate how much language choices are cultural. Web developers like Node.js. Scientists like Python. As a polyglot software engineer who can handle Javascript's fluidi | How do R and Python complement each other in data science?
Programmers of all stripes underestimate how much language choices are cultural. Web developers like Node.js. Scientists like Python. As a polyglot software engineer who can handle Javascript's fluidity and Java's rigidity all the same, I've realized there is not any intrinsic reason these languages are bad at each other's jobs -- just the enormous amount of packages, documentation, communities, books, etc. surrounding them.
(For intrinsic reasons one random language is better than some other language, see the forthcoming comments to this answer.)
My personal prediction is that Python is the way of the future because it can do everything R can - or rather, enough of what R can that dedicated programmers are working to fill in the gaps - and is a far better software engineering language. Software engineering is a discipline that deals with:
trusting your code's reliability enough to put it in production (so any machine learning model that serves users in real time)
ensuring your code can continue working as it undergoes modification and reuse (unit testing frameworks, for instance)
a focus on readability, for the benefit of others, and of yourself in as little as 6 months
a deep emphasis on code organization, for ease of versioning, backouts to previous working versions, and concurrent development by multiple parties
preferring tools and technologies with better documentation, and ideally with the property that they won't work at all unless you use them right (this was my biggest gripe with Matlab -- I google a question and I have to read through their rather terrible forums searching for an answer)
Plus frankly Python is easier to learn.
Scientists and statisticians will realize they are stakeholders to good software engineering practice, not an independent and unbothered profession. Just my opinion, but papers proving the brittleness of academic code will support this.
This answer is all my opinion - but you asked a very opinionated question, and since it's well-received so far I felt you deserved an unpretentious, reasonably informed (I hope!) opinion in response. There's a serious argument for Python over R across the board and I would be remiss to try to post nonpartisan answer when reality may itself be partisan. | How do R and Python complement each other in data science?
Programmers of all stripes underestimate how much language choices are cultural. Web developers like Node.js. Scientists like Python. As a polyglot software engineer who can handle Javascript's fluidi |
4,484 | How do R and Python complement each other in data science? | I am an R user but I think Python is the future (I don't think it's the syntax)
Python is the future
The benefit of Python is as other people have already mentioned the much wider support, and, for programmers, more logical syntax.
Also the ability that you can translate findings from your analysis into a production system is much more straightforward.
Maybe it's because of Python being general purpose and R is not but even I raise my eyebrows when I see a productionized R pipeline.
But not only that, even for Advanced applications Python is quickly catching up (Scikit-learn, PyBrain, Tensorflow etc) and while R is still the lingua franca in academics on how to implement statistical methods Python has gotten huge in the professional sector due to the advent of advanced specialized libraries.
But R is not bad
Many people seem to like to jump on the "R has bad syntax" bandwagon.
I wish to propose the syntax of R to be a good thing!
Assignment functions, lazy evaluation, non standard evaluation and formulas are huge benefits when using R.
It just saves so much time not to have to worry about escaping variable names referenced in your summary, or about how to construct the logic of what is modeled against what, or about looking at names with names() and then assigning new names by adding <- c("A", "B", "C").
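A tiny sketch of that last idiom, with a throwaway data frame:
df <- data.frame(x = 1:3, y = 4:6, z = 7:9)
names(df)                      # inspect the current column names
names(df) <- c("A", "B", "C")  # replacement function: assign new names in place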
When people complain about R's weird syntax they look at it as a programming language, not as a data science tool.
As someone coming from R and loving dplyr I find pandas' syntax a bit clumsy in comparison.
Yes, it is a bit more flexible, but for most tasks you need a lot more keystrokes than in R to perform a simple command, keystrokes that are merely there to satisfy Python's parser, not to express your idea.
In summary
Of course it is wise to know both and while Python is getting there R's domain specific design choices just make it simpler for ad hoc work. The huge drawback of R is that it's difficult to leave its domain, which you basically have to do once you try to implement your findings in a sustainable way. | How do R and Python complement each other in data science? | I am an R user but I think Python is the future (I don't think it's the syntax)
Python is the future
The benefit of Python is as other people have already mentioned the much wider support, and, for pr | How do R and Python complement each other in data science?
I am an R user but I think Python is the future (I don't think it's the syntax)
Python is the future
The benefit of Python is as other people have already mentioned the much wider support, and, for programmers, more logical syntax.
Also the ability that you can translate findings from your analysis into a production system is much more straightforward.
Maybe it's because of Python being general purpose and R is not but even I raise my eyebrows when I see a productionized R pipeline.
But not only that, even for Advanced applications Python is quickly catching up (Scikit-learn, PyBrain, Tensorflow etc) and while R is still the lingua franca in academics on how to implement statistical methods Python has gotten huge in the professional sector due to the advent of advanced specialized libraries.
But R is not bad
Many people seem to like to jump on the "R has bad syntax" bandwagon.
I wish to propose the syntax of R to be a good thing!
Assignment functions, lazy evaluation, non standard evaluation and formulas are huge benefits when using R.
It just saves so much time not to have to worry about escaping variable names referenced in your summary or how to construct the logic of what is modeled against what or looking at names with names() and then assigning new names by adding <- c("A", "B", "C").
When people complain about R's weird syntax they look at it as a programming language, not as a data science tool.
As someone coming from R and loving dplyr I find pandas' syntax a bit clumsy in comparison.
Yes it is a bit more flexible, but for most tasks you take a lot more keystrokes to perform a simple command than in R that are merely there to satisfy Python's parser, not to express your idea.
In summary
Of course it is wise to know both and while Python is getting there R's domain specific design choices just make it simpler for ad hoc work. The huge drawback of R is that it's difficult to leave its domain, which you basically have to do once you try to implement your findings in a sustainable way. | How do R and Python complement each other in data science?
I am an R user but I think Python is the future (I don't think it's the syntax)
Python is the future
The benefit of Python is as other people have already mentioned the much wider support, and, for pr |
4,485 | How do R and Python complement each other in data science? | If you look at R as more of a statistical tool and not as a programming language, it is really great. It has far more flexibility than Stata or SPSS, but can do everything they can as well. I learned Stata during college, and R was easy enough to look at because I already had the perspective of the statistical tool and not a pure programming language experience that others might have.
I think frustration with R comes into play when programmers try to learn and understand it, but it is a great tool for those coming to R from a statistical background.
Python is great if you are already a great programmer; but for me as a beginner to programming and statistics just out of college, R was a much better choice. It is really just preference of which one fits your skillset and interests more. | How do R and Python complement each other in data science? | If you look at R as more of a statistical tool and not as a programming language, it is really great. It has far more flexibility than Stata or SPSS, but can do everything they can as well. I learned | How do R and Python complement each other in data science?
If you look at R as more of a statistical tool and not as a programming language, it is really great. It has far more flexibility than Stata or SPSS, but can do everything they can as well. I learned Stata during college, and R was easy enough to look at because I already had the perspective of the statistical tool and not a pure programming language experience that others might have.
I think frustration with R comes in to play when those who are programmers try to learn and understand R; but it is a great tool for those coming to R through a statistical background.
Python is great if you are already a great programmer; but for me as a beginner to programming and statistics just out of college, R was a much better choice. It is really just preference of which one fits your skillset and interests more. | How do R and Python complement each other in data science?
If you look at R as more of a statistical tool and not as a programming language, it is really great. It has far more flexibility than Stata or SPSS, but can do everything they can as well. I learned |
4,486 | How do R and Python complement each other in data science? | Adding to some of the prior answers:
In my experience, there's nothing easier than using R's dplyr + tidyr, ggplot and Rmarkdown in getting from raw data to presentable results. Python offers a lot, and I'm using it more and more, but I sure love the way Hadley's packages tie together. | How do R and Python complement each other in data science? | Adding to some of the prior answers:
In my experience, there's nothing easier than using R's dplyr + tidyr, ggplot and Rmarkdown in getting from raw data to presentable results. Python offers a lot, | How do R and Python complement each other in data science?
Adding to some of the prior answers:
In my experience, there's nothing easier than using R's dplyr + tidyr, ggplot and Rmarkdown in getting from raw data to presentable results. Python offers a lot, and I'm using it more and more, but I sure love the way Hadley's packages tie together. | How do R and Python complement each other in data science?
Adding to some of the prior answers:
In my experience, there's nothing easier than using R's dplyr + tidyr, ggplot and Rmarkdown in getting from raw data to presentable results. Python offers a lot, |
4,487 | How do R and Python complement each other in data science? | Python has a wide adoption outside science, so you benefit from all that. As "An Angry Guide to R" points out, R was developed by a community, which had to the first order zero software developers.
I would say that today R has two main strengths: some really mature highly specialized packages in some areas, and state-of-the-art reproducible research package knitr.
Python appears to be better suited for everything else.
This is an opinion of course, as almost everything in this thread. I am kind of amazed that this thread is still alive. | How do R and Python complement each other in data science? | Python has a wide adoption outside science, so you benefit from all that. As "An Angry Guide to R" points out, R was developed by a community, which had to the first order zero software developers.
I | How do R and Python complement each other in data science?
Python has a wide adoption outside science, so you benefit from all that. As "An Angry Guide to R" points out, R was developed by a community, which had to the first order zero software developers.
I would say that today R has two main strengths: some really mature highly specialized packages in some areas, and state-of-the-art reproducible research package knitr.
Python appears to be better suited for everything else.
This is an opinion of course, as almost everything in this thread. I am kind of amazed that this thread is still alive. | How do R and Python complement each other in data science?
Python has a wide adoption outside science, so you benefit from all that. As "An Angry Guide to R" points out, R was developed by a community, which had to the first order zero software developers.
I |
4,488 | How do R and Python complement each other in data science? | As described in other answers, Python is a good general-purpose programming language, whereas R has serious flaws as a programming language but has a richer set of data-analysis libraries. In recent years, Python has been catching up to R with the development of mature data-analysis libraries such as scikit-learn, whereas R is never going to be fixed. In practice, I use Python (actually, Hy) for almost everything and only turn to R for relatively esoteric methods such as quantile regression (the implementation of which in Python's statsmodels appears to be broken). There are several ways to call R from Python; PypeR is one that's simple enough that I've gotten it to work in such hostile environments as a Windows server.
Edit: I encourage anybody who would like to argue about this further to talk to the authors of the linked essay instead of commenting on this answer. | How do R and Python complement each other in data science? | As described in other answers, Python is a good general-purpose programming language, whereas R has serious flaws as a programming language but has a richer set of data-analysis libraries. In recent y | How do R and Python complement each other in data science?
As described in other answers, Python is a good general-purpose programming language, whereas R has serious flaws as a programming language but has a richer set of data-analysis libraries. In recent years, Python has been catching up to R with the development of mature data-analysis libraries such as scikit-learn, whereas R is never going to be fixed. In practice, I use Python (actually, Hy) for almost everything and only turn to R for relatively esoteric methods such as quantile regression (the implementation of which in Python's statsmodels appears to be broken). There are several ways to call R from Python; PypeR is one that's simple enough that I've gotten it to work in such hostile environments as a Windows server.
Edit: I encourage anybody who would like to argue about this further to talk to the authors of the linked essay instead of commenting on this answer. | How do R and Python complement each other in data science?
As described in other answers, Python is a good general-purpose programming language, whereas R has serious flaws as a programming language but has a richer set of data-analysis libraries. In recent y |
4,489 | How to determine best cutoff point and its confidence interval using ROC curve in R? | Thanks to all who answered this question. I agree that there could be no one correct answer and that the criteria depend greatly on the aims behind the particular diagnostic test.
Finally I found an R package, OptimalCutpoints, dedicated exactly to finding the cutoff point in this type of analysis. It implements several methods of determining the cutoff point.
"CB" (cost-benefit method);
"MCT" (minimizes Misclassification Cost Term);
"MinValueSp" (a minimum value set for Specificity);
"MinValueSe" (a minimum value set for Sensitivity);
"RangeSp" (a range of values set for Specificity);
"RangeSe" (a range of values set for Sensitivity);
"ValueSp" (a value set for Specificity);
"ValueSe" (a value set for Sensitivity);
"MinValueSpSe" (a minimum value set for Specificity and Sensitivity);
"MaxSp" (maximizes Specificity);
"MaxSe" (maximizes Sensitivity);
"MaxSpSe" (maximizes Sensitivity and Specificity simultaneously);
"Max-SumSpSe" (maximizes the sum of Sensitivity and Specificity);
"MaxProdSpSe" (maximizes the product of Sensitivity and Specificity);
"ROC01" (minimizes distance between ROC plot and point (0,1));
"SpEqualSe" (Sensitivity = Specificity);
"Youden" (Youden Index);
"MaxEfficiency" (maximizes Efficiency or Accuracy);
"Minimax" (minimizes the most frequent error);
"AUC" (maximizes concordance which is a function of AUC);
"MaxDOR" (maximizes Diagnostic Odds Ratio);
"MaxKappa" (maximizes Kappa Index);
"MaxAccuracyArea" (maximizes Accuracy Area);
"MinErrorRate" (minimizes Error Rate);
"MinValueNPV" (a minimum value set for Negative Predictive Value);
"MinValuePPV" (a minimum value set for Positive Predictive Value);
"MinValueNPVPPV" (a minimum value set for Predictive Values);
"PROC01" (minimizes distance between PROC plot and point (0,1));
"NPVEqualPPV" (Negative Predictive Value = Positive Predictive Value);
"ValueDLR.Negative" (a value set for Negative Diagnostic Likelihood Ratio);
"ValueDLR.Positive" (a value set for Positive Diagnostic Likelihood Ratio);
"MinPvalue" (minimizes p-value associated with the statistical Chi-squared test which measures the association between the marker and the binary result obtained on
using the cutpoint);
"ObservedPrev" (The closest value to observed prevalence);
"MeanPrev" (The closest value to the mean of the diagnostic test values);
"PrevalenceMatching" (The value for which predicted prevalence is practically equal to observed prevalence).
So now the task is narrowed to selecting the method that is the best match for each situation.
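As a minimal sketch of what a call looks like once a method has been chosen (Youden here), assuming a data frame mydata with a numeric marker column and a 0/1 disease column; the column names are made up and the argument names should be double-checked against ?optimal.cutpoints:

library(OptimalCutpoints)
oc <- optimal.cutpoints(X = "marker", status = "disease", tag.healthy = 0,
                        methods = "Youden", data = mydata,
                        ci.fit = TRUE, conf.level = 0.95)  # CI options as I recall them; see the docs
summary(oc)  # optimal cutoff plus sensitivity/specificity with confidence intervals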
There are many other configuration options described in the package documentation, including several methods of determining confidence intervals and a detailed description of each of the methods. | How to determine best cutoff point and its confidence interval using ROC curve in R? | Thanks to all who answered this question. I agree that there could be no one correct answer and criteria greatly depend on the aims that stand behind a given diagnostic test.
Finally I had found | How to determine best cutoff point and its confidence interval using ROC curve in R?
Thanks to all who answered this question. I agree that there could be no one correct answer and criteria greatly depend on the aims that stand behind a given diagnostic test.
Finally I found an R package, OptimalCutpoints, dedicated exactly to finding cutoff points in this type of analysis. It actually offers several methods of determining the cutoff point:
"CB" (cost-benefit method);
"MCT" (minimizes Misclassification Cost Term);
"MinValueSp" (a minimum value set for Specificity);
"MinValueSe" (a minimum value set for Sensitivity);
"RangeSp" (a range of values set for Specificity);
"RangeSe" (a range of values set for Sensitivity);
"ValueSp" (a value set for Specificity);
"ValueSe" (a value set for Sensitivity);
"MinValueSpSe" (a minimum value set for Specificity and Sensitivity);
"MaxSp" (maximizes Specificity);
"MaxSe" (maximizes Sensitivity);
"MaxSpSe" (maximizes Sensitivity and Specificity simultaneously);
"Max-SumSpSe" (maximizes the sum of Sensitivity and Specificity);
"MaxProdSpSe" (maximizes the product of Sensitivity and Specificity);
"ROC01" (minimizes distance between ROC plot and point (0,1));
"SpEqualSe" (Sensitivity = Specificity);
"Youden" (Youden Index);
"MaxEfficiency" (maximizes Efficiency or Accuracy);
"Minimax" (minimizes the most frequent error);
"AUC" (maximizes concordance which is a function of AUC);
"MaxDOR" (maximizes Diagnostic Odds Ratio);
"MaxKappa" (maximizes Kappa Index);
"MaxAccuracyArea" (maximizes Accuracy Area);
"MinErrorRate" (minimizes Error Rate);
"MinValueNPV" (a minimum value set for Negative Predictive Value);
"MinValuePPV" (a minimum value set for Positive Predictive Value);
"MinValueNPVPPV" (a minimum value set for Predictive Values);
"PROC01" (minimizes distance between PROC plot and point (0,1));
"NPVEqualPPV" (Negative Predictive Value = Positive Predictive Value);
"ValueDLR.Negative" (a value set for Negative Diagnostic Likelihood Ratio);
"ValueDLR.Positive" (a value set for Positive Diagnostic Likelihood Ratio);
"MinPvalue" (minimizes p-value associated with the statistical Chi-squared test which measures the association between the marker and the binary result obtained on
using the cutpoint);
"ObservedPrev" (The closest value to observed prevalence);
"MeanPrev" (The closest value to the mean of the diagnostic test values);
"PrevalenceMatching" (The value for which predicted prevalence is practically equal to observed prevalence).
So now the task is narrowed to selecting the method that is the best match for each situation.
There are many other configuration options described in the package documentation, including several methods of determining confidence intervals and a detailed description of each of the methods. | How to determine best cutoff point and its confidence interval using ROC curve in R?
Thanks to all who answered this question. I agree that there could be no one correct answer and criteria greatly depend on the aims that stand behind a given diagnostic test.
Finally I had found |
4,490 | How to determine best cutoff point and its confidence interval using ROC curve in R? | In my opinion, there are multiple cut-off options. You might weight sensitivity and specificity differently (for example, maybe for you it is more important to have a highly sensitive test even though this means having a less specific test. Or vice versa).
If sensitivity and specificity have the same importance to you, one way of calculating the cut-off is choosing that value that minimizes the Euclidean distance between your ROC curve and the upper left corner of your graph.
Another way is using the value that maximizes (sensitivity + specificity - 1) as a cut-off.
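Both rules are easy to compute directly if you already have vectors of sensitivities and specificities over a grid of candidate cutoffs; in the sketch below sens, spec and cuts are assumed to be equal-length vectors you computed from your data:

d <- sqrt((1 - sens)^2 + (1 - spec)^2)  # distance of each ROC point to the ideal (0, 1) corner
cut_closest <- cuts[which.min(d)]       # closest-to-corner cutoff
j <- sens + spec - 1                    # Youden's J at each cutoff
cut_youden <- cuts[which.max(j)]        # cutoff maximizing sensitivity + specificity - 1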
Unfortunately, I do not have references for these two methods, as I have learned them from professors or other statisticians. I have only heard the latter method referred to as 'Youden's index' [1].
[1] https://en.wikipedia.org/wiki/Youden%27s_J_statistic | How to determine best cutoff point and its confidence interval using ROC curve in R? | In my opinion, there are multiple cut-off options. You might weight sensitivity and specificity differently (for example, maybe for you it is more important to have a high sensitive test even though t | How to determine best cutoff point and its confidence interval using ROC curve in R?
In my opinion, there are multiple cut-off options. You might weight sensitivity and specificity differently (for example, maybe for you it is more important to have a highly sensitive test even though this means having a less specific test. Or vice versa).
If sensitivity and specificity have the same importance to you, one way of calculating the cut-off is choosing that value that minimizes the Euclidean distance between your ROC curve and the upper left corner of your graph.
Another way is using the value that maximizes (sensitivity + specificity - 1) as a cut-off.
Unfortunately, I do not have references for these two methods, as I have learned them from professors or other statisticians. I have only heard the latter method referred to as 'Youden's index' [1].
[1] https://en.wikipedia.org/wiki/Youden%27s_J_statistic | How to determine best cutoff point and its confidence interval using ROC curve in R?
In my opinion, there are multiple cut-off options. You might weight sensitivity and specificity differently (for example, maybe for you it is more important to have a high sensitive test even though t |
4,491 | How to determine best cutoff point and its confidence interval using ROC curve in R? | Resist the temptation to find a cutoff. Unless you have a pre-specified utility/loss/cost function, a cutoff flies in the face of optimal decision-making. And an ROC curve is irrelevant to this issue. | How to determine best cutoff point and its confidence interval using ROC curve in R? | Resist the temptation to find a cutoff. Unless you have a pre-specified utility/loss/cost function, a cutoff flies in the face of optimal decision-making. And an ROC curve is irrelevant to this issu | How to determine best cutoff point and its confidence interval using ROC curve in R?
Resist the temptation to find a cutoff. Unless you have a pre-specified utility/loss/cost function, a cutoff flies in the face of optimal decision-making. And an ROC curve is irrelevant to this issue. | How to determine best cutoff point and its confidence interval using ROC curve in R?
Resist the temptation to find a cutoff. Unless you have a pre-specified utility/loss/cost function, a cutoff flies in the face of optimal decision-making. And an ROC curve is irrelevant to this issu |
4,492 | How to determine best cutoff point and its confidence interval using ROC curve in R? | Mathematically speaking, you need another condition to solve for the cut-off.
You may translate @Andrea's point to: "use external knowledge about the underlying problem".
Example conditions:
for this application, we need sensitivity >= x, and/or specificity >= y.
a false negative is 10 x as bad as a false positive. (That would give you a modification of the closest point to the ideal corner.) | How to determine best cutoff point and its confidence interval using ROC curve in R? | Mathematically speaking, you need another condition to solve for the cut-off.
You may translate @Andrea's point to: "use external knowledge about the underlying problem".
Example conditions:
for thi | How to determine best cutoff point and its confidence interval using ROC curve in R?
Mathematically speaking, you need another condition to solve for the cut-off.
You may translate @Andrea's point to: "use external knowledge about the underlying problem".
Example conditions:
for this application, we need sensitivity >= x, and/or specificity >= y.
a false negative is 10 x as bad as a false positive. (That would give you a modification of the closest point to the ideal corner.) | How to determine best cutoff point and its confidence interval using ROC curve in R?
Mathematically speaking, you need another condition to solve for the cut-off.
You may translate @Andrea's point to: "use external knowledge about the underlying problem".
Example conditions:
for thi |
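To make the second example condition above concrete, here is a rough sketch of picking the cutoff that minimizes an expected misclassification cost when a false negative is weighted 10 times as heavily as a false positive; sens, spec and cuts are assumed to be vectors over candidate cutoffs and prev is the prevalence of the positive class:

cost <- 10 * prev * (1 - sens) + 1 * (1 - prev) * (1 - spec)  # expected cost per subject
cuts[which.min(cost)]                                         # cost-minimizing cutoff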
4,493 | How to determine best cutoff point and its confidence interval using ROC curve in R? | Visualize accuracy versus cutoff. You can read more details at ROCR documentation and very nice presentation from the same. | How to determine best cutoff point and its confidence interval using ROC curve in R? | Visualize accuracy versus cutoff. You can read more details at ROCR documentation and very nice presentation from the same. | How to determine best cutoff point and its confidence interval using ROC curve in R?
Visualize accuracy versus cutoff. You can read more details at ROCR documentation and very nice presentation from the same. | How to determine best cutoff point and its confidence interval using ROC curve in R?
Visualize accuracy versus cutoff. You can read more details at ROCR documentation and very nice presentation from the same. |
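A minimal ROCR sketch of that plot, assuming a numeric vector scores of model outputs and a 0/1 vector labels (both names are made up):

library(ROCR)
pred <- prediction(scores, labels)
perf <- performance(pred, measure = "acc")  # accuracy as a function of the cutoff
plot(perf)                                  # x-axis: cutoff, y-axis: accuracy
perf@x.values[[1]][which.max(perf@y.values[[1]])]  # cutoff with the highest accuracy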
4,494 | How to determine best cutoff point and its confidence interval using ROC curve in R? | What's more important - there's very few datapoints behind this curve. When you do decide how you're going to make the sensitivity/specificity tradeoff I'd strongly encourage you to bootstrap the curve and the resulting cutoff number. You may find that there's a lot of uncertainty in your estimated best cutoff. | How to determine best cutoff point and its confidence interval using ROC curve in R? | What's more important - there's very few datapoints behind this curve. When you do decide how you're going to make the sensitivity/specificity tradeoff I'd strongly encourage you to bootstrap the cur | How to determine best cutoff point and its confidence interval using ROC curve in R?
What's more important - there's very few datapoints behind this curve. When you do decide how you're going to make the sensitivity/specificity tradeoff I'd strongly encourage you to bootstrap the curve and the resulting cutoff number. You may find that there's a lot of uncertainty in your estimated best cutoff. | How to determine best cutoff point and its confidence interval using ROC curve in R?
What's more important - there's very few datapoints behind this curve. When you do decide how you're going to make the sensitivity/specificity tradeoff I'd strongly encourage you to bootstrap the cur |
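A rough sketch of such a bootstrap, assuming a data frame d with a numeric marker column and a 0/1 outcome column (hypothetical names) and using pROC's Youden-based "best" threshold:

library(pROC)
best_cut <- replicate(2000, {
  b <- d[sample(nrow(d), replace = TRUE), ]            # resample rows with replacement
  coords(roc(b$outcome, b$marker, quiet = TRUE),
         "best", best.method = "youden", ret = "threshold")
})
quantile(unlist(best_cut), c(0.025, 0.5, 0.975))       # spread of the estimated "best" cutoff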
4,495 | Interpretation of R's output for binomial regression | What you have done is logistic regression. This can be done in basically any statistical software, and the output will be similar (at least in content, albeit the presentation may differ). There is a guide to logistic regression with R on UCLA's excellent statistics help website. If you are unfamiliar with this, my answer here: difference between logit and probit models, may help you understand what LR is about (although it is written in a different context).
You seem to have two models presented, I will primarily focus on the top one. In addition, there seems to have been an error in copying and pasting the model or output, so I will swap leaves.presence with Area in the output to make it consistent with the model. Here is the model I'm referring to (notice that I added (link="logit"), which is implied by family=binomial; see ?glm and ?family):
glm(formula = leaves.presence ~ Area, family = binomial(link="logit"), data = n)
Let's walk through this output (notice that I changed the name of the variable in the second line under Coefficients):
Deviance Residuals:
Min 1Q Median 3Q Max
-1.213 -1.044 -1.023 1.312 1.344
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.3877697 0.0282178 -13.742 < 2e-16 ***
Area 0.0008166 0.0002472 3.303 0.000956 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 16662 on 12237 degrees of freedom
Residual deviance: 16651 on 12236 degrees of freedom
(314 observations deleted due to missingness)
AIC: 16655
Number of Fisher Scoring iterations: 4
Just as there are residuals in linear (OLS) regression, there can be residuals in logistic regression and other generalized linear models. They are more complicated when the response variable is not continuous, however. GLiMs can have five different types of residuals, but what comes listed standard are the deviance residuals. (Deviance and deviance residuals are more advanced, so I'll be brief here; if this discussion is somewhat hard to follow, I wouldn't worry too much, you can skip it):
Deviance Residuals:
Min 1Q Median 3Q Max
-1.213 -1.044 -1.023 1.312 1.344
For every data point used in your model, the deviance associated with that point is calculated. Having done this for each point, you have a set of such residuals, and the above output is simply a non-parametric description of their distribution.
Next we see the information about the covariates, which is what people typically are primarily interested in:
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.3877697 0.0282178 -13.742 < 2e-16 ***
Area 0.0008166 0.0002472 3.303 0.000956 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
For a simple logistic regression model like this one, there is only one covariate (Area here) and the intercept (also sometimes called the 'constant'). If you had a multiple logistic regression, there would be additional covariates listed below these, but the interpretation of the output would be the same. Under Estimate in the second row is the coefficient associated with the variable listed to the left. It is the estimated amount by which the log odds of leaves.presence would increase if Area were one unit higher. The log odds of leaves.presence when Area is $0$ is just above in the first row. (If you are not sufficiently familiar with log odds, it may help you to read my answer here: interpretation of simple predictions to odds ratios in logistic regression.) In the next column, we see the standard error associated with these estimates. That is, they are an estimate of how much, on average, these estimates would bounce around if the study were re-run identically, but with new data, over and over. (If you are not very familiar with the idea of a standard error, it may help you to read my answer here: how to interpret coefficient standard errors in linear regression.) If we were to divide the estimate by the standard error, we would get a quotient which is assumed to be normally distributed with large enough samples. This value is listed in under z value. Below Pr(>|z|) are listed the two-tailed p-values that correspond to those z-values in a standard normal distribution. Lastly, there are the traditional significance stars (and note the key below the coefficients table).
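Because these estimates are on the log-odds scale, it is often convenient to exponentiate them. Assuming the fitted model object is stored under the (hypothetical) name fit, something like:

fit <- glm(leaves.presence ~ Area, family = binomial(link = "logit"), data = n)
exp(coef(fit))     # odds ratios: multiplicative change in the odds of leaves.presence per unit of Area
exp(confint(fit))  # profile-likelihood confidence intervals, also on the odds-ratio scale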
The Dispersion line is printed by default with GLiMs, but doesn't add much information here (it is more important with count models, e.g.). We can ignore this.
Lastly, we get information about the model and its goodness of fit:
Null deviance: 16662 on 12237 degrees of freedom
Residual deviance: 16651 on 12236 degrees of freedom
(314 observations deleted due to missingness)
AIC: 16655
Number of Fisher Scoring iterations: 4
The line about missingness is often, um, missing. It shows up here because you had 314 observations for which either leaves.presence, Area, or both were missing. Those partial observations were not used in fitting the model.
The Residual deviance is a measure of the lack of fit of your model taken as a whole, whereas the Null deviance is such a measure for a reduced model that only includes the intercept. Notice that the degrees of freedom associated with these two differs by only one. Since your model has only one covariate, only one additional parameter has been estimated (the Estimate for Area), and thus only one additional degree of freedom has been consumed. These two values can be used in conducting a test of the model as a whole, which would be analogous to the global $F$-test that comes with a multiple linear regression model. Since you have only one covariate, such a test would be uninteresting in this case.
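Even so, for reference, that test is just a likelihood-ratio chi-squared test on the drop in deviance; with the numbers above (and fit again being a hypothetical name for the fitted object):

pchisq(16662 - 16651, df = 12237 - 12236, lower.tail = FALSE)  # roughly 0.001, in line with the Wald test above
anova(fit, test = "Chisq")                                     # the same comparison computed from the model object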
The AIC is another measure of goodness of fit that takes into account the ability of the model to fit the data. This is very useful when comparing two models where one may fit better but perhaps only by virtue of being more flexible and thus better able to fit any data. Since you have only one model, this is uninformative.
The reference to Fisher scoring iterations has to do with how the model was estimated. A linear model can be fit by solving closed form equations. Unfortunately, that cannot be done with most GLiMs including logistic regression. Instead, an iterative approach (the Newton-Raphson algorithm by default) is used. Loosely, the model is fit based on a guess about what the estimates might be. The algorithm then looks around to see if the fit would be improved by using different estimates instead. If so, it moves in that direction (say, using a higher value for the estimate) and then fits the model again. The algorithm stops when it doesn't perceive that moving again would yield much additional improvement. This line tells you how many iterations there were before the process stopped and output the results.
Regarding the second model and output you list, this is just a different way of displaying results. Specifically, these
Coefficients:
(Intercept) Areal
-0.3877697 0.0008166
are the same kind of estimates discussed above (albeit from a different model and presented with less supplementary information). | Interpretation of R's output for binomial regression | What you have done is logistic regression. This can be done in basically any statistical software, and the output will be similar (at least in content, albeit the presentation may differ). There is | Interpretation of R's output for binomial regression
What you have done is logistic regression. This can be done in basically any statistical software, and the output will be similar (at least in content, albeit the presentation may differ). There is a guide to logistic regression with R on UCLA's excellent statistics help website. If you are unfamiliar with this, my answer here: difference between logit and probit models, may help you understand what LR is about (although it is written in a different context).
You seem to have two models presented, I will primarily focus on the top one. In addition, there seems to have been an error in copying and pasting the model or output, so I will swap leaves.presence with Area in the output to make it consistent with the model. Here is the model I'm referring to (notice that I added (link="logit"), which is implied by family=binomial; see ?glm and ?family):
glm(formula = leaves.presence ~ Area, family = binomial(link="logit"), data = n)
Let's walk through this output (notice that I changed the name of the variable in the second line under Coefficients):
Deviance Residuals:
Min 1Q Median 3Q Max
-1.213 -1.044 -1.023 1.312 1.344
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.3877697 0.0282178 -13.742 < 2e-16 ***
Area 0.0008166 0.0002472 3.303 0.000956 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 16662 on 12237 degrees of freedom
Residual deviance: 16651 on 12236 degrees of freedom
(314 observations deleted due to missingness)
AIC: 16655
Number of Fisher Scoring iterations: 4
Just as there are residuals in linear (OLS) regression, there can be residuals in logistic regression and other generalized linear models. They are more complicated when the response variable is not continuous, however. GLiMs can have five different types of residuals, but what comes listed standard are the deviance residuals. (Deviance and deviance residuals are more advanced, so I'll be brief here; if this discussion is somewhat hard to follow, I wouldn't worry too much, you can skip it):
Deviance Residuals:
Min 1Q Median 3Q Max
-1.213 -1.044 -1.023 1.312 1.344
For every data point used in your model, the deviance associated with that point is calculated. Having done this for each point, you have a set of such residuals, and the above output is simply a non-parametric description of their distribution.
Next we see the information about the covariates, which is what people typically are primarily interested in:
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.3877697 0.0282178 -13.742 < 2e-16 ***
Area 0.0008166 0.0002472 3.303 0.000956 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
For a simple logistic regression model like this one, there is only one covariate (Area here) and the intercept (also sometimes called the 'constant'). If you had a multiple logistic regression, there would be additional covariates listed below these, but the interpretation of the output would be the same. Under Estimate in the second row is the coefficient associated with the variable listed to the left. It is the estimated amount by which the log odds of leaves.presence would increase if Area were one unit higher. The log odds of leaves.presence when Area is $0$ is just above in the first row. (If you are not sufficiently familiar with log odds, it may help you to read my answer here: interpretation of simple predictions to odds ratios in logistic regression.) In the next column, we see the standard error associated with these estimates. That is, they are an estimate of how much, on average, these estimates would bounce around if the study were re-run identically, but with new data, over and over. (If you are not very familiar with the idea of a standard error, it may help you to read my answer here: how to interpret coefficient standard errors in linear regression.) If we were to divide the estimate by the standard error, we would get a quotient which is assumed to be normally distributed with large enough samples. This value is listed in under z value. Below Pr(>|z|) are listed the two-tailed p-values that correspond to those z-values in a standard normal distribution. Lastly, there are the traditional significance stars (and note the key below the coefficients table).
The Dispersion line is printed by default with GLiMs, but doesn't add much information here (it is more important with count models, e.g.). We can ignore this.
Lastly, we get information about the model and its goodness of fit:
Null deviance: 16662 on 12237 degrees of freedom
Residual deviance: 16651 on 12236 degrees of freedom
(314 observations deleted due to missingness)
AIC: 16655
Number of Fisher Scoring iterations: 4
The line about missingness is often, um, missing. It shows up here because you had 314 observations for which either leaves.presence, Area, or both were missing. Those partial observations were not used in fitting the model.
The Residual deviance is a measure of the lack of fit of your model taken as a whole, whereas the Null deviance is such a measure for a reduced model that only includes the intercept. Notice that the degrees of freedom associated with these two differs by only one. Since your model has only one covariate, only one additional parameter has been estimated (the Estimate for Area), and thus only one additional degree of freedom has been consumed. These two values can be used in conducting a test of the model as a whole, which would be analogous to the global $F$-test that comes with a multiple linear regression model. Since you have only one covariate, such a test would be uninteresting in this case.
The AIC is another measure of goodness of fit that takes into account the ability of the model to fit the data. This is very useful when comparing two models where one may fit better but perhaps only by virtue of being more flexible and thus better able to fit any data. Since you have only one model, this is uninformative.
The reference to Fisher scoring iterations has to do with how the model was estimated. A linear model can be fit by solving closed form equations. Unfortunately, that cannot be done with most GLiMs including logistic regression. Instead, an iterative approach (the Newton-Raphson algorithm by default) is used. Loosely, the model is fit based on a guess about what the estimates might be. The algorithm then looks around to see if the fit would be improved by using different estimates instead. If so, it moves in that direction (say, using a higher value for the estimate) and then fits the model again. The algorithm stops when it doesn't perceive that moving again would yield much additional improvement. This line tells you how many iterations there were before the process stopped and output the results.
Regarding the second model and output you list, this is just a different way of displaying results. Specifically, these
Coefficients:
(Intercept) Areal
-0.3877697 0.0008166
are the same kind of estimates discussed above (albeit from a different model and presented with less supplementary information). | Interpretation of R's output for binomial regression
What you have done is logistic regression. This can be done in basically any statistical software, and the output will be similar (at least in content, albeit the presentation may differ). There is |
4,496 | Interpretation of R's output for binomial regression | Call: This is just the call that you made to the function. It will be the exact same code you typed into R. This can be helpful for seeing if you made any typos.
(Deviance) Residuals: You can pretty much ignore these for logistic regression. For Poisson or linear regression, you want these to be more-or-less normally distributed (which is the same thing the top two diagnostic plots are checking). You can check this by seeing if the absolute value of 1Q and 3Q are close(ish) to each other, and if the median is close to 0. The mean is not shown because it's always 0. If any of these are super off then you probably have some weird skew in your data. (This will also show up in your diagnostic plots!)
Coefficients: This is the meat of the output.
Intercept: For Poisson and linear regression, this is the predicted output when all our inputs are 0. For logistic regression, this value will be further away from 0 the bigger the difference between the number of observations in each class. The standard error represents how uncertain we are about this (lower is better). In this case, because our intercept is far from 0 and our standard error is much smaller than the intercept, we can be pretty sure that one of our classes (failed or didn't fail) has a lot more observations in it. (In this case it's "didn't fail", thankfully!)
Various inputs (each input will be on a different line): This estimate represents how much we think the output will change each time we increase this input by 1. The bigger the estimate, the bigger the effect of this input variable on the output. The standard error is how certain we are about it. Usually, we can be pretty sure an input is informative if the standard error is 1/10 of the estimate. So in this case we're pretty sure the intercept is important.
Signif. Codes: This is a key to the significance of each input and the intercept. These are only correct if you only ever fit one model to your data. (In other words, they’re great for experimental data if you know from the start which variables you’re interested in, and not as informative for data analysis or variable selection.)
Wait, why can't we use statistical significance? You can, I just wouldn't generally recommend it. In data science you'll often be fitting multiple models using the same dataset to try and pick the best model. If you ever run more than one test for statistical significance on the same dataset, you need to adjust your p-value to make up for it. You can think about it this way: if you decide that you'll accept results that are below p = 0.05, you're basically saying that you're ok with being wrong one in twenty times. If you then do five tests, however, and for each one there's a 1/20 chance that you'll be wrong, you now have a 1/4 chance of having been wrong on at least one of those tests... but you don't know which one. You can correct for it (by dividing the p-value you'll accept as significant by the number of tests you'll perform) but in practice I find it's generally easier to avoid using p-values altogether.
(Dispersion parameter for binomial family taken to be 1): You'll only see this for Poisson and binomial (logistic) regression. It's just letting you know that there has been an additional scaling parameter added to help fit the model. You can ignore it.
Null deviance: The null deviance tells us how well we can predict our output only using the intercept. Smaller is better.
Residual deviance: The residual deviance tells us how well we can predict our output using the intercept and our inputs. Smaller is better. The bigger the difference between the null deviance and residual deviance is, the more helpful our input variables were for predicting the output variable.
AIC: The AIC is the "Akaike information criterion" and it's an estimate of how well your model is describing the patterns in your data. It's mainly used for comparing models trained on the same dataset. If you need to pick between models, the model with the lower AIC is doing a better job describing the variance in the data.
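A hypothetical illustration of such a comparison (the formulas and the data frame are made up):

m1 <- glm(failed ~ temp, family = binomial, data = dat)
m2 <- glm(failed ~ temp + pressure, family = binomial, data = dat)
AIC(m1, m2)  # the model with the lower AIC trades off fit and complexity better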
Number of Fisher Scoring iterations: This is just a measure of how long it took to fit your model. You can safely ignore it.
I suggest this tutorial to learn more.
https://www.kaggle.com/rtatman/regression-challenge-day-5 | Interpretation of R's output for binomial regression | Call: This is just the call that you made to the function. It will be the exact same code you typed into R. This can be helpful for seeing if you made any typos.
(Deviance) Residuals: You can pretty m | Interpretation of R's output for binomial regression
Call: This is just the call that you made to the function. It will be the exact same code you typed into R. This can be helpful for seeing if you made any typos.
(Deviance) Residuals: You can pretty much ignore these for logistic regression. For Poisson or linear regression, you want these to be more-or-less normally distributed (which is the same thing the top two diagnostic plots are checking). You can check this by seeing if the absolute value of 1Q and 3Q are close(ish) to each other, and if the median is close to 0. The mean is not shown because it's always 0. If any of these are super off then you probably have some weird skew in your data. (This will also show up in your diagnostic plots!)
Coefficients: This is the meat of the output.
Intercept: For Poisson and linear regression, this is the predicted output when all our inputs are 0. For logistic regression, this value will be further away from 0 the bigger the difference between the number of observation in each class.. The standard error represents how uncertain we are about this (lower is better). In this case, because our intercept is far from 0 and our standard error is much smaller than the intercept, we can be pretty sure that one of our classes (failed or didn't fail) has a lot of more observations in it. (In this case it's "didn't fail", thankfully!)
Various inputs (each input will be on a different line): This estimate represents how much we think the output will change each time we increase this input by 1. The bigger the estimate, the bigger the effect of this input variable on the output. The standard error is how certain about it we are. Usually, we can be pretty sure an input is informative is the standard error is 1/10 of the estimate. So in this case we're pretty sure the intercept is important.
Signif. Codes: This is a key to the significance of each :input and the intercept. These are only correct if you only ever fit one model to your data. (In other words, they’re great for experimental data if you from the start which variables you’re interested in and not as informative for data analysis or variable selection.)
Wait, why can't we use statistical significance? You can, I just wouldn't generally recommend it. In data science you'll often be fitting multiple models using the same dataset to try and pick the best model. If you ever run more than one test for statistical significance on the same dataset, you need to adust your p-value to make up for it. You can think about it this way: if you decide that you'll accept results that are below p = 0.05, you're basically saying that you're ok with being wrong one in twenty times. If you then do five tests, however, and for each one there's a 1/20 chance that you'll be wrong, you now have a 1/4 chance of having been wrong on at least one of those tests... but you don't know which one. You can correct for it (by multiplying the p-value you'll accept as significant by the number of tests you'll preform) but in practice I find it's generally easier to avoid using p-values altogether.
(Dispersion parameter for binomial family taken to be 1): You'll only see this for Poisson and binomial (logistic) regression. It's just letting you know that there has been an additional scaling parameter added to help fit the model. You can ignore it.
Null deviance: The null deviance tells us how well we can predict our output only using the intercept. Smaller is better.
Residual deviance: The residual deviance tells us how well we can predict our output using the intercept and our inputs. Smaller is better. The bigger the difference between the null deviance and residual deviance is, the more helpful our input variables were for predicting the output variable.
AIC: The AIC is the "Akaike information criterion" and it's an estimate of how well your model is describing the patterns in your data. It's mainly used for comparing models trained on the same dataset. If you need to pick between models, the model with the lower AIC is doing a better job describing the variance in the data.
Number of Fisher Scoring iterations: This is just a measure of how long it took to fit you model. You can safely ignore it.
I suggest this toturial to learn more.
https://www.kaggle.com/rtatman/regression-challenge-day-5 | Interpretation of R's output for binomial regression
Call: This is just the call that you made to the function. It will be the exact same code you typed into R. This can be helpful for seeing if you made any typos.
(Deviance) Residuals: You can pretty m |
4,497 | What are the main theorems in Machine (Deep) Learning? | As I wrote in the comments, this question seems too broad to me, but I'll make an attempt to an answer. In order to set some boundaries, I will start with a little math which underlies most of ML, and then concentrate on recent results for DL.
The bias-variance tradeoff is referred to in countless books, courses, MOOCs, blogs, tweets, etc. on ML, so we can't start without mentioning it:
$$\mathbb{E}[(Y-\hat{f}(X))^2|X=x_0]=\sigma_{\epsilon}^2+\left(\mathbb{E}\hat{f}(x_0)-f(x_0)\right)^2+\mathbb{E}\left[\left(\hat{f}(x_0)-\mathbb{E}\hat{f}(x_0)\right)^2\right]=\text{Irreducible error + Bias}^2 \text{ + Variance}$$
Proof here: https://web.stanford.edu/~hastie/ElemStatLearn/
The Gauss-Markov Theorem (yes, linear regression will remain an important part of Machine Learning, no matter what: deal with it) clarifies that, when the linear model is true and some assumptions on the error term are valid, OLS has the minimum mean squared error (which in the above expression is just $\text{Bias}^2 \text{ + Variance}$) only among the unbiased linear estimators of the linear model. Thus there could well be linear estimators with bias (or nonlinear estimators) which have a better mean square error, and thus a better expected prediction error, than OLS. And this paves the way to all the regularization arsenal (ridge regression, LASSO, weight decay, etc.) which is a workhorse of ML. A proof is given here (and in countless other books):
https://www.amazon.com/Linear-Statistical-Models-James-Stapleton/dp/0470231467
Probably more relevant to the explosion of regularization approaches, as noted by Carlos Cinelli in the comments, and definitely more fun to learn about, is the James-Stein theorem. Consider $n$ independent, same variance but not same mean Gaussian random variables:
$$X_i\mid\theta_i\sim \mathcal{N}(\theta_i,\sigma^2), \quad i=1,\dots,n$$
in other words, we have an $n-$components Gaussian random vector $\mathbf{X}\sim \mathcal{N}(\boldsymbol{\theta},\sigma^2I)$. We have one sample $\mathbf{x}$ from $\mathbf{X}$ and we want to estimate $\boldsymbol{\theta}$. The MLE (and also UMVUE) estimator is obviously $\hat{\boldsymbol{\theta}}_{MLE}=\mathbf{x}$. Consider the James-Stein estimator
$$\hat{\boldsymbol{\theta}}_{JS}= \left(1-\frac{(n-2)\sigma^2}{||\mathbf{x}||^2}\right)\mathbf{x} $$
Clearly, if $(n-2)\sigma^2\leq||\mathbf{x}||^2$, $\hat{\boldsymbol{\theta}}_{JS}$ shrinks the MLE estimate towards zero. The James-Stein theorem states that for $n\geq4$, $\hat{\boldsymbol{\theta}}_{JS}$ strictly dominates $\hat{\boldsymbol{\theta}}_{MLE}$, i.e., it has lower MSE $\forall \ \boldsymbol{\theta}$. Perhaps surprisingly, even if we shrink towards any other constant $\boldsymbol{c}\neq \mathbf{0}$, $\hat{\boldsymbol{\theta}}_{JS}$ still dominates $\hat{\boldsymbol{\theta}}_{MLE}$. Since the $X_i$ are independent, it may seem weird that, when trying to estimate the height of three unrelated persons, including a sample from the number of apples produced in Spain, may improve our estimate on average. The key point here is "on average": the mean square error for the simultaneous estimation of all the components of the parameter vector is smaller, but the square error for one or more components may well be larger, and indeed it often is, when you have "extreme" observations.
Finding out that MLE, which was indeed the "optimal" estimator for the univariate estimation case, was dethroned for multivariate estimation, was quite a shock at the time, and led to a great interest in shrinkage, better known as regularization in ML parlance. One could note some similarities with mixed models and the concept of "borrowing strength": there is indeed some connection, as discussed here
Unified view on shrinkage: what is the relation (if any) between Stein's paradox, ridge regression, and random effects in mixed models?
Reference: James, W., Stein, C., Estimation with Quadratic Loss. Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics, 361--379, University of California Press, Berkeley, Calif., 1961
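A minimal simulation sketch of this effect, taking $\sigma^2=1$ as known (everything below is a toy example):

set.seed(1)
n <- 10; theta <- rnorm(n, 0, 2); sims <- 10000
mse <- replicate(sims, {
  x  <- rnorm(n, theta, 1)              # one observation of each component
  js <- (1 - (n - 2) / sum(x^2)) * x    # James-Stein shrinkage towards zero
  c(mle = sum((x - theta)^2), js = sum((js - theta)^2))
})
rowMeans(mse)  # the James-Stein row comes out smaller on average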
Principal Component Analysis is key to the important topic of dimension reduction, and it's based on the Singular Value Decomposition: for each $N\times p$ real matrix $X$ (although the theorem easily generalizes to complex matrices) we can write
$$X=UDV^T$$
where $U$ of size $N \times p$ is orthogonal, $D$ is a $p \times p$ diagonal matrix with nonnegative diagonal elements and $V$ of size $p \times p$ is again orthogonal. For proofs and algorithms on how to compute it see: Golub, G., and Van Loan, C. (1983), Matrix Computations, Johns Hopkins University Press, Baltimore.
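A quick numerical check of the SVD/PCA connection on toy data (nothing below refers to a real dataset):

X <- scale(matrix(rnorm(200 * 5), 200, 5), scale = FALSE)  # centered toy data matrix
s <- svd(X)
s$d^2 / sum(s$d^2)                        # proportion of variance per principal component
prcomp(X)$sdev^2 / sum(prcomp(X)$sdev^2)  # the same proportions obtained via prcomp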
Mercer's theorem is the founding stone for a lot of different ML methods: thin plate splines, support vector machines, the Kriging estimate of a Gaussian random process, etc. Basically, it is one of the two theorems behind the so-called kernel trick. Let $K(x,y):[a,b]\times[a,b]\to\mathbb{R}$ be a symmetric continuous function or kernel. If $K$ is positive semidefinite, then it admits an orthonormal basis of eigenfunctions corresponding to nonnegative eigenvalues:
$$K(x,y)=\sum_{i=1}^\infty\gamma_i \phi_i(x)\phi_i(y)$$
The importance of this theorem for ML theory is testified by the number of references it gets in famous texts, such as for example Rasmussen & Williams text on Gaussian processes.
Reference: J. Mercer, Functions of positive and negative type, and their connection with the theory of integral equations. Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character, 209:415-446, 1909
There is also a simpler presentation in Konrad Jörgens, Linear integral operators, Pitman, Boston, 1982.
The other theorem which, together with Mercer's theorem, lays out the theoretical foundation of the kernel trick, is the representer theorem. Suppose you have a sample space $\mathcal{X}$ and a symmetric positive semidefinite kernel $K: \mathcal{X} \times \mathcal{X}\to \mathbb{R}$. Also let $\mathcal{H}_K$ be the RKHS associated with $K$. Finally, let $S=\{\mathbb{x}_i,y_i\}_{i=1}^n$ be a training sample. The theorem says that among all functions $f\in \mathcal{H}_K$, which all admit an infinite representation in terms of eigenfunctions of $K$ because of Mercer's theorem, the one that minimizes the regularized risk always has a finite representation in the basis formed by the kernel evaluated at the $n$ training points, i.e.
$$\min_{f \in \mathcal{H}_K} \sum_{i=1}^n L(y_i,f(x_i))+\lambda||f||^2_{\mathcal{H}_K}=\min_{\{c_j\}_1^\infty} \sum_{i=1}^n L(y_i,\sum_j^\infty c_j\phi_j(x_i))+\lambda\sum_j^\infty \frac{c_j^2}{\gamma_j}=\sum_{i=1}^n\alpha_i K(x,x_i)$$
(the theorem is the last equality). References: Wahba, G. 1990, Spline Models for Observational Data, SIAM, Philadelphia.
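A bare-bones kernel ridge regression in one dimension makes the finite representation visible: the fitted function is just a weighted sum of kernels centered at the training points (toy data, Gaussian kernel, all names illustrative):

rbf <- function(x1, x2, gamma = 1) exp(-gamma * outer(x1, x2, function(a, b) (a - b)^2))
x <- seq(0, 2 * pi, length.out = 50); y <- sin(x) + rnorm(50, sd = 0.2)
K <- rbf(x, x)
alpha <- solve(K + 0.1 * diag(length(x)), y)               # the n coefficients alpha_i
f_hat <- function(xnew) as.vector(rbf(xnew, x) %*% alpha)  # f(x) = sum_i alpha_i K(x, x_i)
plot(x, y); lines(x, f_hat(x))                             # smooth fit through the noisy sine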
The universal approximation theorem has been already cited by user Tobias Windisch and is much less relevant to Machine Learning than it is to functional analysis, even if it may not seem so at a first glance. The problem is that the theorem only says that such a network exists, but:
it doesn't give any relation between the size $N$ of the hidden layer and some measure of complexity of the target function $f(x)$, such as for example Total Variation. If $f(x)=\sin(\omega x):[0,2\pi]\to[-1,1]$ and the $N$ required for a fixed error $\epsilon$ grew exponentially with $\omega$, then single hidden layer neural networks would be useless.
it doesn't say if the network $F(x)$ is learnable. In other words assume that given $f$ and $\epsilon$, we know that a size $N$ NN will approximate $f$ with the required tolerance in the hypercube. Then by using training sets of size $M$ and a learning procedure such as for example back-prop, do we have any guarantee that by increasing $M$ we can recover $F$?
finally, and worst of all, it doesn't say anything about the prediction error of neural networks. What we're really interested in is an estimate of the prediction error, at least averaged over all training sets of size $M$. The theorem doesn't help in this respect.
A smaller pain point with Hornik's version of this theorem is that it doesn't hold for ReLU activation functions. However, Bartlett has since proved an extended version which covers this gap.
Until now, I guess all the theorems I considered were well-known to anybody. So now it's time for the fun stuff :-) Let's see a few Deep Learning theorems:
Assumptions:
the deep neural network $\Phi(X,W)$ (for fixed $W$, $\Phi_W(X)$ is the function which associates the inputs of the neural network with its outputs) and the regularization loss $\Theta(W)$ are both sums of positively homogeneous functions of the same degree
the loss function $L(Y,\Phi(X,W))$ is convex and at least once differentiable in $X$, in a compact set $S$
Then:
any local minimum for $L(Y,\Phi(X,W))+\lambda\Theta(W)$ such that a subnetwork of $\Phi(X,W)$ has zero weights, is a global minimum (Theorem 1)
above a critical network size, local descent will always converge to a global minimum from any initialization (Theorem 2).
This is very interesting: CNNs made only of convolutional layers, ReLU, max-pooling, fully connected ReLU and linear layers are positively homogenous functions, while if we include sigmoid activation functions, this isn't true anymore, which may partly explain the superior performance in some applications of ReLU + max pooling with respect to sigmoids. What's more, the theorems only hold if also $\Theta$ is positively homogeneous in $W$ of the same degree as $\Phi$. Now, the fun fact is that $l_1$ or $l_2$ regularization, although positively homogeneous, don't have the same degree of $\Phi$ (the degree of $\Phi$, in the simple CNN case mentioned before, increases with the number of layers). Instead, more modern regularization methods such as batch normalization and path-SGD do correspond to a positively homogeneous regularization function of the same degree as $\Phi$, and dropout, while not fitting this framework exactly, holds strong similarities to it. This may explain why, in order to get high accuracy with CNNs, $l_1$ and $l_2$ regularization are not enough, but we need to employ all kinds of devilish tricks, such as dropout and batch normalization! To the best of my knowledge, this is the closest thing to an explanation of the efficacy of batch normalization, which is otherwise very obscure, as correctly noted by Al Rahimi in his talk.
Another observation that some people make, based on Theorem 1, is that it could explain why ReLUs work well, even with the problem of dead neurons. According to this intuition, the fact that, during training, some ReLU neurons "die" (go to zero activation and then never recover from that, since for $x<0$ the gradient of ReLU is zero) is "a feature, not a bug", because if we have reached a minimum and a full subnetwork has died, then we've provably reached a global minimum (under the hypotheses of Theorem 1). I may be missing something, but I think this interpretation is far-fetched. First of all, during training ReLUs can "die" well before we have reached a local minimum. Secondly, it has to be proved that when ReLU units "die", they always do it over a full subnetwork: the only case where this is trivially true is when you have just one hidden layer, in which case of course each single neuron is a subnetwork. But in general I would be very cautious in seeing "dead neurons" as a good thing.
References:
B. Haeffele and R. Vidal, Global optimality in neural network training,
In IEEE Conference on Computer Vision and Pattern Recognition,
2017.
B. Haeffele and R. Vidal. Global optimality in tensor factorization,
deep learning, and beyond, arXiv, abs/1506.07540, 2015.
Image classification requires learning representations which are invariant (or at least robust, i.e., very weakly sensitive) to various transformations such as location, pose, viewpoint, lighting, expression, etc. which are commonly present in natural images, but do not contain info for the classification task. Same thing for speech recognition: changes in pitch, volume, pace, accent. etc. should not lead to a change in the classification of the word. Operations such as convolution, max pooling, average pooling, etc., used in CNNs, have exactly this goal, so intuitively we expect that they would work for these applications. But do we have theorems to support this intuition? There is a vertical translation invariance theorem, which, notwithstanding the name, has nothing to do with translation in the vertical direction, but it's basically a result which says that features learnt in following layers get more and more invariant, as the number of layers grows. This is opposed to an older horizontal translation invariance theorem which however holds for scattering networks, but not for CNNs.
The theorem is very technical, however:
assume $f$ (your input image) is square-integrable
assume your filter commutes with the translation operator $T_t$, which maps the input image $f$ to a translated copy of itself $T_t f$. A learned convolution kernel (filter) satisfies this hypothesis.
assume all filters, nonlinearities and pooling in your network satisfy a so-called weak admissibility condition, which is basically some sort of weak regularity and boundedness conditions. These conditions are satisfied by learned convolution kernel (as long as some normalization operation is performed on each layer), ReLU, sigmoid, tanh, etc, nonlinearities, and by average pooling, but not by max-pooling. So it covers some (not all) real world CNN architectures.
Assume finally that each layer $n$ has a pooling factor $S_n> 1$, i.e., pooling is applied in each layer and effectively discards information. The condition $S_n\geq 1 $ would also suffice for a weaker version of the theorem.
Indicate with $\Phi^n(f)$ the output of layer $n$ of the CNN, when the input is $f$. Then finally:
$$\lim_{n\to\infty}|||\Phi^n(T_t f)-\Phi^n(f)|||=0$$
(the triple bars are not an error) which basically means that each layer learns features which become more and more invariant, and in the limit of an infinitely deep network we have a perfectly invariant architecture. Since CNNs have a finite number of layers, they're not perfectly translation-invariant, which is something well-known to practitioners.
Reference: T. Wiatowski and H. Bolcskei, A Mathematical Theory of Deep Convolutional Neural Networks for Feature Extraction, arXiv:1512.06293v3.
To conclude, numerous bounds for the generalization error of a Deep Neural Network based on its Vapnik-Chervonenkis dimension or on the Rademacher complexity grow with the number of parameters (some even exponentially), which means they can't explain why DNNs work so well in practice even when the number of parameters is considerably larger than the number of training samples. As a matter of fact, VC theory is not very useful in Deep Learning.
Conversely, some results from last year bound the generalization error of a DNN classifier with a quantity which is independent of the neural network's depth and size, but depends only on the structure of the training set and the input space. Under some pretty technical assumptions on the learning procedure, and on the training set and input space, but with very little assumptions on the DNN (in particular, CNNs are fully covered), then with probability at least $1-\delta$, we have
$$\text{GE} \leq \sqrt{2\log{2}N_y\frac{\mathcal{N_{\gamma}}}{m}}+\sqrt{\frac{2\log{(1/\delta)}}{m}}$$
where:
$\text{GE}$ is the generalization error, defined as the difference between the expected loss (the average loss of the learned classifier on all possible test points) and the empirical loss (just the good ol' training set error)
$N_y$ is the number of classes
$m$ is the size of the training set
$\mathcal{N_{\gamma}}$ is the covering number of the data, a quantity related to the structure of the input space and to the minimal separation among points of different classes in the training set. Reference:
J. Sokolic, R. Giryes, G. Sapiro, and M. Rodrigues. Generalization
error of invariant classifiers. In AISTATS, 2017 | What are the main theorems in Machine (Deep) Learning? | As I wrote in the comments, this question seems too broad to me, but I'll make an attempt to an answer. In order to set some boundaries, I will start with a little math which underlies most of ML, and | What are the main theorems in Machine (Deep) Learning?
As I wrote in the comments, this question seems too broad to me, but I'll make an attempt to an answer. In order to set some boundaries, I will start with a little math which underlies most of ML, and then concentrate on recent results for DL.
The bias-variance tradeoff is referred to in countless books, courses, MOOCs, blogs, tweets, etc. on ML, so we can't start without mentioning it:
$$\mathbb{E}[(Y-\hat{f}(X))^2|X=x_0]=\sigma_{\epsilon}^2+\left(\mathbb{E}\hat{f}(x_0)-f(x_0)\right)^2+\mathbb{E}\left[\left(\hat{f}(x_0)-\mathbb{E}\hat{f}(x_0)\right)^2\right]=\text{Irreducible error + Bias}^2 \text{ + Variance}$$
Proof here: https://web.stanford.edu/~hastie/ElemStatLearn/
The Gauss-Markov Theorem (yes, linear regression will remain an important part of Machine Learning, no matter what: deal with it) clarifies that, when the linear model is true and some assumptions on the error term are valid, OLS has the minimum mean squared error (which in the above expression is just $\text{Bias}^2 \text{ + Variance}$) only among the unbiased linear estimators of the linear model. Thus there could well be linear estimators with bias (or nonlinear estimators) which have a better mean square error, and thus a better expected prediction error, than OLS. And this paves the way to all the regularization arsenal (ridge regression, LASSO, weight decay, etc.) which is a workhorse of ML. A proof is given here (and in countless other books):
https://www.amazon.com/Linear-Statistical-Models-James-Stapleton/dp/0470231467
Probably more relevant to the explosion of regularization approaches, as noted by Carlos Cinelli in the comments, and definitely more fun to learn about, is the James-Stein theorem. Consider $n$ independent, same variance but not same mean Gaussian random variables:
$$X_i\mid\theta_i\sim \mathcal{N}(\theta_i,\sigma^2), \quad i=1,\dots,n$$
in other words, we have an $n-$components Gaussian random vector $\mathbf{X}\sim \mathcal{N}(\boldsymbol{\theta},\sigma^2I)$. We have one sample $\mathbf{x}$ from $\mathbf{X}$ and we want to estimate $\boldsymbol{\theta}$. The MLE (and also UMVUE) estimator is obviously $\hat{\boldsymbol{\theta}}_{MLE}=\mathbf{x}$. Consider the James-Stein estimator
$$\hat{\boldsymbol{\theta}}_{JS}= \left(1-\frac{(n-2)\sigma^2}{||\mathbf{x}||^2}\right)\mathbf{x} $$
Clearly, if $(n-2)\sigma^2\leq||\mathbf{x}||^2$, $\hat{\boldsymbol{\theta}}_{JS}$ shrinks the MLE estimate towards zero. The James-Stein theorem states that for $n\geq4$, $\hat{\boldsymbol{\theta}}_{JS}$ strictly dominates $\hat{\boldsymbol{\theta}}_{MLE}$, i.e., it has lower MSE $\forall \ \boldsymbol{\theta}$. Pheraps surprisingly, even if we shrink towards any other constant $\boldsymbol{c}\neq \mathbf{0}$, $\hat{\boldsymbol{\theta}}_{JS}$ still dominates $\hat{\boldsymbol{\theta}}_{MLE}$. Since the $X_i$ are independent, it may seem weird that, when trying to estimate the height of three unrelated persons, including a sample from the number of apples produced in Spain, may improve our estimate on average. The key point here is "on average": the mean square error for the simultaneous estimation of all the components of the parameter vector is smaller, but the square error for one or more components may well be larger, and indeed it often is, when you have "extreme" observations.
Finding out that MLE, which was indeed the "optimal" estimator for the univariate estimation case, was dethroned for multivariate estimation, was quite a shock at the time, and led to a great interest in shrinkage, better known as regularization in ML parlance. One could note some similarities with mixed models and the concept of "borrowing strength": there is indeed some connection, as discussed here
Unified view on shrinkage: what is the relation (if any) between Stein's paradox, ridge regression, and random effects in mixed models?
Reference: James, W., Stein, C., Estimation with Quadratic Loss. Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics, 361--379, University of California Press, Berkeley, Calif., 1961
Principal Component Analysis is key to the important topic of dimension reduction, and it's based on the Singular Value Decomposition: for each $N\times p$ real matrix $X$ (although the theorem easily generalizes to complex matrices) we can write
$$X=UDV^T$$
where $U$ of size $N \times p$ has orthonormal columns, $D$ is a $p \times p$ diagonal matrix with nonnegative diagonal elements, and $V$ of size $p \times p$ is orthogonal. For proofs and algorithms on how to compute it see: Golub, G., and Van Loan, C. (1983), Matrix Computations, Johns Hopkins University Press, Baltimore.
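In practice this is a one-liner; here is a minimal numpy sketch (my own, with made-up data) showing the thin SVD and how PCA reads scores and explained variances off it:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 5))
Xc = X - X.mean(axis=0)                    # center before PCA

U, d, Vt = np.linalg.svd(Xc, full_matrices=False)   # thin SVD: Xc = U diag(d) V^T
scores = U * d                             # principal component scores (= Xc @ Vt.T)
explained_var = d ** 2 / (len(Xc) - 1)     # variances along the principal directions

print(np.allclose(U @ np.diag(d) @ Vt, Xc))          # reconstruction check: True
print(explained_var)
```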
Mercer's theorem is the cornerstone of a lot of different ML methods: thin plate splines, support vector machines, the Kriging estimate of a Gaussian random process, etc. Basically, it is one of the two theorems behind the so-called kernel trick. Let $K(x,y):[a,b]\times[a,b]\to\mathbb{R}$ be a symmetric continuous function or kernel. If $K$ is positive semidefinite, then it admits an orthonormal basis of eigenfunctions corresponding to nonnegative eigenvalues:
$$K(x,y)=\sum_{i=1}^\infty\gamma_i \phi_i(x)\phi_i(y)$$
The importance of this theorem for ML theory is attested by the number of references it gets in famous texts, such as, for example, Rasmussen & Williams' text on Gaussian processes.
Reference: J. Mercer, Functions of positive and negative type, and their connection with the theory of integral equations. Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character, 209:415-446, 1909
There is also a simpler presentation in Konrad Jörgens, Linear integral operators, Pitman, Boston, 1982.
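A finite-sample analogue is easy to inspect numerically: the Gram matrix of a positive semidefinite kernel on a sample has nonnegative eigenvalues and is recovered by its spectral expansion, mirroring the theorem. The RBF kernel, length scale and sample below are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(-1.0, 1.0, 60)

# RBF kernel, a standard positive semidefinite choice
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.3 ** 2)

evals, evecs = np.linalg.eigh(K)       # Gram-matrix analogue of Mercer's expansion
print(evals.min() > -1e-10)            # eigenvalues are (numerically) nonnegative
# K is recovered from its spectral expansion, mirroring K(x,y) = sum_i gamma_i phi_i(x) phi_i(y)
print(np.allclose((evecs * evals) @ evecs.T, K))
```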
The other theorem which, together with Mercer's theorem, lays out the theoretical foundation of the kernel trick, is the representer theorem. Suppose you have a sample space $\mathcal{X}$ and a symmetric positive semidefinite kernel $K: \mathcal{X} \times \mathcal{X}\to \mathbb{R}$. Also let $\mathcal{H}_K$ be the RKHS (reproducing kernel Hilbert space) associated with $K$. Finally, let $S=\{\mathbf{x}_i,y_i\}_{i=1}^n$ be a training sample. The theorem says that among all functions $f\in \mathcal{H}_K$, which all admit an infinite representation in terms of eigenfunctions of $K$ because of Mercer's theorem, the one that minimizes the regularized risk always has a finite representation in the basis formed by the kernel evaluated at the $n$ training points, i.e.
$$\min_{f \in \mathcal{H}_K} \sum_{i=1}^n L(y_i,f(x_i))+\lambda||f||^2_{\mathcal{H}_K}=\min_{\{c_j\}_{j=1}^\infty} \sum_{i=1}^n L\left(y_i,\sum_{j=1}^\infty c_j\phi_j(x_i)\right)+\lambda\sum_{j=1}^\infty \frac{c_j^2}{\gamma_j}$$
and the minimizer can be written as
$$\hat{f}(x)=\sum_{i=1}^n\alpha_i K(x,x_i)$$
(this finite representation is the content of the theorem). Reference: Wahba, G. (1990), Spline Models for Observational Data, SIAM, Philadelphia.
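For squared loss the coefficients $\alpha_i$ have a closed form, which gives kernel ridge regression. The sketch below (my own; kernel, length scale and data are illustrative) shows the finite representation at work: the fitted function is just a weighted sum of kernels centered at the training points.

```python
import numpy as np

rng = np.random.default_rng(5)
n, lam, ls = 40, 0.1, 0.25
x = rng.uniform(0.0, 1.0, n)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=n)

def k(a, b):
    # RBF kernel (illustrative choice)
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

K = k(x, x)
alpha = np.linalg.solve(K + lam * np.eye(n), y)   # closed form for squared loss

x_new = np.linspace(0.0, 1.0, 200)
f_hat = k(x_new, x) @ alpha        # finite representation: sum_i alpha_i K(., x_i)
print(np.mean((K @ alpha - y) ** 2))              # training error of the fit
```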
The universal approximation theorem has already been cited by user Tobias Windisch and is much less relevant to Machine Learning than it is to functional analysis, even if it may not seem so at first glance. The problem is that the theorem only says that such a network exists, but:
it doesn't give any relation between the size $N$ of the hidden layer and some measure of complexity of the target function $f(x)$, such as for example Total Variation. If $f(x)=\sin(\omega x):[0,2\pi]\to[-1,1]$ and the $N$ required for a fixed error $\epsilon$ grew exponentially with $\omega$, then single hidden layer neural networks would be useless (the numerical sketch below illustrates the point).
it doesn't say whether the network $F(x)$ is learnable. In other words, assume that, given $f$ and $\epsilon$, we know that a size $N$ NN will approximate $f$ with the required tolerance in the hypercube. Then, by using training sets of size $M$ and a learning procedure such as for example back-prop, do we have any guarantee that by increasing $M$ we can recover $F$?
finally, and worst of all, it doesn't say anything about the prediction error of neural networks. What we're really interested in is an estimate of the prediction error, at least averaged over all training sets of size $M$. The theorem doesn't help in this respect.
A smaller pain point is that Hornik's version of this theorem doesn't hold for ReLU activation functions. However, Bartlett has since proved an extended version which covers this gap.
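To give a feeling for the first point, here is a rough numerical sketch (entirely my own construction, not a statement of the theorem): a single hidden ReLU layer with random weights, where only the output layer is fit by least squares. For a fixed target accuracy, faster-oscillating targets typically require a much wider hidden layer.

```python
import numpy as np

rng = np.random.default_rng(6)
x = np.linspace(0.0, 2.0 * np.pi, 2000)[:, None]

def relu_features(x, n_hidden):
    # random first-layer weights and biases; only the output layer is trained
    W = rng.normal(0.0, 4.0, (1, n_hidden))
    b = rng.uniform(-8.0 * np.pi, 8.0 * np.pi, n_hidden)
    return np.maximum(x @ W + b, 0.0)

for omega in (1, 4, 16):
    y = np.sin(omega * x).ravel()
    for n_hidden in (20, 200, 2000):
        H = relu_features(x, n_hidden)
        coef, *_ = np.linalg.lstsq(H, y, rcond=None)
        max_err = np.max(np.abs(H @ coef - y))
        print(omega, n_hidden, round(max_err, 3))   # larger omega typically needs larger N
```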
Until now, I guess all the theorems I considered were well known to everybody. So now it's time for the fun stuff :-) Let's see a few Deep Learning theorems:
Assumptions:
the deep neural network $\Phi(X,W)$ (for fixed $W$, $\Phi_W(X)$ is the function which associates the inputs of the neural network with its outputs) and the regularization loss $\Theta(W)$ are both sums of positively homogeneous functions of the same degree
the loss function $L(Y,\Phi(X,W))$ is convex and at least once differentiable in $X$, in a compact set $S$
Then:
any local minimum for $L(Y,\Phi(X,W))+\lambda\Theta(W)$ such that a subnetwork of $\Phi(X,W)$ has zero weights is a global minimum (Theorem 1)
above a critical network size, local descent will always converge to a global minimum from any initialization (Theorem 2).
This is very interesting: CNNs made only of convolutional layers, ReLU, max-pooling, fully connected ReLU and linear layers are positively homogeneous functions, while if we include sigmoid activation functions, this isn't true anymore, which may partly explain the superior performance in some applications of ReLU + max pooling with respect to sigmoids. What's more, the theorems only hold if $\Theta$ is also positively homogeneous in $W$ of the same degree as $\Phi$. Now, the fun fact is that $l_1$ or $l_2$ regularization, although positively homogeneous, doesn't have the same degree as $\Phi$ (the degree of $\Phi$, in the simple CNN case mentioned before, increases with the number of layers). Instead, more modern regularization methods such as batch normalization and path-SGD do correspond to a positively homogeneous regularization function of the same degree as $\Phi$, and dropout, while not fitting this framework exactly, holds strong similarities to it. This may explain why, in order to get high accuracy with CNNs, $l_1$ and $l_2$ regularization are not enough, but we need to employ all kinds of devilish tricks, such as dropout and batch normalization! To the best of my knowledge, this is the closest thing to an explanation of the efficacy of batch normalization, which is otherwise very obscure, as correctly noted by Al Rahimi in his talk.
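The positive homogeneity in the weights is easy to verify on a toy network. The two-layer ReLU net below (my own minimal example) is homogeneous of degree 2: scaling every weight by $\alpha>0$ scales the output by $\alpha^2$, while an $l_2$ penalty on the weights stays at degree 2 regardless of depth, which is exactly the degree mismatch discussed above for deeper networks.

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(size=5)
W1 = rng.normal(size=(8, 5))
W2 = rng.normal(size=(1, 8))

def net(W1, W2, x):
    # linear -> ReLU -> linear
    return W2 @ np.maximum(W1 @ x, 0.0)

alpha = 3.0
print(net(alpha * W1, alpha * W2, x), alpha ** 2 * net(W1, W2, x))  # equal: degree-2 homogeneity
# the l2 penalty ||W1||^2 + ||W2||^2 is also degree 2 here, but a deeper ReLU network
# has higher degree while the penalty stays at 2 -- the mismatch mentioned in the text
print(np.sum((alpha * W1) ** 2) + np.sum((alpha * W2) ** 2),
      alpha ** 2 * (np.sum(W1 ** 2) + np.sum(W2 ** 2)))
```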
Another observation that some people make, based on Theorem 1, is that it could explain why ReLU work well, even with the problem of dead neurons. According to this intuition, the fact that, during training, some ReLU neurons "die" (go to zero activation and then never recover from that, since for $x<0$ the gradient of ReLU is zero) is "a feature, not a bug", because if we have reached a minimum and a full subnetwork has died, then we've provably reached a global minimum (under the hypotheses of Theorem 1). I may be missing something, but I think this interpretation is far-fetched. First of all, during training ReLUs can "die" well before we have reached a local minimum. Secondly, it has to be proved that when ReLU units "die", they always do it over a full subnetwork: the only case where this is trivially true is when you have just one hidden layer, in which case of course each single neuron is a subnetwork. But in general I would be very cautious in seeing "dead neurons" as a good thing.
References:
B. Haeffele and R. Vidal, Global optimality in neural network training, in IEEE Conference on Computer Vision and Pattern Recognition, 2017.
B. Haeffele and R. Vidal, Global optimality in tensor factorization, deep learning, and beyond, arXiv:1506.07540, 2015.
Image classification requires learning representations which are invariant (or at least robust, i.e., very weakly sensitive) to various transformations such as location, pose, viewpoint, lighting, expression, etc., which are commonly present in natural images but carry no information for the classification task. Same thing for speech recognition: changes in pitch, volume, pace, accent, etc. should not lead to a change in the classification of the word. Operations such as convolution, max pooling, average pooling, etc., used in CNNs, have exactly this goal, so intuitively we expect that they would work for these applications. But do we have theorems to support this intuition? There is a vertical translation invariance theorem, which, notwithstanding the name, has nothing to do with translation in the vertical direction, but is basically a result which says that features learnt in successive layers get more and more invariant as the number of layers grows. This is opposed to an older horizontal translation invariance theorem, which however holds for scattering networks but not for CNNs.
The theorem is very technical, however:
assume $f$ (your input image) is square-integrable
assume your filter commutes with the translation operator $T_t$, which maps the input image $f$ to a translated copy of itself $T_t f$. A learned convolution kernel (filter) satisfies this hypothesis.
assume all filters, nonlinearities and pooling in your network satisfy a so-called weak admissibility condition, which is basically some sort of weak regularity and boundedness condition. These conditions are satisfied by learned convolution kernels (as long as some normalization operation is performed on each layer), by ReLU, sigmoid, tanh, etc. nonlinearities, and by average pooling, but not by max-pooling. So it covers some (not all) real world CNN architectures.
Assume finally that each layer $n$ has a pooling factor $S_n> 1$, i.e., pooling is applied in each layer and effectively discards information. The condition $S_n\geq 1 $ would also suffice for a weaker version of the theorem.
Indicate with $\Phi^n(f)$ the output of layer $n$ of the CNN, when the input is $f$. Then finally:
$$\lim_{n\to\infty}|||\Phi^n(T_t f)-\Phi^n(f)|||=0$$
(the triple bars are not an error) which basically means that each layer learns features which become more and more invariant, and in the limit of an infinitely deep network we have a perfectly invariant architecture. Since CNNs have a finite number of layers, they're not perfectly translation-invariant, which is something well-known to practitioners.
Reference: T. Wiatowski and H. Bolcskei, A Mathematical Theory of Deep Convolutional Neural Networks for Feature Extraction, arXiv:1512.06293v3.
To conclude, numerous bounds for the generalization error of a Deep Neural Network based on its Vapnik-Chervonenkis dimension or on the Rademacher complexity grow with the number of parameters (some even exponentially), which means they can't explain why DNNs work so well in practice even when the number of parameters is considerably larger than the number of training samples. As a matter of fact, VC theory is not very useful in Deep Learning.
Conversely, some more recent results bound the generalization error of a DNN classifier with a quantity which is independent of the neural network's depth and size, and depends only on the structure of the training set and the input space. Under some pretty technical assumptions on the learning procedure, on the training set and on the input space, but with very few assumptions on the DNN (in particular, CNNs are fully covered), with probability at least $1-\delta$ we have
$$\text{GE} \leq \sqrt{2\log{2}N_y\frac{\mathcal{N_{\gamma}}}{m}}+\sqrt{\frac{2\log{(1/\delta)}}{m}}$$
where:
$\text{GE}$ is the generalization error, defined as the difference between the expected loss (the average loss of the learned classifier on all possible test points) and the empirical loss (just the good ol' training set error)
$N_y$ is the number of classes
$m$ is the size of the training set
$\mathcal{N_{\gamma}}$ is the covering number of the data, a quantity related to the structure of the input space and to the minimal separation among points of different classes in the training set. Reference:
J. Sokolic, R. Giryes, G. Sapiro, and M. Rodrigues. Generalization
error of invariant classifiers. In AISTATS, 2017 | What are the main theorems in Machine (Deep) Learning?
As I wrote in the comments, this question seems too broad to me, but I'll make an attempt to an answer. In order to set some boundaries, I will start with a little math which underlies most of ML, and |
4,498 | What are the main theorems in Machine (Deep) Learning? | I think the following theorem that you allude to is considered to be pretty fundamental in statistical learning.
Theorem (Vapnik and Chervonenkis, 1971) Let $H$ be a
hypothesis class of functions from a domain $X$ to $\{0, 1\}$ and let the loss function
be the $0 − 1$ loss. Then, the following are equivalent:
$H$ has the uniform convergence property.
$H$ is PAC learnable.
$H$ has a finite VC-dimension.
Proved in a quantitative version here:
V.N. Vapnik and A.Y. Chervonenkis: On the uniform convergence of relative
frequencies of events to their probabilities. Theory of Probability and its Applications,
16(2): 264–280, 1971.
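As a toy illustration of the finite-VC-dimension condition (my own example, not from the paper): 1D threshold classifiers $h_a(x)=\mathbb{1}[x\geq a]$ shatter any single point but no pair of points, so their VC dimension is 1, and the theorem then guarantees uniform convergence and PAC learnability for this class.

```python
import numpy as np
from itertools import product

def threshold_preds(a, x):
    # hypothesis class: h_a(x) = 1 if x >= a else 0
    return (x >= a).astype(int)

def shattered(points):
    # brute force: can thresholds realize every possible labeling of these points?
    thresholds = np.concatenate(([-np.inf], points + 1e-9, [np.inf]))
    achievable = {tuple(int(v) for v in threshold_preds(a, points)) for a in thresholds}
    return all(lab in achievable for lab in product((0, 1), repeat=len(points)))

print(shattered(np.array([0.5])))        # True: a single point can be shattered
print(shattered(np.array([0.2, 0.8])))   # False: the labeling (1, 0) is unreachable
```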
The version formulated above, along with a nice exposition of other results from learning theory, is available here:
Shalev-Shwartz, Shai, and Shai Ben-David. Understanding machine learning: From theory to algorithms. Cambridge university press, 2014. | What are the main theorems in Machine (Deep) Learning? | I think the following theorem that you allude to is considered to be pretty fundamental in statistical learning.
Theorem (Vapnik and Chervonenkis, 1971) Let $H$ be a
hypothesis class of functions from | What are the main theorems in Machine (Deep) Learning?
I think the following theorem that you allude to is considered to be pretty fundamental in statistical learning.
Theorem (Vapnik and Chervonenkis, 1971) Let $H$ be a
hypothesis class of functions from a domain $X$ to $\{0, 1\}$ and let the loss function
be the $0 − 1$ loss. Then, the following are equivalent:
$H$ has the uniform convergence property.
$H$ is PAC learnable.
$H$ has a finite VC-dimension.
Proved in a quantitative version here:
V.N. Vapnik and A.Y. Chervonenkis: On the uniform convergence of relative
frequencies of events to their probabilities. Theory of Probability and its Applications,
16(2): 264–280, 1971.
The version formulated above, along with a nice exposition of other results from learning theory, is available here:
Shalev-Shwartz, Shai, and Shai Ben-David. Understanding machine learning: From theory to algorithms. Cambridge university press, 2014. | What are the main theorems in Machine (Deep) Learning?
I think the following theorem that you allude to is considered to be pretty fundamental in statistical learning.
Theorem (Vapnik and Chervonenkis, 1971) Let $H$ be a
hypothesis class of functions from |
4,499 | What are the main theorems in Machine (Deep) Learning? | The Kernel Trick is a general idea that's used in a lot of places, and comes from a lot of abstract maths about Hilbert Spaces. Way too much theory for me to type (copy...) out into an answer here, but if you skim through this you can get a good idea of its rigorous underpinnings:
http://www.stats.ox.ac.uk/~sejdinov/teaching/atml14/Theory_2014.pdf | What are the main theorems in Machine (Deep) Learning? | The Kernel Trick is a general idea that's used in a lot of places, and comes from a lot of abstract maths about Hilbert Spaces. Way too much theory for me to type (copy...) out into an answer here, bu | What are the main theorems in Machine (Deep) Learning?
The Kernel Trick is a general idea that's used in a lot of places, and comes from a lot of abstract maths about Hilbert Spaces. Way too much theory for me to type (copy...) out into an answer here, but if you skim through this you can get a good idea of its rigorous underpinnings:
http://www.stats.ox.ac.uk/~sejdinov/teaching/atml14/Theory_2014.pdf | What are the main theorems in Machine (Deep) Learning?
The Kernel Trick is a general idea that's used in a lot of places, and comes from a lot of abstract maths about Hilbert Spaces. Way too much theory for me to type (copy...) out into an answer here, bu |
4,500 | What are the main theorems in Machine (Deep) Learning? | My favourite one is the Kraft inequality.
Theorem: For any description method $C$ for finite alphabet $A = \{1,\dots, m\}$, the code word lengths $L_C(1), \dots, L_C(m)$ must satisfy the inequality $\sum_{x \in A} 2 ^{-L_C(x)} \leq 1$.
This inequality relates compression with probability distributions: given a code, the codeword length of an outcome is the negative log probability of that outcome under a model identified by the code.
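A tiny numerical check (my own example): for the binary prefix code $\{0, 10, 110, 111\}$ the codeword lengths are $1, 2, 3, 3$, the Kraft sum equals $1$, and the implied probabilities $2^{-L_C(x)}$ form exactly the distribution for which those lengths are the negative log probabilities.

```python
# codeword lengths of the binary prefix code {0, 10, 110, 111}
lengths = [1, 2, 3, 3]

kraft_sum = sum(2.0 ** -l for l in lengths)
print(kraft_sum)                    # 1.0 <= 1: the Kraft inequality holds

# "ideal" probabilities implied by the code: length = -log2(probability)
probs = [2.0 ** -l for l in lengths]
print(probs, sum(probs))
```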
Further, the no free lunch theorem for machine learning has a less well known sibling the no hyper compression theorem, which states that not all sequences can be compressed. | What are the main theorems in Machine (Deep) Learning? | My favourite one is the Kraft inequality.
Theorem: For any description method $C$ for finite alphabet $A = \{1,\dots, m\}$, the code word lengths $L_C(1), \dots, L_C(2)$ must satisfy the inequality $ | What are the main theorems in Machine (Deep) Learning?
My favourite one is the Kraft inequality.
Theorem: For any description method $C$ for finite alphabet $A = \{1,\dots, m\}$, the code word lengths $L_C(1), \dots, L_C(m)$ must satisfy the inequality $\sum_{x \in A} 2 ^{-L_C(x)} \leq 1$.
This inequality relates compression with probability distributions: given a code, the codeword length of an outcome is the negative log probability of that outcome under a model identified by the code.
Further, the no free lunch theorem for machine learning has a less well known sibling the no hyper compression theorem, which states that not all sequences can be compressed. | What are the main theorems in Machine (Deep) Learning?
My favourite one is the Kraft inequality.
Theorem: For any description method $C$ for finite alphabet $A = \{1,\dots, m\}$, the code word lengths $L_C(1), \dots, L_C(2)$ must satisfy the inequality $ |