Is it possible to give variable sized images as input to a convolutional neural network?
I had to work through this problem today, so I thought I'd share what I found that worked. I found quite a few "this could work in theory" answers and tidbits on the web, but fewer from a practical "here's how you concretely implement this" angle. To implement this using TensorFlow Keras, I had to do the following; perhaps someone else will find that some of these steps can be modified, relaxed, or dropped.
- Set the input of the network to allow for a variable-size input by using "None" as a placeholder dimension in input_shape. See Francois Chollet's answer here.
- Use convolutional layers only until a global pooling operation has occurred (e.g. GlobalMaxPooling2D). Then Dense layers etc. can be used, because the size is now fixed.
- Use a batch size of 1 only. This avoids dealing with mixed sizes within a batch.
- Write a small custom Sequence that creates batches of size 1 from the list of inputs. I did this to avoid dealing with different sizes inside a single NumPy array.
- Use Model.fit_generator on your custom Sequence for training and validation (vs. Model.fit).
- For some reason, Model.predict_generator failed even when using the Sequence as above. I had to resort to using Model.predict on individual inputs.
Note that calls to Model.predict did complain about performance - which is unsurprising given the inefficiency of the solution - but it works!
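For concreteness, here is a minimal sketch of the recipe above, assuming TensorFlow 2.x; the layer sizes, the 10-class problem and the randomly generated images are placeholders, and note that recent Keras versions accept a Sequence directly in Model.fit, whereas the answer uses the older fit_generator.

```python
# Minimal sketch of the recipe above (TensorFlow 2.x assumed).
# Layer sizes, number of classes and the random "images" are placeholders.
import numpy as np
import tensorflow as tf

def build_model(num_classes=10):
    inputs = tf.keras.Input(shape=(None, None, 3))            # variable H and W
    x = tf.keras.layers.Conv2D(32, 3, activation="relu")(inputs)
    x = tf.keras.layers.Conv2D(64, 3, activation="relu")(x)
    x = tf.keras.layers.GlobalMaxPooling2D()(x)               # size is fixed from here on
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model

class SingleImageSequence(tf.keras.utils.Sequence):
    """Yields batches of size 1 so differently sized images never share a batch."""
    def __init__(self, images, labels):
        self.images, self.labels = images, labels
    def __len__(self):
        return len(self.images)
    def __getitem__(self, idx):
        x = self.images[idx][np.newaxis, ...]   # shape (1, H, W, 3)
        y = np.array([self.labels[idx]])
        return x, y

# toy data: images of different sizes
images = [np.random.rand(np.random.randint(64, 128),
                         np.random.randint(64, 128), 3).astype("float32")
          for _ in range(8)]
labels = np.random.randint(0, 10, size=8)

model = build_model()
model.fit(SingleImageSequence(images, labels), epochs=1)   # fit accepts a Sequence in TF2
pred = model.predict(images[0][np.newaxis, ...])           # predict one image at a time
```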
Is it possible to give variable sized images as input to a convolutional neural network?
Yes, simply select an appropriate backbone network that doesn't require the input image to have some precise size -- most networks satisfy this criterion.
Does anyone know any good open source software for visualizing data from database?
I've never tried it, but there's an open source desktop / browser-based visualisation suite called WEAVE (short for Web-based Analysis and Visualization Environment). Like Tableau, it's intended to let you explore data through an interactive click-based interface. Unlike Tableau, it's open source: you can download the source code and install your own version on your own machine, which can be as private or as public as you want it to be. Don't expect anything nearly as slick and user-friendly as Tableau, but it looks like an interesting, powerful project for someone prepared to put the time into learning to use it.
- Source code on GitHub
- WEAVE project home page, including live demos
- Short write-up on Flowing Data
There are some screenshots on their homepage.
Or, you can look into rolling your own. There are some really good open source JavaScript tools for programming data visualisation in a browser. If you don't mind coding some JavaScript and some kind of server-side layer to serve up the data, give these a try:
- Miso Dataset for getting, processing, managing and cleaning the data on the client side in JavaScript (includes a CSV parser)
- D3 for interactive visualisations in SVG (works in every browser except IE8 and earlier and old (v1, v2) Android phones)
- gRaphael for interactive cross-browser standard charts
- Raphael if you need SVG output to work in Internet Explorer 6, 7, and 8
- D34Raphael, which combines D3's visualisation tools with Raphael's IE compatibility and abstraction
If you're good with JavaScript, Raphael is a good way to build something custom-made. Here's a different approach to pumping D3 output through Raphael to be cross-browser. Tip: if you decide to work with Raphael and the latest version is still 2.1.0, I'd advise applying this bug fix to the code. If you're interested in the web programming option, here's a slightly more detailed write-up I wrote on Raphael and D3 for Stack Overflow.
There are also some free (not open source) online datavis suites worth mentioning (probably not suitable for direct DB connection, but worth a look):
- Raw by Density Design - blog introduction - (hit "Choose a data sample" to try it out). Mostly copy-and-paste based; not sure if it has an API that can connect to a database, but good for trying things out quickly.
- Tableau Public - a free-to-use online version of Tableau. The catch is that the data you enter into it and any visualisations you create must be publicly available.
And something completely different: if you have a quality server lying around and you happen to want to make awesome Google-Maps-style tile-based 'slippy' maps using open source tech (probably not what you're looking for - but it's possible!), check out MapBox TileMill. Have a look through the gallery of examples on their home page - some of them are truly stunning. See also the related project Modest Maps, an open source JavaScript library for interacting with maps, developed by Stamen Design (a really highly rated agency specialising in interactive maps). It's considered to be an improvement on the more established OpenLayers. All open source.
WEAVE is the best GUI-based open-source tool I know of for personal visual analysis. The other tools listed are top-of-the-range tools for online publishing of visualisations (for example, D3 is used by and developed by the award-winning NY Times graphics team), and are more often used for visualisation in the context of public-facing communications than exploratory analysis, but they can be used for analysis too.
Does anyone know any good open source software for visualizing data from database?
Point-and-click interfaces seem easier, but in the long run you will benefit from learning to write the code. One advantage of script-based systems over point, click and drag interfaces is the audit trail/history (some GUIs do have a history, but they generally are not as easy to work with as a saved script). If you write some code to create your graph and save it, then it is always easy to rerun it, or to make some small edits and rerun; it is not always easy to remember the set of clicks and drags used to create a prior graph. Scripts will also be much quicker for large numbers of plots. It will take a little more time to write the code for the first plot, but adding only a couple of lines and making some small modifications can let you loop through hundreds or more variables with little additional effort, whereas with a GUI you need to repeat the same set of clicks and drags over and over again for each plot (a minimal sketch of this looping idea is given below). Many of the script-based plotting tools do have GUIs that allow you to use point and click to get started, but help you learn the code and transition to the more powerful methods. I recommend R, which is free and open source and does have some GUIs available (Rcmdr, JGR, RStudio, etc.), as a good option.
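The answer recommends R; purely as an illustration of the "script it once, loop over many variables" point, here is a small Python/matplotlib analogue with made-up column names and output file names.

```python
# Illustration of the "script it once, loop over many variables" point.
# Column names and output file names are invented for the example.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame(np.random.randn(200, 5),
                  columns=["var_a", "var_b", "var_c", "var_d", "var_e"])

for col in df.columns:                 # the same few lines cover 5 or 500 variables
    fig, ax = plt.subplots()
    ax.hist(df[col], bins=30)
    ax.set_title(f"Histogram of {col}")
    fig.savefig(f"hist_{col}.png")     # a saved script is its own audit trail
    plt.close(fig)
```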
Does anyone know any good open source software for visualizing data from database?
RapidMiner has good visualisations: http://rapid-i.com/component/option,com_myblog/show,New-Plotters-for-RapidMiner.html/Itemid,172/lang,en/ And of course, there is R + ggplot2, using a web interface (http://labs.dataspora.com/ggplot2/) or a graphical frontend (http://www.deducer.org/pmwiki/index.php?n=Main.PlotBuilder).
Does anyone know any good open source software for visualizing data from database?
You can use the free cloud service at https://my.infocaptor.com/free_data_visualization.php The online version lets you upload any csv/excel data and quickly visualize it; you don't need to log in for that. If you want to work with databases, then you will need to log in, or you can download the software. PS: I am part of the company making this product.
Does anyone know any good open source software for visualizing data from database?
I would use the SCaVis data analysis and data visualization program. It is written in Java and runs on any platform, including Mac and Linux. You can also prototype charts using Python.
Does anyone know any good open source software for visualizing data from database?
There is a new tool called Helical Insight, which is an open source BI tool with which you can create charts, reports, dashboards and various data visualizations. Using it you can create reports in two ways: Self-Service BI and Instant BI. In Self-Service BI you drag and drop the columns you want and add filters to ultimately create insights. Instant BI is a feature in which you can type any business question and get instant insights accordingly. As far as data visualization is concerned, there are inbuilt simple charts and scientific charts, and it is also very easy to embed your own chart in it. Visit www.helicalinsight.com
Does anyone know any good open source software for visualizing data from database?
There actually is a correct answer to this question: Orange. It was already at around its 2.x release at the time the question was posted. On Linux-based systems it can simply be installed from the Python Package Index with pip install orange3, and it is also in the Arch User Repository for Arch Linux, Manjaro, ALARM and other Arch-based distros. There is also a practically identical question on StackExchange mentioning a few more commercial/web alternatives (which is closed and links back here). A few more can be found on Quora, but Orange is the only one I know of that is open source, compiled, and GUI-based all at once. It has a rather polished, aesthetic and minimalistic interface, in my opinion.
Does anyone know any good open source software for visualizing data from database?
Maybe http://www-958.ibm.com/software/data/cognos/manyeyes/ is what you want. Beware that the data you upload is public though. Edit: Sorry, I see you asked for open source. My bad.
Does anyone know any good open source software for visualizing data from database?
There is also a young program for (automated) reading, filtering, processing, interpolating and plotting n-dimensional values of variable size from different sources (such as LibreOffice or CSV files): diaGrabber. You have to use some simple Python commands to create a case. After this you can manipulate the graphical output in an interactive GUI.
Comparing two models using anova() function in R
When you use anova(lm.1, lm.2, test="Chisq"), it performs a chi-square test to compare lm.1 and lm.2 (i.e. it tests whether the reduction in the residual sum of squares is statistically significant or not). Note that this makes sense only if lm.1 and lm.2 are nested models. For example, in the first anova call that you used, the p-value of the test is 0.82. It means that the fitted model "modelAdd" is not significantly different from "modelGen" at the level $\alpha=0.05$. However, using the p-value in the third anova call, the model "modelRec" is significantly different from model "modelGen" at $\alpha=0.1$. Check out ANOVA for Linear Model Fits as well.
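As a rough Python analogue of this nested-model comparison (the answer itself is about R's anova()), here is a statsmodels sketch with invented data and formulas; anova_lm reports the default F test rather than the chi-square variant used above.

```python
# Rough analogue of anova(lm.1, lm.2) for nested linear models, using statsmodels.
# Data and formulas are invented; the default F test is shown (R's call above
# uses test="Chisq").
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
df = pd.DataFrame({"x1": rng.normal(size=100), "x2": rng.normal(size=100)})
df["y"] = 1.0 + 2.0 * df["x1"] + rng.normal(size=100)   # x2 has no real effect

reduced = smf.ols("y ~ x1", data=df).fit()        # nested inside the full model
full = smf.ols("y ~ x1 + x2", data=df).fit()

# Tests whether the drop in residual sum of squares is statistically significant.
print(anova_lm(reduced, full))
```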
Comparing two models using anova() function in R
I agree with the OP that the help for the anova() function is not particularly helpful. The help of anova.lm() (as suggested by @Stat) is much more informative. For me, however, the real eye-opener was "YaRrr! The Pirate's Guide to R", Chap.15.3: The anova() function will take the model objects as arguments, and return an ANOVA testing whether the more complex model is significantly better at capturing the data than the simpler model. If the resulting p-value is sufficiently low (usually less than 0.05), we conclude that the more complex model is significantly better than the simpler model, and thus favor the more complex model. If the p-value is not sufficiently low (usually greater than 0.05), we should favor the simpler model. This explains pretty well (IMO) the strategy behind the comparison, taking model complexity into account. Note that there's another use of the anova() function, namely to compare non-linear regression fits, as explained in this excellent CV post. Briefly, you compare a "group-wise" nonlinear regression fit to a "pooled" fit (same data, same functional form of course). If the anova() function gives you a "significant difference", then you conclude that the groupwise fit describes your data better than the pooled fit. If there's "no significant difference", then the groupwise fits are equivalent to the pooled fit, i.e. "no difference between the curves".
Random forest assumptions
Thanks for a very good question! I will try to give the intuition behind it. In order to understand this, remember the "ingredients" of the random forest classifier (there are some modifications, but this is the general pipeline):
- At each step of building an individual tree we find the best split of the data.
- While building a tree we use not the whole dataset, but a bootstrap sample.
- We aggregate the individual tree outputs by averaging (points 2 and 3 together constitute the more general bagging procedure).
Consider the first point. It is not always possible to find the best split: there are datasets in which every single split leaves exactly one object misclassified. And I think that exactly this point can be confusing: indeed, the behaviour of an individual split is somehow similar to the behaviour of a Naive Bayes classifier. If the variables are dependent, there is no single better split for decision trees, and the Naive Bayes classifier also fails (just to remind you: independence of the variables is the main assumption we make in the Naive Bayes classifier; all other assumptions come from the probabilistic model that we choose). But here comes the great advantage of decision trees: we take any split and continue splitting further, and with the subsequent splits we will find a perfect separation. And as we have no probabilistic model, but just binary splits, we don't need to make any assumptions at all. That was about decision trees, but it also applies to random forest. The difference is that for random forest we use bootstrap aggregation. It has no model underneath, and the only assumption it relies on is that the sample is representative. But this is usually a common assumption. For example, if one class consists of two components, and in our dataset one component is represented by 100 samples and the other component by 1 sample, probably most individual decision trees will see only the first component and the random forest will misclassify the second one. Hope this gives some further understanding.
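To make ingredients 2 and 3 concrete, here is a tiny sketch using plain scikit-learn decision trees on synthetic data; note this is bagging only, not the full random forest algorithm (no random feature subsetting per split), and the dataset is made up.

```python
# Tiny sketch of ingredients 2 and 3 above (bootstrap sample + aggregation),
# using plain scikit-learn decision trees on synthetic data. This is bagging,
# not the full random forest algorithm (no random feature subsetting per split).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
rng = np.random.default_rng(0)

trees = []
for _ in range(50):
    idx = rng.integers(0, len(X), size=len(X))      # bootstrap sample (with replacement)
    trees.append(DecisionTreeClassifier().fit(X[idx], y[idx]))

# Aggregate the individual tree outputs by averaging their votes.
votes = np.mean([t.predict(X) for t in trees], axis=0)
ensemble_pred = (votes > 0.5).astype(int)
print("training accuracy of the bagged ensemble:", (ensemble_pred == y).mean())
```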
Random forest assumptions
In one 2010 paper the authors documented that random forest models unreliably estimated the importance of variables when the variables were multicollinear across multi-dimensional statistical space. I usually check for this before running random forest models. http://www.esajournals.org/doi/abs/10.1890/08-0879.1
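One minimal way to "check for this", sketched here in Python with invented feature names: look at pairwise correlations and variance inflation factors before fitting the forest (how you act on the result is a separate modelling decision).

```python
# Minimal pre-check for multicollinearity before fitting a random forest:
# pairwise correlations plus variance inflation factors. Feature names invented.
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
X = pd.DataFrame({"elevation": rng.normal(size=200)})
X["slope"] = 0.9 * X["elevation"] + 0.1 * rng.normal(size=200)   # deliberately collinear
X["rainfall"] = rng.normal(size=200)

print(X.corr().round(2))                      # quick look at pairwise correlation
vif = [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]
print(pd.Series(vif, index=X.columns, name="VIF"))
```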
How do you do bootstrapping with time series data?
As @cardinal points out, variations on the 'block bootstrap' are a natural approach. Here, depending on the method, you select stretches of the time series, either overlapping or not and of fixed or random length, which can guarantee stationarity of the resampled series (Politis and Romano, 1991), and then stitch them back together to create resampled time series on which you compute your statistic. You can also try to build models of the temporal dependencies, leading to the Markov methods, autoregressive sieves and others, but block bootstrapping is probably the easiest of these methods to implement. Gonçalves and Politis (2011) is a very short review with references. A book-length treatment is Lahiri (2010).
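Here is a minimal numpy sketch of the overlapping, fixed-length ("moving") block bootstrap described above; the block length is arbitrary here and in practice should be tuned to the dependence structure of the series.

```python
# Minimal sketch of an overlapping, fixed-length ("moving") block bootstrap.
# The block length is arbitrary here; in practice it must be chosen with care.
import numpy as np

def moving_block_bootstrap(x, block_length, rng):
    n = len(x)
    # all overlapping blocks of the chosen length
    blocks = np.array([x[i:i + block_length] for i in range(n - block_length + 1)])
    n_blocks = int(np.ceil(n / block_length))
    chosen = rng.integers(0, len(blocks), size=n_blocks)
    return np.concatenate(blocks[chosen])[:n]      # stitch blocks, trim to length n

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=500))                # a dependent toy series
boot_means = [moving_block_bootstrap(x, 25, rng).mean() for _ in range(1000)]
print("bootstrap s.e. of the mean:", np.std(boot_means))
```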
How do you do bootstrapping with time series data?
The resampling method introduced in Efron (1979) was designed for i.i.d. univariate data but is easily extended to multivariate data: if $x_1, \dots, x_n$ is a sample of vectors, the vectors are resampled as whole units in order to maintain the covariance structure of the data in the sample. It is not immediately obvious whether one can resample a time series $x_1, x_2, \dots, x_n$. A time series is essentially a sample of size 1 from a stochastic process, and resampling a sample of size 1 just reproduces the original sample, so one learns nothing by resampling. Therefore, resampling of a time series requires new ideas. Model-based resampling is easily adapted to time series: the resamples are obtained by simulating the time series model. For example, if the model is ARIMA(p,d,q), then the resamples are obtained by simulating an ARIMA(p,q) model with the MLEs (from the differenced series) of the autoregressive and moving average coefficients and the noise variance. The resamples are the sequences of partial sums of the simulated ARIMA(p,q) process. Model-free resampling of time series is accomplished by block resampling, also called the block bootstrap, which can be implemented using the tsboot function in R's boot package. The idea is to break the series into roughly equal-length blocks of consecutive observations, to resample the blocks with replacement, and then to paste the blocks together. For example, if the time series is of length 200 and one uses 10 blocks of length 20, then the blocks are the first 20 observations, the next 20, and so forth. A possible resample is the fourth block (observations 61 to 80), then the last block (observations 181 to 200), then the second block (observations 21 to 40), then the fourth block again, and so on until there are 10 blocks in the resample.
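As a sketch of the model-based approach (the block-bootstrap part would use tsboot in R as stated above), here is a Python/statsmodels version with a made-up series and model order; the parameter names 'ar.L1', 'ma.L1' and 'sigma2' follow current statsmodels conventions and should be checked against your installed version.

```python
# Model-based resampling sketch: fit an ARIMA(1,1,1) to a toy series, simulate
# the fitted ARMA part for the differences, then take partial sums to undo the
# differencing. Model order, series and parameter names are assumptions.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.arima_process import arma_generate_sample

rng = np.random.default_rng(0)
y = pd.Series(np.cumsum(rng.normal(size=300)))     # toy integrated series

res = ARIMA(y, order=(1, 1, 1)).fit()
phi = res.params["ar.L1"]                          # MLEs from the differenced series
theta = res.params["ma.L1"]
sigma = np.sqrt(res.params["sigma2"])

def resample(n):
    # simulate the fitted ARMA(1,1) for the differences, then take partial sums
    d = arma_generate_sample(ar=[1, -phi], ma=[1, theta], nsample=n, scale=sigma)
    return y.iloc[0] + np.cumsum(d)

boot_series = [resample(len(y)) for _ in range(200)]
```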
Motivation for Kolmogorov distance between distributions
Mark, the main reason of which I am aware for the use of K-S is that it arises naturally from Glivenko-Cantelli theorems in univariate empirical processes. The one reference I'd recommend is A. W. van der Vaart, "Asymptotic Statistics", ch. 19. A more advanced monograph is "Weak Convergence and Empirical Processes" by Wellner and van der Vaart. I'd add two quick notes: another measure of distance commonly used for univariate distributions is the Cramér-von Mises distance, which is an $L^2$ distance; and in general vector spaces different distances are employed, the space of interest in many papers being Polish. A very good introduction is Billingsley's "Convergence of Probability Measures". I apologize if I can't be more specific. I hope this helps.
Motivation for Kolmogorov distance between distributions
As a summary, my answer is: if you have an explicit expression for, or can otherwise figure out, what your distance is measuring (which "differences" it gives weight to), then you can say what it is better for. Another, complementary way to analyse and compare such tests is minimax theory. In the end, some tests will be good for some alternatives and some for others. For a given set of alternatives it is sometimes possible to show whether your test has an optimality property in the worst case: this is minimax theory. Some details: you can tell about the properties of two different tests by looking at the sets of alternatives for which they are minimax (if such alternatives exist), i.e. (using the words of Donoho and Jin) by comparing their "optimal detection boundaries" (Link). Let me go distance by distance:
- The KS distance is obtained by calculating the supremum of the difference between the empirical cdf and the cdf. Being a supremum, it will be highly sensitive to local alternatives (a local change in the cdf) but not to global changes (at least, using the L2 distance between cdfs would be less local - am I opening an open door?). However, the most important thing is that it uses the cdf. This implies an asymmetry: you give more importance to changes in the tails of your distribution.
- The Wasserstein metric (is that what you meant by Kantorovich-Rubinstein?) http://en.wikipedia.org/wiki/Wasserstein_metric is ubiquitous and hence hard to compare. For the particular case of W2, it has been used in (Link), and it is related to the L2 distance to the inverse of the cdf. My understanding is that it gives even more weight to the tails, but I think you should read the paper to know more about it.
- The L1 distance between density functions will depend strongly on how you estimate your density function from the data... but otherwise it seems to be a "balanced" test, not giving particular importance to the tails.
To recall and extend the comment I made, which completes the answer: I know you did not mean to be exhaustive, but you could add the Anderson-Darling statistic (see http://en.wikipedia.org/wiki/Anderson%E2%80%93Darling_test). This reminded me of a paper from Jager and Wellner (see Link) which extends/generalises the Anderson-Darling statistic (and includes in particular Tukey's higher criticism). Higher criticism was already shown to be minimax for a wide range of alternatives, and the same is done by Jager and Wellner for their extension. I don't think the minimax property has been shown for the Kolmogorov test. Anyway, understanding for which type of alternative your test is minimax helps you to know where its strength lies, so you should read the paper above.
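To make the "what does each distance weight" discussion concrete, here is a small scipy/numpy sketch computing the three distances discussed for two samples; the heavy-tailed second sample and the kernel density estimates for the L1 distance are my own choices for illustration.

```python
# Compute the three distances discussed above for two samples: the KS (sup)
# distance, the Wasserstein distance, and an L1 distance between kernel density
# estimates (how the densities are estimated is a choice made here for illustration).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(0, 1, size=1000)
y = rng.standard_t(df=3, size=1000)        # heavier tails than the normal sample

ks = stats.ks_2samp(x, y).statistic
w1 = stats.wasserstein_distance(x, y)

grid = np.linspace(-8, 8, 2000)
l1 = np.trapz(np.abs(stats.gaussian_kde(x)(grid) - stats.gaussian_kde(y)(grid)), grid)

print(f"KS: {ks:.3f}, Wasserstein: {w1:.3f}, L1 between KDEs: {l1:.3f}")
```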
Motivation for Kolmogorov distance between distributions
Computational issues are the strongest argument I've heard one way or the other. The single biggest advantage of the Kolmogorov distance is that it's very easy to compute analytically for pretty much any CDF. Most other distance metrics don't have a closed-form expression except, sometimes, in the Gaussian case. The Kolmogorov distance of a sample also has a known sampling distribution given the CDF (I don't think most other ones do), which ends up being related to the Wiener process. This is the basis for the Kolmogorov-Smirnov test for comparing a sample to a distribution, or two samples to each other. On a more functional-analysis note, the sup norm is nice in that (as you mention) it basically defines uniform convergence. This leaves you with norm convergence implying pointwise convergence, so if you're clever about how you define your function sequences you can work within an RKHS and use all of the nice tools that that provides as well.
Motivation for Kolmogorov distance between distributions
I think you have to consider the theoretical vs applied advantages of the different notions of distance. Mathematically natural objects don't necessarily translate well into application. Kolmogorov-Smirnov is the most well known in application, and is entrenched in testing for goodness of fit. I suppose that one of the reasons for this is that when the underlying distribution $F$ is continuous, the distribution of the statistic is independent of $F$. Another is that it can be easily inverted to give confidence bands for the CDF. But it's often used in a different way, where $F$ is estimated by $\hat{F}$ and the test statistic takes the form $$\sup_x | F_n(x) - \hat{F}(x)|.$$ The interest is in seeing how well $\hat{F}$ fits the data, acting as if $\hat{F} = F$, even though the asymptotic theory does not necessarily apply.
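A short sketch of that statistic with $\hat{F}$ taken to be a normal cdf whose parameters are estimated from the same data; as the answer notes, the usual KS null distribution no longer applies in this case, so the p-value reported by scipy is only indicative.

```python
# Sketch of sup_x |F_n(x) - Fhat(x)| with Fhat a normal cdf whose parameters are
# estimated from the same data. The standard KS null distribution does not apply
# here, so the reported p-value is only indicative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=500)

mu_hat, sigma_hat = x.mean(), x.std(ddof=1)
d, p = stats.kstest(x, "norm", args=(mu_hat, sigma_hat))
print(f"sup-distance D = {d:.4f} (naive p-value {p:.3f}, not strictly valid here)")
```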
Motivation for Kolmogorov distance between distributions
I can't give you additional reasons to use the Kolmogorov-Smirnov test. But, I can give you an important reason not to use it. It does not fit the tail of the distribution well. In this regard, a superior distribution fitting test is Anderson-Darling. As a second best, the Chi Square test is pretty good. Both are deemed much superior to the K-S test in this regard.
Motivation for Kolmogorov distance between distributions
From the point of view of functional analysis and measure theory, the $L^p$-type distances do not define measurable sets on spaces of functions (infinite-dimensional spaces lose countable additivity on their metric-ball coverings). This firmly disqualifies any sort of measurable interpretation of the distances in choices 2 & 3. Of course Kolmogorov, being much brighter than any of us posting, especially including myself, anticipated this. The clever bit is that while the distance in the KS test is of the $L^\infty$ (uniform-norm) variety, the uniform norm itself is not used to define the measurable sets. Rather, the sets are part of a stochastic filtration on the differences between the distributions evaluated at the observed values, which is equivalent to a stopping-time problem. In short, the uniform-norm distance of choice 1 is preferable because the test it implies is equivalent to a stopping-time problem, which itself produces computationally tractable probabilities, whereas choices 2 & 3 cannot define measurable subsets of functions.
5,124
What is the distribution of the Euclidean distance between two normally distributed random variables?
The answer to this question can be found in the book Quadratic Forms in Random Variables by Mathai and Provost (1992, Marcel Dekker, Inc.). As the comments clarify, you need to find the distribution of $Q = z_1^2 + z_2^2$ where $z = a - b$ follows a bivariate normal distribution with mean $\mu$ and covariance matrix $\Sigma$. This is a quadratic form in the bivariate random variable $z$. Briefly, one nice general result for the $p$-dimensional case where $z \sim N_p(\mu, \Sigma)$ and $$Q = \sum_{j=1}^p z_j^2$$ is that the moment generating function is $$E(e^{tQ}) = e^{t \sum_{j=1}^p \frac{b_j^2 \lambda_j}{1-2t\lambda_j}}\prod_{j=1}^p (1-2t\lambda_j)^{-1/2}$$ where $\lambda_1, \ldots, \lambda_p$ are the eigenvalues of $\Sigma$ and $b$ is a linear function of $\mu$. See Theorem 3.2a.2 (page 42) in the book cited above (we assume here that $\Sigma$ is non-singular). Another useful representation is 3.1a.1 (page 29), $$Q = \sum_{j=1}^p \lambda_j(u_j + b_j)^2$$ where $u_1, \ldots, u_p$ are i.i.d. $N(0, 1)$. The entire Chapter 4 in the book is devoted to the representation and computation of densities and distribution functions, which is not at all trivial. I am only superficially familiar with the book, but my impression is that all the general representations are in terms of infinite series expansions. So in a certain way the answer to the question is: yes, the distribution of the squared Euclidean distance between two bivariate normal vectors belongs to a known (and well studied) class of distributions parametrized by the four parameters $\lambda_1, \lambda_2 > 0$ and $b_1, b_2 \in \mathbb{R}$. However, I am pretty sure you won't find this distribution in your standard textbooks. Note, moreover, that $a$ and $b$ do not need to be independent; joint normality is enough (which is automatic if they are independent and each normal), in which case the difference $a-b$ follows a normal distribution.
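A minimal Monte Carlo check of the second representation (the convention $b_j = (P^T\mu)_j/\sqrt{\lambda_j}$, with $\Sigma = P\,\mathrm{diag}(\lambda)\,P^T$, is my assumption here and may differ from the book's notation):

import numpy as np

rng = np.random.default_rng(2)
mu = np.array([1.0, -0.5])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])

lam, P = np.linalg.eigh(Sigma)         # eigenvalues lambda_j and orthogonal P
b = (P.T @ mu) / np.sqrt(lam)

n = 200_000
z = rng.multivariate_normal(mu, Sigma, size=n)
Q_direct = (z ** 2).sum(axis=1)                # Q = z'z with z ~ N(mu, Sigma)

u = rng.standard_normal((n, 2))
Q_repr = (lam * (u + b) ** 2).sum(axis=1)      # sum_j lambda_j (u_j + b_j)^2

print(Q_direct.mean(), Q_repr.mean())   # both near trace(Sigma) + mu'mu
print(Q_direct.var(),  Q_repr.var())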
5,125
What is the distribution of the Euclidean distance between two normally distributed random variables?
First, find the bivariate distribution of the difference vector: its mean is $\mu_d = \mu_1 - \mu_2$ and its covariance is simply $\Sigma_d = \Sigma_1 + \Sigma_2$; the latter follows from multivariate uncertainty propagation $\Sigma_d = \mathrm{J} \Sigma_{12} \mathrm{J}^T$, involving the block-diagonal matrix $\Sigma_{12} = \begin{bmatrix} \Sigma_1 & \\ & \Sigma_2 \end{bmatrix}$ and the Jacobian $\mathrm{J} = \begin{bmatrix} +\mathrm{I}, & -\mathrm{I} \end{bmatrix}$. Secondly, look for the distribution of the length of the difference vector, i.e. the radial distance from the origin, which is Hoyt distributed: the radius around the true mean in a bivariate correlated normal random variable with unequal variances, re-written in polar coordinates (radius and angle), follows a Hoyt distribution. The pdf and cdf are defined in closed form; numerical root finding is used to evaluate $\mathrm{cdf}^{-1}$. It reduces to the Rayleigh distribution if the correlation is 0 and the variances are equal. A more general distribution arises if you allow for a biased difference (shifted origin); see Ballistipedia for details.
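A quick numerical check of the propagation step (the covariance matrices below are made-up examples):

import numpy as np

Sigma1 = np.array([[1.0, 0.3],
                   [0.3, 2.0]])
Sigma2 = np.array([[0.5, -0.1],
                   [-0.1, 0.8]])

Sigma12 = np.block([[Sigma1, np.zeros((2, 2))],
                    [np.zeros((2, 2)), Sigma2]])
J = np.hstack([np.eye(2), -np.eye(2)])        # d = [I, -I] [a; b] = a - b

Sigma_d = J @ Sigma12 @ J.T
print(np.allclose(Sigma_d, Sigma1 + Sigma2))  # True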
5,126
What is the distribution of the Euclidean distance between two normally distributed random variables?
Why not test it out?

set.seed(347)
# two independent bivariate standard normal samples
x  <- rnorm(10000); y  <- rnorm(10000)
x2 <- rnorm(10000); y2 <- rnorm(10000)

qdf <- data.frame(x, y, x2, y2)
# Euclidean distance between the paired points
qdf$euclid <- sqrt((x - x2)^2 + (y - y2)^2)

plot(c(x, y), c(x2, y2))    # scatter of the raw coordinates
plot(qdf$euclid)            # the simulated distances
hist(qdf$euclid)            # their histogram
plot(density(qdf$euclid))   # kernel density estimate of the distance
5,127
Survival Analysis tools in Python [closed]
AFAIK, there aren't any survival analysis packages in Python. As mbq comments above, the only route available would be via RPy. Even if there were a pure Python package available, I would be very careful in using it; in particular I would look at: How often does it get updated? Does it have a large user base? Does it have advanced techniques? One of the benefits of R is that these standard packages get a massive amount of testing and user feedback. When dealing with real data, unexpected edge cases can creep in.
5,128
Survival Analysis tools in Python [closed]
Check out the lifelines project for a simple and clean implementation of survival models in Python, including:

- estimators of survival functions
- estimators of cumulative hazard curves
- Cox's proportional hazards regression model
- Cox's time-varying regression model
- parametric AFT models
- Aalen's additive regression model
- multivariate testing

Benefits: built on top of pandas, pure Python and easy to install, built-in plotting functions, a simple interface. Documentation and examples are available on the project site. Example usage:

import numpy as np
from lifelines import KaplanMeierFitter

survival_times = np.array([0., 3., 4.5, 10., 1.])
events = np.array([False, True, True, False, True])  # True = event observed, False = censored

kmf = KaplanMeierFitter()
kmf.fit(survival_times, event_observed=events)

print(kmf.survival_function_)  # Kaplan-Meier estimate of S(t)
print(kmf.median_)             # median survival time (newer releases expose median_survival_time_)
kmf.plot()                     # survival curve with confidence bands

(Example plots from the built-in plotting library omitted.) Disclaimer: I'm the main author. Ping me (email in profile) for questions or feedback about lifelines.
5,129
Survival Analysis tools in Python [closed]
python-asurv is an effort to port the asurv software for survival methods in astronomy. It might be worth keeping an eye on, but cgillespie is right about the things to watch out for: it has a long way to go and development doesn't seem active. (AFAICT only one method exists, and even once completed, the package may be lacking for, say, biostatisticians.) You're probably better off using the survival package in R from Python through something like RPy or PypeR. I haven't had any problems doing this myself.
5,130
Survival Analysis tools in Python [closed]
PyIMSL contains a handful of routines for survival analyses. It is free as in beer for noncommercial use, fully supported otherwise. From the documentation in the Statistics User Guide:

- Computes Kaplan-Meier estimates of survival probabilities: kaplanMeierEstimates()
- Analyzes survival and reliability data using Cox's proportional hazards model: propHazardsGenLin()
- Analyzes survival data using the generalized linear model: survivalGlm()
- Estimates using various parametric models: survivalEstimates()
- Estimates a reliability hazard function using a nonparametric approach: nonparamHazardRate()
- Produces population and cohort life tables: lifeTables()
5,131
Survival Analysis tools in Python [closed]
You can now use R from within IPython, so you might want to look into using IPython with the R extension.
5,132
Survival Analysis tools in Python [closed]
I also want to mention scikit-survival, which provides models for survival analysis that can be easily combined with tools from scikit-learn (e.g. KFold cross-validation). As of this writing, scikit-survival includes implementations of:

- the Nelson-Aalen estimator of the cumulative hazard function
- the Kaplan-Meier estimator of the survival function
- Cox's proportional hazards model, with and without an elastic net penalty
- an accelerated failure time model
- a survival support vector machine
- a gradient-boosted Cox model
- the concordance index for performance evaluation
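A minimal sketch of the Kaplan-Meier estimator through scikit-survival (the toy times and events are assumptions; check the function signature against the documentation for your installed version):

import numpy as np
from sksurv.nonparametric import kaplan_meier_estimator

time = np.array([5.0, 8.0, 12.0, 20.0, 24.0, 30.0])
event = np.array([True, True, False, True, False, True])  # True = event observed

t, surv_prob = kaplan_meier_estimator(event, time)
print(np.column_stack([t, surv_prob]))   # stepwise estimate of S(t)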
5,133
Survival Analysis tools in Python [closed]
Apart from using R through RPy or equivalent, there are a number of survival analysis routines in the statsmodels (formerly scipy.statsmodels) Python library. They are in the "sandbox" package though, meaning they aren't considered ready for production right now. E.g. the Cox proportional hazards model is coded there.
5,134
Why is median age a better statistic than mean age?
Statistics does not provide a good answer to this question, in my opinion. A mean can be relevant in mortality studies, for example, but ages are not as easy to measure as you might think. Older people, illiterate people, and people in some third-world countries tend to round their ages to a multiple of 5 or 10, for instance. The median is more resistant to such errors than the mean. Moreover, median ages are typically 20–40, but people can live to 100 and beyond (an increasing and noticeable proportion of the population of modern countries now lives past 100). People of such ages have 1.5 to 4 times as much influence on the mean as they do on the median, compared with very young people. Thus the median is a somewhat more up-to-date statistic concerning a country's age distribution, and is a little more independent of mortality rates and life expectancy than the mean is. Finally, the median gives us a slightly better picture of what the age distribution itself looks like: when you see a median of 35, for example, you know that half the population is older than 35, and you can infer some things about birth rates, ages of parents, and so on; but if the mean is 35, you can't say as much, because that 35 could be influenced by a large population bulge at age 70, for example, or by a population gap in some age range due to an old war or epidemic. Thus, for demographic rather than statistical reasons, the median appears more worthy of the role of an omnibus value for summarizing the ages of relatively large populations of people.
5,135
Why is median age a better statistic than mean age?
John gave you a good answer on the sister site. One aspect he didn't mention explicitly is robustness: the median, as a measure of central location, does better than the mean because it has a high breakdown point (50%), whereas the mean has a breakdown point of 0 (see Wikipedia for details). Intuitively, this means that a few bad observations do not skew the median, whereas they do skew the mean.
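A tiny illustration of that breakdown behaviour (the ages are made up): a single wild observation drags the mean but leaves the median essentially where it was.

import numpy as np

ages = np.array([21, 25, 30, 34, 38, 41, 45])
contaminated = np.append(ages, 900)      # one absurd recorded age

print(np.mean(ages), np.median(ages))                  # about 33.4 and 34.0
print(np.mean(contaminated), np.median(contaminated))  # mean jumps to ~141.8, median only moves to 36.0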
5,136
Why is median age a better statistic than mean age?
Here's my answer, first posted on math.stackexchange: The median is what many people actually have in mind when they say "mean." It's easier to interpret the median: half the population is above this age and half is below. The mean is a little more subtle. People look for symmetry and sometimes impose symmetry when it isn't there. The age distribution in a population is far from symmetric, so the mean could be misleading. Age distributions are something like a pyramid: lots of children, not many elderly. (Or at least that's how it is in a sort of steady state. In the US, the post-WWII baby boom generation has distorted this distribution as it ages. Some people have called this "squaring the pyramid," because the boomers have made the top of the pyramid wider than it has been in the past.) With an asymmetrical distribution, it may be better to report the median, because the median treats the two halves of the population symmetrically even when the distribution itself is not symmetric.
5,137
Why is median age a better statistic than mean age?
Why is an axe better than a hatchet? That's similar to your question. They just mean and do different things. If one is talking about medians then the story they are trying to convey, the model they are trying to apply to the data, is different than one with means.
5,138
Why is median age a better statistic than mean age?
For a concrete example, consider the mean ages for the Congo (DRC) and Japan. One is devastated by civil war; the other is well developed, with an ageing population. The mean isn't terribly interesting for an apples-to-apples comparison. On the other hand, the median can be informative as a measure of central tendency, since by definition half the population lies above it and half below. The Wikipedia article on Population Pyramid might be enlightening (see the sections on youth bulge and ageing populations).
5,139
Why is median age a better statistic than mean age?
Public health data repositories in the United States are moving toward reporting AGE in five-year increments, due to the HIPAA regulations on the intentional blinding and masking of data for personal-privacy reasons. Given this challenge to what had been, prior to HIPAA, a fairly fine-grained, scale-level data element (based on the difference between date of birth and date of death), we may need to reconsider whether AGE can be described parametrically at all in public health data sets, in favor of models that describe AGE non-parametrically, as an ordinal level of measure.

I know this may seem "over the top" to many factions within the biomedical informatics community, but the idea may have some merit in terms of "interpretation" as described in the comments above. What about all of the analytical power that is available to non-parametric approaches? Yes, it is true that nearly all of us will attempt to apply GLM (general linear model) techniques to a variable that presents itself in distributions that behave the way AGE does. At the same time, the shape of that distribution, and how that shape is determined by interaction effects among multi-dimensional centroids and sub-group centroids present in the distribution, must be taken into consideration. What to do with these very complex data sets? When a data element fails to meet the "assumptions of the model", we progressively scan across (I said across, not down; we should be equal-opportunity employers of method, since each tool comes from the factory with form-follows-function rules) the list of other possible models to find the ones that do not fail the assumption tests.

Given the present format of public health data sets, we in the data visualization community really do need to come up with a more standard model for handling AGE in five-year increments (5YI). My vote for visualizing AGE in this format is to use histograms and box-and-whisker plots. Yes, this means the median. (No pun intended!) Sometimes a picture really is worth a thousand words, and an abstract is a summary of a thousand words. The box-and-whisker plot shows the "shape" of the distribution as a meaningful symbolic representation of the histogram at nearly an iconic level of resolution. Comparing the distributions of five-year age increments with side-by-side box-and-whisker plots, where one can instantly compare the 75th percentile, the 50th percentile (the median), and the 25th percentile, would make an elegant universal standard for comparing AGE across the world.

For those of us who continue to enjoy the thrill of data representation through the textual mechanics of tabular display, the stem-and-leaf diagram may also be of service when employed as an animated visual element in a "sparkline" approach that portrays variation in the shapes of distributions over time. AGE has come of age. It needs to be explored further with the more powerful computational algorithms that are now available.
5,140
Why is median age a better statistic than mean age?
I don't think there is a good descriptive reason for choosing the median over the mean for age distributions. There is, however, a practical one when comparing reported data. Many countries report their population in 5-year age intervals with the top band open-ended. This causes difficulties when calculating the mean from the intervals, especially for the youngest interval (affected by infant mortality rates), for the top "interval" (what is the mean of an 80+ "interval"?), and for the intervals near the top (the mean within each interval is usually lower than its midpoint). It is far easier to estimate the median by interpolating within the median interval, often approximating by assuming a flat or trapezoidal age distribution within that interval (death rates in many countries are relatively low around the median age, which makes this a more reasonable approximation than it would be for the young or the old).
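A small sketch of that interpolation (the grouped counts below are hypothetical; real tables would come from a census report):

import numpy as np

edges  = np.arange(0, 85, 5)      # lower bounds of the intervals 0-4, 5-9, ..., 80+
counts = np.array([9, 9, 8, 8, 7, 7, 7, 6, 6, 6, 5, 5, 4, 4, 3, 3, 3])

cum = np.cumsum(counts)
half = cum[-1] / 2.0
i = np.searchsorted(cum, half)    # interval containing the 50% point
below = cum[i - 1] if i > 0 else 0.0
median_est = edges[i] + 5.0 * (half - below) / counts[i]   # linear interpolation within the interval
print(round(median_est, 2))       # about 26.43 for these counts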
5,141
Why is median age a better statistic than mean age?
To give a useful answer to the original question, we need to know the question behind the question. In other words: "Why do you want some sort of summary statistic comparing the age distributions of different countries?" The median might be the most useful for some questions. The mean might be the most useful for others. And there are probably questions for which "percent above (or below) some particular age" would be the most useful statistic.
5,142
Why is median age a better statistic than mean age?
You're getting good answers here, but let me just add my 2 cents. I work in pharmacometrics, which deals with quantities like blood volume, elimination rate, base level of drug effect, maximum drug effect, and parameters like that. We make a distinction between variables that can take on any value, positive or negative, and variables that can only be positive. An example of a variable that can take on any value would be drug effect, which could be positive, zero, or negative. An example of a variable that can only realistically be positive is blood volume or drug elimination rate. We model these things with distributions that are typically either normal or lognormal: normal for the any-valued ones and lognormal for the positive-only ones. A lognormal random variable is the number $e$ raised to the power of a normally distributed random variable, which is why it can only be positive. For a normally distributed variable, the median, mean, and mode are the same number, so it doesn't matter which you use. For a lognormally distributed variable, however, the mean is larger than both the median and the mode, so it is not really very useful. In fact, the median is $e$ raised to the mean of the underlying normal, so it is a much more attractive measure. Since age (presumably) can never be negative, a lognormal distribution is probably a better description of it than a normal, and so the median ($e$ to the mean of the underlying normal) is more useful.
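A short numerical illustration of why the lognormal mean overstates the "typical" value (the parameters are arbitrary): the mean exp(mu + sigma^2/2) exceeds the median exp(mu), which exceeds the mode exp(mu - sigma^2).

import numpy as np

mu, sigma = 3.5, 0.4
print(np.exp(mu + sigma**2 / 2), np.exp(mu), np.exp(mu - sigma**2))  # mean > median > mode

x = np.random.default_rng(3).lognormal(mu, sigma, size=200_000)
print(x.mean(), np.median(x))    # sample values land close to the theoretical ones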
5,143
Why is median age a better statistic than mean age?
I've been taught that the median should be reported with the range and the mean with the standard deviation. When we talk about age, I think the range is a more relevant way to express the spread, and it is easier for most people to understand. For example, in a study population, "the mean age was 53 (SD 5.4)" versus "the median age was 48 (range 23-77)". For that reason, I would prefer to use the median rather than the mean. But I would be very interested to hear what a statistician or stats pro would say about using the mean with the range; I see this quite a bit in scientific papers.
5,144
Why is median age a better statistic than mean age?
John's answer on math.stackexchange can be viewed as follows: when you have a skewed distribution, the median may be a better summary statistic than the mean. Note that when he says there are more infants than adults, he is essentially suggesting that the age distribution is skewed.
5,145
Why is median age a better statistic than mean age?
The mean age is influenced by outliers in your data set, while the median age is not. Take the example of a data set of vaccinated patients with ages 1, 2, 3, 4, 4, 5, 6, 6, 6, 78 years: the mean is 11.5, while the median age of these patients is 4.5. The mean has been pulled up by the outlier of 78. The median is the better choice when dealing with skewed data sets.
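Checking those numbers directly:

import numpy as np

ages = np.array([1, 2, 3, 4, 4, 5, 6, 6, 6, 78])
print(np.mean(ages))     # 11.5, dragged upward by the single value 78
print(np.median(ages))   # 4.5, unaffected by the outlier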
5,146
Why is median age a better statistic than mean age?
Certainly in the case of demographic analysis, I would think that both the mean and the median would be valuable, especially in combination with each other, if you are looking for outliers or areas of growth that may be missed by the median alone. In communities with a large retirement population, or in an area with a birth-rate explosion, the median alone may not give you the whole picture, and that is where the mean, used in comparison, can be very useful.
5,147
Why is median age a better statistic than mean age?
SHORT ANSWER: Median age is not simply better than mean age; however, you may have noticed that more people use it. So a better question might be: "Why would more demographers use median age than mean age?"

A statistic, as a vocabulary term, has its origins in the state (nominally a legal entity) attempting to understand its human population. So think about the people in those governments and how much information they want or need, as well as how much time they have to devote to understanding the precise mathematical meaning of scientific words. The easiest way to sum up a lot of data, without using a picture, is to report a single number; this is known as an estimator for the parameter in question (in this case, the time elapsed since the birth of a human being, precise to the level of years). As E.T. Jaynes showed in his book Probability Theory: The Logic of Science, one can construct an estimator from a utilitarian loss function summarizing the consequences of making a mistake by using a single number instead of the entire data set when making decisions based on that information. In the book, Jaynes shows via mathematical proof that the mode, or maximum likelihood estimator, is the estimator minimizing a loss shaped like a Dirac delta function. The mean minimizes quadratic loss, under which the further one gets from the estimate, the quantity of loss (undesirable consequence) goes up very quickly once you pass the unit scale. The median, on the other hand, minimizes a V-shaped, absolute-error loss function, under which being off by five units of precision is only five times as undesirable as being off by one unit, rather than 25 times as undesirable (as it is under the mean's quadratic loss). In fact, the unit of precision makes no difference whatsoever, because there is no curvature in such a pointy, triangular loss function. With this theoretical foundation, one could literally draw loss functions that are not symmetric at all and form an infinite number of new estimators custom-suited to the needs of their consumers/users.

Another alternative to dealing with the cultural expectation of a single number is to educate those same users/consumers of information that a measure of central tendency provides more information when paired with other parameters of the distribution, such as the variance, skewness, and kurtosis (one might want to start with just variance and skewness to ease them into it). The variance is just one example of a dispersion measure; another that Jaynes suggests (in other writings) is to form a Bayesian posterior distribution and calculate the width of the shortest credible interval with probability 0.5 (or a confidence interval/standard deviation, etc., if you do not buy into Bayesian theory; please, let's not get sidetracked). A more intuitive method, possibly easier for more people to grasp, is the inter-quartile range, especially when reported with the median as the corresponding measure of central tendency. I am not sure whether non-parametric forms of skewness or kurtosis exist, but if they do they will almost certainly be easier to understand than their parametric analogs.

I have a hunch that a major, if not dominant, part of the reason the median age crops up more often than the mean age is that it simply appeals more to people with less time for, or interest in, theoretical details such as sigma-algebras, Lebesgue measure theory, and so on, which are all technically necessary to understand the more common foundations of probabilistic reasoning.
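A brute-force sketch of the loss-function point (the skewed, age-like sample is an assumption): minimizing average squared error over candidate summaries recovers the mean, while minimizing average absolute error recovers the median.

import numpy as np

rng = np.random.default_rng(4)
ages = rng.gamma(shape=2.0, scale=15.0, size=10_000)   # skewed, age-like sample

grid = np.linspace(ages.min(), ages.max(), 2001)
sq_loss  = np.array([np.mean((ages - c) ** 2) for c in grid])
abs_loss = np.array([np.mean(np.abs(ages - c)) for c in grid])

print(grid[np.argmin(sq_loss)],  ages.mean())       # approximately equal
print(grid[np.argmin(abs_loss)], np.median(ages))   # approximately equal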
5,148
How to interpret coefficients from a polynomial model fit?
My detailed answer is below, but the general (i.e. real) answer to this kind of question is: 1) experiment, mess around, look at the data, you can't break the computer no matter what you do, so ... experiment; or 2) read the documentation. Here is some R code which replicates the problem identified in this question, more or less: # This program written in response to a Cross Validated question # http://stats.stackexchange.com/questions/95939/ # # It is an exploration of why the result from lm(y~x+I(x^2)) # looks so different from the result from lm(y~poly(x,2)) library(ggplot2) epsilon <- 0.25*rnorm(100) x <- seq(from=1, to=5, length.out=100) y <- 4 - 0.6*x + 0.1*x^2 + epsilon # Minimum is at x=3, the expected y value there is 4 - 0.6*3 + 0.1*3^2 ggplot(data=NULL,aes(x, y)) + geom_point() + geom_smooth(method = "lm", formula = y ~ poly(x, 2)) summary(lm(y~x+I(x^2))) # Looks right summary(lm(y ~ poly(x, 2))) # Looks like garbage # What happened? # What do x and x^2 look like: head(cbind(x,x^2)) #What does poly(x,2) look like: head(poly(x,2)) The first lm returns the expected answer: Call: lm(formula = y ~ x + I(x^2)) Residuals: Min 1Q Median 3Q Max -0.53815 -0.13465 -0.01262 0.15369 0.61645 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 3.92734 0.15376 25.542 < 2e-16 *** x -0.53929 0.11221 -4.806 5.62e-06 *** I(x^2) 0.09029 0.01843 4.900 3.84e-06 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.2241 on 97 degrees of freedom Multiple R-squared: 0.1985, Adjusted R-squared: 0.182 F-statistic: 12.01 on 2 and 97 DF, p-value: 2.181e-05 The second lm returns something odd: Call: lm(formula = y ~ poly(x, 2)) Residuals: Min 1Q Median 3Q Max -0.53815 -0.13465 -0.01262 0.15369 0.61645 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 3.24489 0.02241 144.765 < 2e-16 *** poly(x, 2)1 0.02853 0.22415 0.127 0.899 poly(x, 2)2 1.09835 0.22415 4.900 3.84e-06 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.2241 on 97 degrees of freedom Multiple R-squared: 0.1985, Adjusted R-squared: 0.182 F-statistic: 12.01 on 2 and 97 DF, p-value: 2.181e-05 Since lm is the same in the two calls, it has to be the arguments of lm which are different. So, let's look at the arguments. Obviously, y is the same. It's the other parts. Let's look at the first few observations on the right-hand-side variables in the first call of lm. The return of head(cbind(x,x^2)) looks like: x [1,] 1.000000 1.000000 [2,] 1.040404 1.082441 [3,] 1.080808 1.168146 [4,] 1.121212 1.257117 [5,] 1.161616 1.349352 [6,] 1.202020 1.444853 This is as expected. First column is x and second column is x^2. How about the second call of lm, the one with poly? The return of head(poly(x,2)) looks like: 1 2 [1,] -0.1714816 0.2169976 [2,] -0.1680173 0.2038462 [3,] -0.1645531 0.1909632 [4,] -0.1610888 0.1783486 [5,] -0.1576245 0.1660025 [6,] -0.1541602 0.1539247 OK, that's really different. First column is not x, and second column is not x^2. So, whatever poly(x,2) does, it does not return x and x^2. If we want to know what poly does, we might start by reading its help file. So we say help(poly). The description says: Returns or evaluates orthogonal polynomials of degree 1 to degree over the specified set of points x. These are all orthogonal to the constant polynomial of degree 0. Alternatively, evaluate raw polynomials. Now, either you know what "orthogonal polynomials" are or you don't.
If you don't, then use Wikipedia or Bing (not Google, of course, because Google is evil---not as bad as Apple, naturally, but still bad). Or, you might decide you don't care what orthogonal polynomials are. You might notice the phrase "raw polynomials" and you might notice a little further down in the help file that poly has an option raw which is, by default, equal to FALSE. Those two considerations might inspire you to try out head(poly(x, 2, raw=TRUE)) which returns: 1 2 [1,] 1.000000 1.000000 [2,] 1.040404 1.082441 [3,] 1.080808 1.168146 [4,] 1.121212 1.257117 [5,] 1.161616 1.349352 [6,] 1.202020 1.444853 Excited by this discovery (it looks right, now, yes?), you might go on to try summary(lm(y ~ poly(x, 2, raw=TRUE))) This returns: Call: lm(formula = y ~ poly(x, 2, raw = TRUE)) Residuals: Min 1Q Median 3Q Max -0.53815 -0.13465 -0.01262 0.15369 0.61645 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 3.92734 0.15376 25.542 < 2e-16 *** poly(x, 2, raw = TRUE)1 -0.53929 0.11221 -4.806 5.62e-06 *** poly(x, 2, raw = TRUE)2 0.09029 0.01843 4.900 3.84e-06 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.2241 on 97 degrees of freedom Multiple R-squared: 0.1985, Adjusted R-squared: 0.182 F-statistic: 12.01 on 2 and 97 DF, p-value: 2.181e-05 There are at least two levels to the above answer. First, I answered your question. Second, and much more importantly, I illustrated how you are supposed to go about answering questions like this yourself. Every single person who "knows how to program" has gone through a sequence like the one above sixty million times. Even people as depressingly bad at programming as I am go through this sequence all the time. It's normal for code not to work. It's normal to misunderstand what functions do. The way to deal with it is to screw around, experiment, look at the data, and RTFM. Get yourself out of "mindlessly following a recipe" mode and into "detective" mode.
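A quick follow-up check (my addition, reusing the x and y simulated above): the raw and orthogonal parameterizations are the same model in different coordinates, so their fitted values and predictions agree even though the coefficient tables look nothing alike.

# Same fit, different coordinates
fit_raw  <- lm(y ~ x + I(x^2))
fit_orth <- lm(y ~ poly(x, 2))

all.equal(fitted(fit_raw), fitted(fit_orth))               # TRUE: identical fitted curve
all.equal(predict(fit_raw,  newdata = data.frame(x = 3)),
          predict(fit_orth, newdata = data.frame(x = 3)))  # TRUE: identical prediction at x = 3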
5,149
How to interpret coefficients from a polynomial model fit?
If you just want a nudge in the right direction without quite so much judgement: poly() creates orthogonal (not correlated) polynomials, as opposed to I(), which completely ignores correlation between the resultant polynomials. Correlation between predictor variables can be a problem in linear models (see here for more information on why correlation can be problematic), so it's probably better (in general) to use poly() instead of I(). Now, why do the results look so different? Well, both poly() and I() take x and convert it into a new x. In the case of I(), the new x is just x^1 or x^2. In the case of poly(), the new x's are much more complicated. If you want to know where they come from (and you probably don't), you can get started here or the aforementioned Wikipedia page or a textbook. The point is, when you're calculating (predicting) y based on a particular set of x values, you need to use the converted x values produced by either poly() or I() (depending which one was in your linear model). So: library(ggplot2) # set the seed to make the results reproducible. set.seed(3) #### simulate some data #### # epsilon = random error term epsilon <- 0.25*rnorm(100) # x values are just a sequence from 1 to 5 x <- seq(from=1, to=5, length.out=100) # y is a polynomial function of x (plus some error) y <- 4 - 0.6*x + 0.1*x^2 + epsilon # Minimum is at x=3, the expected y value there is 4 - 0.6*3 + 0.1*3^2 # visualize the data (with a polynomial best-fit line) ggplot(data=NULL,aes(x, y)) + geom_point() + geom_smooth(method = "lm", formula = y ~ poly(x, 2)) #### Model the data #### # first we'll try to model the data with just I() modI <- lm(y~x+I(x^2)) # the model summary looks right summary(modI) # next we'll try it with poly() modp <- lm(y ~ poly(x, 2)) # the model summary looks weird summary(modp) #### make predictions at x = 3 based on each model #### # predict y using modI # all we need to do is take the model coefficients and plug them into the formula: intercept + beta1 * x^1 + beta2 * x^2 coef(modI)[1] + coef(modI)[2] * 3^1 + coef(modI)[3] * 3^2 (Intercept) 3.122988 # predict y using modp # this takes an extra step. # first, calculate the new x values using predict.poly() x_poly <- stats:::predict.poly(object = poly(x,2), newdata = 3) # then use the same formula as above, but this time instead of the raw x value (3), use the polynomial x-value (x_poly) coef(modp)[1] + coef(modp)[2] * x_poly[1] + coef(modp)[3] * x_poly[2] (Intercept) 3.122988 In this case, both models return the same answer, which suggests that correlation among predictor variables is not influencing your results. If correlation were a problem, the two methods would predict different values.
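One practical aside (mine, not from the answer above): you generally do not need to call stats:::predict.poly by hand, because predict() on the fitted model re-applies the stored poly() transformation to new data for you. Assuming modI and modp from the code above are still in the workspace:

predict(modI, newdata = data.frame(x = 3))   # raw parameterization
predict(modp, newdata = data.frame(x = 3))   # orthogonal parameterization; same predicted value

Both calls should return the same value as the hand-calculated prediction above.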
5,150
How to interpret coefficients from a polynomial model fit?
There's an interesting approach to interpretation of polynomial regression by Stimson et al. (1978). It involves rewriting $Y = \beta_{0} + \beta_{1} X + \beta_{2} X^{2} + u$ as $Y = m + \beta_{2} \left( f - X \right)^{2} + u$ where $m = \beta_{0} - \left. \beta_{1}^{2} \right/ 4 \beta_{2}$ is the minimum or maximum (depending on the sign of $\beta_{2}$) and $f = \left. -\beta_{1} \right/ 2 \beta_{2}$ is the focal value. It basically transforms the three-dimensional combination of slopes into a parabola in two dimensions. Their paper gives an example from political science.
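As a small sketch of how you might use this in R (my illustration; quadfit is a hypothetical model assumed to have been fit as lm(y ~ x + I(x^2)) on your own data), the focal value and the extremum follow directly from the usual coefficients:

# Recover the focal value f and the extremum m from an ordinary quadratic fit
b <- coef(quadfit)                # b[1] = beta0, b[2] = beta1, b[3] = beta2
f <- -b[2] / (2 * b[3])           # focal value: where the fitted parabola turns
m <- b[1] - b[2]^2 / (4 * b[3])   # minimum (if beta2 > 0) or maximum (if beta2 < 0)
c(focal_value = unname(f), extremum = unname(m))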
5,151
How to interpret coefficients from a polynomial model fit?
'poly' performs Gram-Schmidt ortho-normalization on the polynomials 1, x, x^2, ..., x^deg. For example, this function does the same thing as 'poly', without returning the 'coefs' attribute of course. MyPoly <- function(x, deg) { n <- length(x) ans <- NULL for(k in 0:deg) { v <- x^k cmps <- rep(0, n) if(k>0) for(j in 0:(k-1)) cmps <- cmps + c(v%*%ans[,j+1])*ans[,j+1] p <- v - cmps p <- p/sum(p^2)^0.5 ans <- cbind(ans, p) } ans[,-1] } I landed on this thread because I was interested in the functional form. So how do we express the result of 'poly' as an expression? Just invert the Gram-Schmidt procedure. You'll end up with a mess!
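A quick sanity check (my addition; note that Gram-Schmidt determines each column only up to sign, so the comparison below ignores signs):

# Compare MyPoly to poly() on the same x used earlier in this thread
x  <- seq(1, 5, length.out = 100)
P1 <- MyPoly(x, 2)
P2 <- unclass(poly(x, 2))
max(abs(abs(P1) - abs(P2)))   # essentially zero: same orthonormal columns, up to sign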
5,152
What is your favorite data visualization blog?
FlowingData | Data Visualization, Infographics, and Statistics
5,153
What is your favorite data visualization blog?
Information Is Beautiful | Ideas, issues, knowledge, data - visualized!
5,154
What is your favorite data visualization blog?
information aesthetics - Data Visualization & Information Design
5,155
What is your favorite data visualization blog?
Junk Charts is always interesting and thought-provoking, usually providing both criticism of visualizations in the popular media and suggestions for improvements.
5,156
What is your favorite data visualization blog?
It's not a blog, but Edward Tufte has an interesting forum on information design including data visualization.
5,157
What is your favorite data visualization blog?
EagerEyes by Robert Kosara (~5 posts a month). This blog includes tutorials and discussion articles plus it has a great home page with lots of links to useful information.
5,158
What is your favorite data visualization blog?
Chart Porn I find the blog name pretty humorous. Great dataviz.
5,159
What is your favorite data visualization blog?
Andrew Gelman doesn't limit himself to visualization, but he comments on it frequently. Statistical Modeling, Causal Inference, and Social Science
5,160
What is your favorite data visualization blog?
https://web.archive.org/web/20120102041205/https://datavisualization.ch/ by Benjamin Wiederkehr and others (~15 links a month). If you want heaps of links you can subscribe to their twitter feed twitter slash datavis (~5 links a day) ahhh... i'm a new member and so i can only post one link per post.
5,161
What is your favorite data visualization blog?
Check out the data visualization blog from Visual.ly.
5,162
What is your favorite data visualization blog?
I can't pick just one :) Check out this great blog post by flowingdata: 37 Data-ish blogs you should know about
5,163
What is your favorite data visualization blog?
Light-hearted: Indexed Also, see older visualizations from the same creator at the original Indexed Blog.
5,164
What is your favorite data visualization blog?
I see all my favorite blogs have been listed. So I'll give you this one: I Love Charts It's a bit light hearted.
5,165
What is your favorite data visualization blog?
Dataspora, a data science blog.
5,166
What is your favorite data visualization blog?
We Love Datavis, a data visualization tumblog.
5,167
What is your favorite data visualization blog?
I only recently became aware of the chartsnthings blog, which is (direct quote from the site) "A (personal) blog of data sketches from the New York Times Graphics Department." I believe I follow most of the blogs listed here so far in my feed reader, and this one is a bit different. It is more of a behind-the-scenes look at the work of the NYT Graphics department, who produce a wide array of excellent graphics (in a wide array of mediums) for the New York Times. Those graphics alone are worth following, and they are frequently mentioned in the "discussion" sections of blogs like flowingdata or chartporn. This blog is different because it describes some of the workflow of developing graphics, including initial brainstorming ideas (including ones that don't pan out). And it is certainly interesting to see the developmental stages and reasoning behind certain graphical choices. Note it isn't an instructional blog: they don't provide R or Flash code to replicate graphics, and they don't give tutorials on how to use Illustrator.
5,168
What is your favorite data visualization blog?
A daily dataviz feed for/by data nerds. The community is closed but you can request via twitter to have your account enabled.
5,169
What is the difference between a loss function and an error function?
In the context of a predictive or inferential model, the term "error" generally refers to the deviation from an actual value by a prediction or expectation of that value. It is determined entirely by the prediction mechanism and the actual behaviour of the quantities under observation. The "loss" is a quantified measure of how bad it is to get an error of a particular size/direction, which is affected by the negative consequences that accrue for inaccurate prediction. An error function measures the deviation of an observable value from a prediction, whereas a loss function operates on the error to quantify the negative consequence of an error. For example, in some contexts it might be reasonable to posit that there is squared error loss, where the negative consequence of an error is quantified as being proportional to the square of the error. In other contexts we might be more negatively affected by an error in a particular direction (e.g., false positive vs. false negative) and therefore we might adopt a non-symmetric loss function. The error function is a purely statistical object, whereas the loss function is a decision-theoretic object that we are bringing in to quantify the negative consequences of error. The latter is used in decision theory and economics (usually through its opposite - a cardinal utility function). An example: You are a criminal racketeer running an illegal betting parlour for the Mob. Each week you have to pay 50% of the profits to the Mob boss, but since you run the place, the boss relies on you to give a true accounting of the profits. If you have a good week you might be able to stiff him out of some dough by underrepresenting your profit, but if you underpay the boss, relative to what he suspects is the real profit, you’re a dead man. So you want to predict how much he expects to get, and pay accordingly. Ideally you will give him exactly what he is expecting, and keep the rest, but you could potentially make a prediction error, and pay him too much, or (yikes!) too little. You have a good week and earn $\pi = \$40,000$ in profit, so the boss is owed $\tfrac{1}{2} \pi = \$20,000$. He doesn’t know what a good week you had, so his true expectation of his share is only $\theta = \$15,000$ (unknown to you). You decide to pay him $\hat{\theta}$. Then your error function is: $$\text{Error}(\hat{\theta}, \theta) = \hat{\theta} - \theta ,$$ and (if we assume that loss is linear in money) your loss function is: $$\text{Loss}(\hat{\theta}, \theta) = \begin{cases} \infty & & \text{if } \hat{\theta} < \theta \quad \text{(sleep wit' da fishes)} \\[6pt] \hat{\theta} - \pi & & \text{if } \hat{\theta} \geqslant \theta \quad \text{(live to spend another week)} \\ \end{cases} $$ This is an example of an asymmetric loss function (solution discussed in the comments below) which differs substantially from the error function. The asymmetric nature of the loss function in this case stresses the catastrophic outcome in the case where there is underestimation of the unknown parameter.
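Here is a tiny R sketch of the racketeering example (my own illustration, using the numbers from the text, and treating theta as known only so the two functions can be tabulated side by side):

pi_profit <- 40000    # this week's profit
theta     <- 15000    # the boss's true expectation (unknown to you in practice)
theta_hat <- seq(10000, 40000, by = 5000)   # candidate payments

error <- theta_hat - theta                                      # error function
loss  <- ifelse(theta_hat < theta, Inf, theta_hat - pi_profit)  # loss function from the text

data.frame(payment = theta_hat, error = error, loss = loss)

The table makes the asymmetry visible: every underpayment has infinite loss, so minimizing expected loss pushes you toward payments you are confident exceed theta, even though those payments have larger error.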
5,170
Difference between forecast and prediction?
Your distinction sounds reasonable. There was a similar discussion at the analyticbridge website, where several people make various distinctions but none of them seem to agree. The closest one was, "Forecasting would be a subset of prediction. Any time you predict into the future it is a forecast. All forecasts are predictions, but not all predictions are forecasts, as when you would use regression to explain the relationship between two variables." So as you say, "forecast" implies time series and future, while "prediction" does not. Note that there is also a term "projection" which is distinct from forecast or prediction, in some disciplines.
5,171
Difference between forecast and prediction?
There is also an etymological difference noted by Nate Silver in The Signal and the Noise: (...) an ancient idea of prediction—associating it with fatalism, fortune-telling, and superstition—it also introduced a more modern and altogether more radical idea: that we might interpret these signs so as to gain an advantage from them. (...) The term forecast came from English’s Germanic roots, unlike predict, which is from Latin. Forecasting reflected the new Protestant worldliness rather than the otherworldliness of the Holy Roman Empire. Making a forecast typically implied planning under conditions of uncertainty. It suggested having prudence, wisdom, and industriousness, more like the way we now use the word foresight. and - as Nate Silver notes - they do have different meanings in certain fields: (...) The terms “prediction” and “forecast” are employed differently in different fields; in some cases, they are interchangeable, but other disciplines differentiate them. No field is more sensitive to the distinction than seismology. If you’re speaking with a seismologist: A prediction is a definitive and specific statement about when and where an earthquake will strike: a major earthquake will hit Kyoto, Japan, on June 28. Whereas a forecast is a probabilistic statement, usually over a longer time scale: there is a 60 percent chance of an earthquake in Southern California over the next thirty years. The USGS’s official position is that earthquakes cannot be predicted. They can, however, be forecasted.
5,172
Difference between forecast and prediction?
There is only one difference between these two in time series. Forecasting pertains to out-of-sample observations, whereas prediction pertains to in-sample observations. Predicted values (and by that I mean OLS predicted values) are calculated for observations in the sample used to estimate the regression. However, a forecast is made for some dates beyond the data used to estimate the regression, so the data on the actual value of the forecasted variable are not in the sample used to estimate the regression. Residuals: Difference between the actual value of Y and its predicted value for observations in the sample. Forecast error: Difference between the future value of Y, which is not contained in the estimation sample, and the forecast of the future value. Note: This was extracted from Introduction to Econometrics by Stock and Watson (p. 527)
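A small R sketch of this distinction (mine, with made-up data): fit on an estimation sample, then compare in-sample residuals with out-of-sample forecast errors.

set.seed(42)
t_all <- 1:120
y_all <- 10 + 0.3 * t_all + rnorm(120, sd = 2)   # made-up trending series

est  <- 1:100     # estimation sample
hold <- 101:120   # future periods, excluded from estimation

fit <- lm(y_all[est] ~ t_all[est])

residuals_in_sample <- resid(fit)                                # actual - predicted, in sample
forecasts           <- coef(fit)[1] + coef(fit)[2] * t_all[hold] # forecasts for the future periods
forecast_errors     <- y_all[hold] - forecasts                   # actual future value - forecast

c(mean_abs_residual       = mean(abs(residuals_in_sample)),
  mean_abs_forecast_error = mean(abs(forecast_errors)))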
5,173
Difference between forecast and prediction?
[This was meant as a comment to Tim's answer, which I liked; but it's too long to be posted as a comment.] There's a comment by Rasch along the lines of Tim's answer: First a terminological remark. The "prediction" is suggestive of the statistician as a magician who can tell the future. Economists have an expression that is less pretentious: forecasting – not much more reliable than weather forecasting. To speak seriously: you do not really predict anything. What you do, is to calculate the distribution of the variate in question, possibly offering its mean value or the like as a likely event – but only on the assumption that the model – or a characteristic feature of it – on which you based this forecasting, still holds, i.e. confronted with what eventually does happen you are faced with a test of this hypothesis and nothing else – you were not telling what the future would be! on p. 268 of "Sufficiency, prediction and extreme models" by Lauritzen (Barndorff-Nielsen & al, eds: Conference on foundational questions in statistical inference, Aarhus 1973). Personally I prefer to use "prediction" when a hypothesis assigns probability 1 (or 0) to some statement, and to use "forecast" otherwise. Because that hypothesis is then acting as a sort of physical theory with regard to that statement. But also in that case the "prediction" is not guaranteed to be correct. Unit probabilities always come about from some simplification (which may be necessary for computational purposes) in our assumptions and beliefs.
5,174
Is Amazon's "average rating" misleading?
Benefits of using the mean to summarise central tendency of a 5 point rating As @gung mentioned I think there are often very good reasons for taking the mean of a five-point item as an index of central tendency. I have already outlined these reasons here. To paraphrase: the mean is easy to calculate; the mean is intuitive and well understood; the mean is a single number; and other indices often yield similar rank ordering of objects. Why the mean is good for Amazon Think about the goals of Amazon in reporting the mean. They might be aiming to: provide an intuitive and understandable rating for an item; ensure user acceptance of the rating system; and ensure that people understand what the rating means so they can use it appropriately to inform purchasing decisions. Amazon provides some sort of rounded mean, frequency counts for each rating option, and the sample size (i.e., number of ratings). This information presumably is enough for most people to appreciate both the general sentiment regarding the item and the confidence in such a rating (i.e., a 4.5 with 20 ratings is more likely to be accurate than a 4.5 with 2 ratings; an item with 10 5-star ratings and one 1-star rating with no comments might still be a good item). You could even see the mean as a democratic option. Many elections are decided based on which candidate gets the highest mean on a two-point scale. Similarly, if you take the argument that each person who submits a review gets a vote, then you can see the mean as a form that weights each person's vote equally. Are differences in scale use really a problem? There is a wide range of rating biases known in the psychological literature (for a review, see Saal et al 1980), such as central tendency bias, leniency bias, and strictness bias. Also, some raters will be more arbitrary and some will be more reliable. Some may even systematically lie, giving fake positive or fake negative reviews. This will create various forms of error when trying to calculate the true mean rating for an item. However, if you were to take a random sample of the population, such biases would cancel out, and with a sufficient sample size of raters, you would still get the true mean. Of course, you don't get a random sample on Amazon, and there is the risk that the particular set of raters you get for an item is systematically biased to be more lenient or strict and so on. That said, I think users of Amazon would appreciate that user-submitted ratings come from an imperfect sample. I also think it's quite likely that, with a reasonable sample size, much of the effect of response-bias differences would start to disappear in many cases. Possible advances beyond the mean In terms of improving the accuracy of the rating, I wouldn't challenge the general concept of the mean, but rather I think there are other ways of estimating the true population mean rating for an item (i.e., the mean rating that would be obtained were a large representative sample asked to rate the item): weight raters based on their trustworthiness; use a Bayesian rating system that estimates the mean rating as a weighted sum of the average rating for all items and the mean from the specific item, and increase the weighting for the specific item as the number of ratings increases (see the sketch after this answer's references); or adjust the information of a rater based on any general rating tendency across items (e.g., a 5 from someone who typically gives 3s would be worth more than a 5 from someone who typically gives 4s).
Thus, if accuracy in rating were the primary goal of Amazon, I think it should endeavour to increase the number of ratings per item and adopt some of the above strategies. Such approaches might be particularly relevant when creating "best-of" rankings. However, for the humble rating on the page, it may well be that the sample mean better meets the goals of simplicity and transparency. References Saal, F.E., Downey, R.G. & Lahey, M.A. (1980). Rating the ratings: Assessing the psychometric quality of rating data. Psychological Bulletin, 88, 413.
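As a rough R sketch of the Bayesian "weighted sum" idea from the list above (the prior weight C and the site-wide mean are made-up tuning constants for illustration, not anything Amazon is known to use):
# Shrink an item's mean rating towards a site-wide mean.
# C acts as a pseudo-count: the larger C, the more ratings an item
# needs before its own mean dominates the estimate.
bayes_rating <- function(ratings, site_mean = 3.5, C = 10) {
  n <- length(ratings)
  (C * site_mean + sum(ratings)) / (C + n)
}
bayes_rating(c(5, 5, 5))         # few ratings: pulled towards 3.5 (about 3.85)
bayes_rating(rep(c(5, 4), 50))   # many ratings: close to the raw mean of 4.5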
Is Amazon's "average rating" misleading?
Benefits of using the mean to summarise central tendency of a 5 point rating As @gung mentioned I think there are often very good reasons for taking the mean of a five-point item as an index of centra
Is Amazon's "average rating" misleading? Benefits of using the mean to summarise central tendency of a 5 point rating As @gung mentioned I think there are often very good reasons for taking the mean of a five-point item as an index of central tendency. I have already outlined these reasons here. To paraphrase: the mean is easy to calculate The mean is intuitive and well understood The mean is a single number Other indices often yield similar rank ordering of objects Why the mean is good for Amazon Think about the goals of Amazon in reporting the mean. They might be aiming to provide an intuitive and understandable rating for an item ensure user acceptance of the rating system ensure that people understand what the rating means so they can use it appropriately to inform purchasing decisions Amazon provides some sort of rounded mean, frequency counts for each rating option, and the sample size (i.e., number of ratings). This information presumably is enough for most people to appreciate both the general sentiment regarding the item and the confidence in such a rating (i.e., a 4.5 with 20 ratings is more likely to be accurate than a 4.5 with 2 ratings; an item with 10 5-star ratings, and one 1-star rating with no comments might still be a good item). You could even see the mean as a democratic option. Many elections are decided based on which candidate gets the highest mean on a two-point scale. Similarly, if you take the argument that each person who submits a review gets a vote, then you can see the mean as a form that weights each person's vote equally. Are differences in scale use really a problem? There are a wide range of rating biases known in the psychological literature (for a review, see Saal et al 1980), such as central tendency bias, leniency bias, strictness bias. Also, some raters will be more arbitrary and some will be more reliable. Some may even systematically lie giving fake positive or fake negative reviews. This will create various forms of error when trying to calculate the true mean rating for an item. However, if you were to take a random sample of the population, such biases would cancel out, and with a sufficient sample size of raters, you would still get the true mean. Of course, you don't get a random sample on Amazon, and there is the risk that the particular set of raters you get for an item is systematically biased to be more lenient or strict and so on. That said, I think users of Amazon would appreciate that user submitted ratings come from an imperfect sample. I also think that it's quite likely that with a reasonable sample size that in many cases, the majority of response bias differences would start to disappear. Possible advances beyond the mean In terms of improving the accuracy of the rating, I wouldn't challenge the general concept of the mean, but rather I think there are other ways of estimating the true population mean rating for an item (i.e., the mean rating that would be obtained were a large representative sample asked to rate the item). Weight raters based on their trustworthiness Use a Bayesian rating system that estimates the mean rating as a weighted sum of the average rating for all items and the mean from the specific item, and increase the weighting for the specific item as the number of ratings increases Adjust the information of a rater based on any general rating tendency across items (e.g., a 5 from someone who typically gives 3s would be worth more than someone who typically gives 4s). 
Thus, if accuracy in rating were the primary goal of Amazon, I think it should endeavour to increase the number of ratings per item and adopt some of the above strategies. Such approaches might be particularly relevant when creating "best-of" rankings. However, for the humble rating on the page, it may well be that the sample mean better meets the goals of simplicity and transparency. References Saal, F.E., Downey, R.G. & Lahey, M.A. (1980). Rating the ratings: Assessing the psychometric quality of rating data. Psychological Bulletin, 88, 413.
Is Amazon's "average rating" misleading? Benefits of using the mean to summarise central tendency of a 5 point rating As @gung mentioned I think there are often very good reasons for taking the mean of a five-point item as an index of centra
5,175
Is Amazon's "average rating" misleading?
Everyone has good opinions on this. I don't really think I can add very much more. However, I will post this:
Is Amazon's "average rating" misleading?
Everyone has good opinions on this. I don't really think I can add very much more. However, I will post this:
Is Amazon's "average rating" misleading? Everyone has good opinions on this. I don't really think I can add very much more. However, I will post this:
Is Amazon's "average rating" misleading? Everyone has good opinions on this. I don't really think I can add very much more. However, I will post this:
5,176
Is Amazon's "average rating" misleading?
To be somewhat technical here, those ratings aren't actually a Likert scale; they're just ordinal ratings. Now, having said that, your point is essentially correct. However, I often think too much is made of this issue. One thing to note is that it is typically understood that the average of a number of ordinal items can be approximately interval, and thus, when there are many ratings the mean becomes a more reasonable representation. I have found this answer by @JeromyAnglim to be excellent (really, the question and all attendant answers there are worth reading). For a more theoretical treatment, see here. On a different note, I like Amazon, but I see no reason to expect statistical sophistication from them, especially in terms of basic site design--the point is usability by consumers, not to impress stats professors.
Is Amazon's "average rating" misleading?
To be somewhat technical here, those ratings aren't actually a Likert scale; they're just ordinal ratings. Now, having said that, your point is essentially correct. However, I often think too much i
Is Amazon's "average rating" misleading? To be somewhat technical here, those ratings aren't actually a Likert scale; they're just ordinal ratings. Now, having said that, your point is essentially correct. However, I often think too much is made of this issue. One thing to note is that it is typically understood that the average of a number of ordinal items can be approximately interval, and thus, when there are many ratings the mean becomes a more reasonable representation. I have found this answer by @JeromyAnglim to be excellent (really, the question and all attendant answers there are worth reading). For a more theoretical treatment, see here. On a different note, I like Amazon, but I see no reason to expect statistical sophistication from them, especially in terms of basic site design--the point is usability by consumers, not to impress stats professors.
Is Amazon's "average rating" misleading? To be somewhat technical here, those ratings aren't actually a Likert scale; they're just ordinal ratings. Now, having said that, your point is essentially correct. However, I often think too much i
5,177
Is Amazon's "average rating" misleading?
In my experience, the mean of rating-scale data is often the most closely correlated with the level of real-world metrics we try to associate with the rating scale. We have found a lot of linear relationships, and the average is therefore one of the better ways to summarize the data. That being said, as Jeromy pointed out, most ways of analyzing the central tendency of a rating scale will give similar results (rank orders, etc) most of the time. Also, I suspect Amazon is probably not all that concerned with the scientific validity one way or the other. Amazon's goal, in the end, is to get people to shop more on Amazon.com, and the way reviews help achieve that will probably not vary with whatever one-number summary is used. Good products will be rewarded, really bad products punished, and nervous purchasers will have a chance to review pros and cons in more detail.
Is Amazon's "average rating" misleading?
In my experience, the mean of rating-scale data is often the most closely correlated with the level of real-world metrics we try to associate with the rating scale. We have found a lot of linear relat
Is Amazon's "average rating" misleading? In my experience, the mean of rating-scale data is often the most closely correlated with the level of real-world metrics we try to associate with the rating scale. We have found a lot of linear relationships, and the average is therefore one of the better ways to summarize the data. That being said, as Jeromy pointed out, most ways of analyzing the central tendency of a rating scale will give similar results (rank orders, etc) most of the time. Also, I suspect Amazon is probably not all that concerned with the scientific validity one way or the other. Amazon's goal, in the end, is to get people to shop more on Amazon.com, and the way reviews help achieve that will probably not vary with whatever one-number summary is used. Good products will be rewarded, really bad products punished, and nervous purchasers will have a chance to review pros and cons in more detail.
Is Amazon's "average rating" misleading? In my experience, the mean of rating-scale data is often the most closely correlated with the level of real-world metrics we try to associate with the rating scale. We have found a lot of linear relat
5,178
Is Amazon's "average rating" misleading?
Amazon ratings are misleading due to companies gaming the system. When customers are offered rebates and free merchandise in return for 5-star reviews, the "statistics" of what the ratings number is or means become moot.
Is Amazon's "average rating" misleading?
Amazon ratings are misleading due to companies gaming the system. When customers are offered rebates and free merchandise in return for 5-star reviews, the "statistics" of what the ratings number is o
Is Amazon's "average rating" misleading? Amazon ratings are misleading due to companies gaming the system. When customers are offered rebates and free merchandise in return for 5-star reviews, the "statistics" of what the ratings number is or means become moot.
Is Amazon's "average rating" misleading? Amazon ratings are misleading due to companies gaming the system. When customers are offered rebates and free merchandise in return for 5-star reviews, the "statistics" of what the ratings number is o
5,179
Is Amazon's "average rating" misleading?
You make a good point. Taking the mean of ordinal numbers is somewhat misleading. Any summary of several rankings would suffer from the fact that my subjective 3 may really equate to your 4. So combining different individual scores is probably the biggest problem. Interpreting the average of a 3 and a 4 as 3.5 is not nearly as egregious.
Is Amazon's "average rating" misleading?
You make a good point. Taking the mean of ordinal numbers is somewhat misleading. Any summary of several rankings would suffer from the fact that my subjective 3 may really equate to your 4. So comb
Is Amazon's "average rating" misleading? You make a good point. Taking the mean of ordinal numbers is somewhat misleading. Any summary of several rankings would suffer from the fact that my subjective 3 may really equate to your 4. So combining different individual scores is probably the biggest problem. Interpreting the average of a 3 and a 4 as 3.5 is not nearly as egregious.
Is Amazon's "average rating" misleading? You make a good point. Taking the mean of ordinal numbers is somewhat misleading. Any summary of several rankings would suffer from the fact that my subjective 3 may really equate to your 4. So comb
5,180
What are the factors that cause the posterior distributions to be intractable?
I had the opportunity to ask David Blei this question in person, and he told me that intractability in this context means one of two things: The integral has no closed-form solution. This might be when we're modeling some complex, real-world data and we simply cannot write the distribution down on paper. The integral is computationally intractable. He recommended that I sit down with a pen and paper and actually work out the marginal evidence for the Bayesian mixture of Gaussians. You'll see that it is computationally intractable, i.e. exponential. He gives a nice example of this in a recent paper (See 2.1 The problem of approximate inference). FWIW, I find this word choice confusing, since (1) it is overloaded in meaning and (2) it is already used widely in CS to refer only to computational intractability.
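To make the computational point concrete (this is my own sketch of the standard argument, not something quoted from the conversation with Blei): for a Bayesian mixture of $K$ Gaussians with means $\mu_{1:K}$ and cluster assignments $z_{1:n}$, the marginal evidence is $$ p(x_{1:n}) = \int p(\mu_{1:K}) \prod_{i=1}^{n} \sum_{z_i=1}^{K} p(z_i)\, p(x_i \mid z_i, \mu_{1:K}) \, d\mu_{1:K}, $$ and expanding the product of sums turns the integrand into a sum over $K^n$ assignment configurations. Each individual configuration may itself be tractable (e.g. with conjugate priors), but there are exponentially many of them, which is exactly the second sense of "intractable" above.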
What are the factors that cause the posterior distributions to be intractable?
I had the opportunity to ask David Blei this question in person, and he told me that intractability in this context means one of two things: The integral has no closed-form solution. This might be wh
What are the factors that cause the posterior distributions to be intractable? I had the opportunity to ask David Blei this question in person, and he told me that intractability in this context means one of two things: The integral has no closed-form solution. This might be when we're modeling some complex, real-world data and we simply cannot write the distribution down on paper. The integral is computationally intractable. He recommended that I sit down with a pen and paper and actually work out the marginal evidence for the Bayesian mixture of Gaussians. You'll see that it is computationally intractable, i.e. exponential. He gives a nice example of this in a recent paper (See 2.1 The problem of approximate inference). FWIW, I find this word choice confusing, since (1) it is overloaded in meaning and (2) it is already used widely in CS to refer only to computational intractability.
What are the factors that cause the posterior distributions to be intractable? I had the opportunity to ask David Blei this question in person, and he told me that intractability in this context means one of two things: The integral has no closed-form solution. This might be wh
5,181
What are the factors that cause the posterior distributions to be intractable?
The issue is mainly that Bayesian analysis involves integrals, often multidimensional ones in realistic problems, and it's these integrals that are typically intractable analytically (except in a few special cases requiring the use of conjugate priors). By contrast, much of non-Bayesian statistics is based on maximum likelihood -- finding the maximum of a (usually multidimensional) function, which involves knowledge of its derivatives, i.e. differentiation. Even so numerical methods are used in many more complex problems, but it's possible to get further more often without them, and the numerical methods can be simpler (even if less simple ones may perform better in practice). So I'd say it comes down to the fact that differentiation is more tractable than integration.
What are the factors that cause the posterior distributions to be intractable?
The issue is mainly that Bayesian analysis involves integrals, often multidimensional ones in realistic problems, and it's these integrals that are typically intractable analytically (except in a few
What are the factors that cause the posterior distributions to be intractable? The issue is mainly that Bayesian analysis involves integrals, often multidimensional ones in realistic problems, and it's these integrals that are typically intractable analytically (except in a few special cases requiring the use of conjugate priors). By contrast, much of non-Bayesian statistics is based on maximum likelihood -- finding the maximum of a (usually multidimensional) function, which involves knowledge of its derivatives, i.e. differentiation. Even so numerical methods are used in many more complex problems, but it's possible to get further more often without them, and the numerical methods can be simpler (even if less simple ones may perform better in practice). So I'd say it comes down to the fact that differentiation is more tractable than integration.
What are the factors that cause the posterior distributions to be intractable? The issue is mainly that Bayesian analysis involves integrals, often multidimensional ones in realistic problems, and it's these integrals that are typically intractable analytically (except in a few
5,182
What are the factors that cause the posterior distributions to be intractable?
Tractability is related to closed-formness of an expression. Problems are said to be tractable if they can be solved in terms of a closed-form expression. In mathematics, a closed-form expression is a mathematical expression that can be evaluated in a finite number of operations. It may contain constants, variables, certain "well-known" operations (e.g., + − × ÷), and functions (e.g., nth root, exponent, logarithm, trigonometric functions, and inverse hyperbolic functions), but usually no limit. The set of operations and functions admitted in a closed-form expression may vary with author and context. So intractability means that there is some kind of limit/infinity involved (like infinite summation in integrals) which can not be evaluated in a finite number of operations and thus approximation techniques (like MCMC) must be used. The Wikipedia article points to Cobham's thesis which tries to formalize this "amount of operations", and thus tractability.
What are the factors that cause the posterior distributions to be intractable?
Tractability is related to closed-formness of an expression. Problems are said to be tractable if they can be solved in terms of a closed-form expression. In mathematics, a closed-form expression
What are the factors that cause the posterior distributions to be intractable? Tractability is related to closed-formness of an expression. Problems are said to be tractable if they can be solved in terms of a closed-form expression. In mathematics, a closed-form expression is a mathematical expression that can be evaluated in a finite number of operations. It may contain constants, variables, certain "well-known" operations (e.g., + − × ÷), and functions (e.g., nth root, exponent, logarithm, trigonometric functions, and inverse hyperbolic functions), but usually no limit. The set of operations and functions admitted in a closed-form expression may vary with author and context. So intractability means that there is some kind of limit/infinity involved (like infinite summation in integrals) which can not be evaluated in a finite number of operations and thus approximation techniques (like MCMC) must be used. The Wikipedia article points to Cobham's thesis which tries to formalize this "amount of operations", and thus tractability.
What are the factors that cause the posterior distributions to be intractable? Tractability is related to closed-formness of an expression. Problems are said to be tractable if they can be solved in terms of a closed-form expression. In mathematics, a closed-form expression
5,183
What are the factors that cause the posterior distributions to be intractable?
Actually, there are a range of possibilities: a closed form expression is available for the posterior (example: $Y\sim \text{Bin}(n,\pi)$, prior for $\pi$: $\text{Beta}(a,b)$ and the posterior $p(\pi|Y=y)$ is a $\text{Beta}(a+y,b+n-y)$ distribution), the posterior is tractable up to the normalizing constant (example: $Y\sim \text{Bin}(n,\pi)$, prior for $\log \pi$ is $N(\mu, \sigma^2)$ and $p(\pi|Y=y) \propto p(y|\pi) p(\pi)$), or the data generating process is some complicated mechanism that is so complex that we cannot write down a likelihood (or if we can it takes forever to evaluate), but we can simulate from the data generating process (e.g. some kind of process for how certain properties develop over many generations in a population). To continue the example from above, in this case we would have no closed form expression for $p(y|\pi)$, but could simulate realizations of $Y$ given a specific value of $\pi$ (let's not even talk about the case where we have no idea how the data arises...). People usually mean something like (2) when they talk about an (analytically) non-tractable posterior and something like (3) when they talk about a non-tractable likelihood. It is the third case when approximate Bayesian computation is one of the options, while in the second case MCMC methods are usually feasible (which you may argue are in some sense approximate). I am not entirely sure which of these two the quote you provided refers to.
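To illustrate case (2) concretely, here is a small R sketch (my own, not from the answer) that evaluates the unnormalized posterior $p(y|\pi)p(\pi)$ from the second example on a grid and normalizes it numerically; the data values, prior parameters and grid are arbitrary choices for illustration:
y <- 7; n <- 20                    # observed successes out of n Bernoulli trials
mu <- 0; sigma <- 1                # prior: log(pi) ~ N(mu, sigma^2)
# Note: this prior also puts mass on pi > 1; restricting the grid to (0, 1)
# implicitly truncates it there, which is fine for a sketch.
grid <- seq(0.001, 0.999, length.out = 1000)
unnorm <- dbinom(y, n, grid) * dlnorm(grid, meanlog = mu, sdlog = sigma)
post <- unnorm / sum(unnorm * (grid[2] - grid[1]))   # normalize numerically
plot(grid, post, type = "l", xlab = "pi", ylab = "approximate posterior density")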
What are the factors that cause the posterior distributions to be intractable?
Actually, there are a range of possibilities: a closed form expression is available for the posterior (example: $Y\sim \text{Bin}(n,\pi)$, prior for $\pi$: $\text{Beta}(a,b)$ and the posterior $p(\pi
What are the factors that cause the posterior distributions to be intractable? Actually, there are a range of possibilities: a closed form expression is available for the posterior (example: $Y\sim \text{Bin}(n,\pi)$, prior for $\pi$: $\text{Beta}(a,b)$ and the posterior $p(\pi|Y=y)$ is a $\text{Beta}(a+y,b+n-y)$ distribution), the posterior is tractable up to the normalizing constant (example: $Y\sim \text{Bin}(n,\pi)$, prior for $\log \pi$ is $N(\mu, \sigma^2)$ and $p(\pi|Y=y) \propto p(y|\pi) p(\pi)$), or the data generating process is some complicated mechanism that is so complex that we cannot write down a likelihood (or if we can it takes forever to evaluate), but we can simulate from the data generating process (e.g. some kind of process for how certain properties develop over many generations in a population). To continue the example from above, in this case we would have no closed form expression for $p(y|\pi)$, but could simulate realizations of $Y$ given a specific value of $\pi$ (let's not even talk about the case where we have no idea how the data arises...). People usually mean something like (2) when they talk about an (analytically) non-tractable posterior and something like (3) when they talk about a non-tractable likelihood. It is the third case when approximate Bayesian computation is one of the options, while in the second case MCMC methods are usually feasible (which you may argue are in some sense approximate). I am not entirely sure which of these two the quote you provided refers to.
What are the factors that cause the posterior distributions to be intractable? Actually, there are a range of possibilities: a closed form expression is available for the posterior (example: $Y\sim \text{Bin}(n,\pi)$, prior for $\pi$: $\text{Beta}(a,b)$ and the posterior $p(\pi
5,184
How can I test if given samples are taken from a Poisson distribution?
First of all, my advice is to refrain from simply trying out a Poisson distribution on the data as it is. I suggest you first develop a theory as to why a Poisson distribution should fit a particular dataset or phenomenon. Once you have established this, the next question is whether the distribution is homogeneous or not. This means asking whether all parts of the data are governed by the same Poisson distribution, or whether there is variation based on some aspect such as time or space. Once you are convinced of these aspects, try the following three tests: (1) a likelihood ratio test using a chi-square variable; (2) the conditional chi-square statistic, also called the Poisson dispersion test or variance test; (3) the Neyman-Scott statistic, which is based on a variance-stabilizing transformation of the Poisson variable. Search for these and you will find them easily on the net.
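As a small R sketch of the dispersion (variance) test from point (2) above (my own implementation of the textbook statistic): under the Poisson null hypothesis, $\sum_i (x_i - \bar{x})^2 / \bar{x}$ is approximately chi-squared with $n - 1$ degrees of freedom.
poisson_dispersion_test <- function(x) {
  n <- length(x)
  D <- sum((x - mean(x))^2) / mean(x)   # dispersion statistic
  # two-sided p-value against the chi-squared(n - 1) reference distribution
  p <- 2 * min(pchisq(D, df = n - 1), pchisq(D, df = n - 1, lower.tail = FALSE))
  list(statistic = D, p.value = min(p, 1))
}
set.seed(1)
poisson_dispersion_test(rpois(100, lambda = 3))           # Poisson data: large p-value expected
poisson_dispersion_test(rnbinom(100, mu = 3, size = 1))   # overdispersed data: small p-value expected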
How can I test if given samples are taken from a Poisson distribution?
First of all my advice is you must refrain from trying out a Poisson distribution just as it is to the data. I suggest you must first make a theory as to why should Poisson distribution fit a particul
How can I test if given samples are taken from a Poisson distribution? First of all, my advice is to refrain from simply trying out a Poisson distribution on the data as it is. I suggest you first develop a theory as to why a Poisson distribution should fit a particular dataset or phenomenon. Once you have established this, the next question is whether the distribution is homogeneous or not. This means asking whether all parts of the data are governed by the same Poisson distribution, or whether there is variation based on some aspect such as time or space. Once you are convinced of these aspects, try the following three tests: (1) a likelihood ratio test using a chi-square variable; (2) the conditional chi-square statistic, also called the Poisson dispersion test or variance test; (3) the Neyman-Scott statistic, which is based on a variance-stabilizing transformation of the Poisson variable. Search for these and you will find them easily on the net.
How can I test if given samples are taken from a Poisson distribution? First of all my advice is you must refrain from trying out a Poisson distribution just as it is to the data. I suggest you must first make a theory as to why should Poisson distribution fit a particul
5,185
How can I test if given samples are taken from a Poisson distribution?
Here is a sequence of R commands that may be helpful. Feel free to comment or edit if you spot any mistakes. set.seed(1) x.poi<-rpois(n=200,lambda=2.5) # a vector of random variables from the Poisson distr. hist(x.poi,main="Poisson distribution") lambda.est <- mean(x.poi) ## estimate of parameter lambda (tab.os<-table(x.poi)) ## table with empirical frequencies freq.os<-vector() for(i in 1: length(tab.os)) freq.os[i]<-tab.os[[i]] ## vector of empirical frequencies freq.ex<-(dpois(0:max(x.poi),lambda=lambda.est)*200) ## vector of fitted (expected) frequencies acc <- mean(abs(freq.os-trunc(freq.ex))) ## absolute goodness of fit index acc acc/mean(freq.os)*100 ## relative (percent) goodness of fit index h <- hist(x.poi ,breaks=length(tab.os)) xhist <- c(min(h$breaks),h$breaks) yhist <- c(0,h$density,0) xfit <- min(x.poi):max(x.poi) yfit <- dpois(xfit,lambda=lambda.est) plot(xhist,yhist,type="s",ylim=c(0,max(yhist,yfit)), main="Poisson density and histogram") lines(xfit,yfit, col="red") #Perform the chi-square goodness of fit test #In case of count data we can use goodfit() included in vcd package library(vcd) ## loading vcd package gf <- goodfit(x.poi,type= "poisson",method= "MinChisq") summary(gf) plot(gf,main="Count data vs Poisson distribution")
How can I test if given samples are taken from a Poisson distribution?
Here is a sequence of R commands that may be helpful. Feel free to comment or edit if you spot any mistakes. set.seed(1) x.poi<-rpois(n=200,lambda=2.5) # a vector of random variables from the Poisson
How can I test if given samples are taken from a Poisson distribution? Here is a sequence of R commands that may be helpful. Feel free to comment or edit if you spot any mistakes. set.seed(1) x.poi<-rpois(n=200,lambda=2.5) # a vector of random variables from the Poisson distr. hist(x.poi,main="Poisson distribution") lambda.est <- mean(x.poi) ## estimate of parameter lambda (tab.os<-table(x.poi)) ## table with empirical frequencies freq.os<-vector() for(i in 1: length(tab.os)) freq.os[i]<-tab.os[[i]] ## vector of empirical frequencies freq.ex<-(dpois(0:max(x.poi),lambda=lambda.est)*200) ## vector of fitted (expected) frequencies acc <- mean(abs(freq.os-trunc(freq.ex))) ## absolute goodness of fit index acc acc/mean(freq.os)*100 ## relative (percent) goodness of fit index h <- hist(x.poi ,breaks=length(tab.os)) xhist <- c(min(h$breaks),h$breaks) yhist <- c(0,h$density,0) xfit <- min(x.poi):max(x.poi) yfit <- dpois(xfit,lambda=lambda.est) plot(xhist,yhist,type="s",ylim=c(0,max(yhist,yfit)), main="Poisson density and histogram") lines(xfit,yfit, col="red") #Perform the chi-square goodness of fit test #In case of count data we can use goodfit() included in vcd package library(vcd) ## loading vcd package gf <- goodfit(x.poi,type= "poisson",method= "MinChisq") summary(gf) plot(gf,main="Count data vs Poisson distribution")
How can I test if given samples are taken from a Poisson distribution? Here is a sequence of R commands that may be helpful. Feel free to comment or edit if you spot any mistakes. set.seed(1) x.poi<-rpois(n=200,lambda=2.5) # a vector of random variables from the Poisson
5,186
How can I test if given samples are taken from a Poisson distribution?
For a Poisson distribution, the mean equals the variance. If your sample mean is very different from your sample variance, you probably don't have Poisson data. The dispersion test also mentioned here is a formalization of that notion. If your variance is much larger than your mean, as is commonly the case, you might want to try a negative binomial distribution next.
How can I test if given samples are taken from a Poisson distribution?
For a Poisson distribution, the mean equals the variance. If your sample mean is very different from your sample variance, you probably don't have Poisson data. The dispersion test also mentioned he
How can I test if given samples are taken from a Poisson distribution? For a Poisson distribution, the mean equals the variance. If your sample mean is very different from your sample variance, you probably don't have Poisson data. The dispersion test also mentioned here is a formalization of that notion. If your variance is much larger than your mean, as is commonly the case, you might want to try a negative binomial distribution next.
How can I test if given samples are taken from a Poisson distribution? For a Poisson distribution, the mean equals the variance. If your sample mean is very different from your sample variance, you probably don't have Poisson data. The dispersion test also mentioned he
5,187
How can I test if given samples are taken from a Poisson distribution?
You can use the dispersion (ratio of variance to the mean) as a test statistic, since the Poisson should give a dispersion of 1. Here is a link to how to use it as a model test.
How can I test if given samples are taken from a Poisson distribution?
You can use the dispersion (ratio of variance to the mean) as a test statistic, since the Poisson should give a dispersion of 1. Here is a link to how to use it as a model test.
How can I test if given samples are taken from a Poisson distribution? You can use the dispersion (ratio of variance to the mean) as a test statistic, since the Poisson should give a dispersion of 1. Here is a link to how to use it as a model test.
How can I test if given samples are taken from a Poisson distribution? You can use the dispersion (ratio of variance to the mean) as a test statistic, since the Poisson should give a dispersion of 1. Here is a link to how to use it as a model test.
5,188
How can I test if given samples are taken from a Poisson distribution?
I suppose the easiest way is just to do a chi-squared goodness-of-fit test. In fact, here's a nice Java applet that will do just that!
How can I test if given samples are taken from a Poisson distribution?
I suppose the easiest way is just to do a chi-squared goodness-of-fit test. In fact, here's a nice Java applet that will do just that!
How can I test if given samples are taken from a Poisson distribution? I suppose the easiest way is just to do a chi-squared goodness-of-fit test. In fact, here's a nice Java applet that will do just that!
How can I test if given samples are taken from a Poisson distribution? I suppose the easiest way is just to do a chi-squared goodness-of-fit test. In fact, here's a nice Java applet that will do just that!
5,189
How can I test if given samples are taken from a Poisson distribution?
You can draw a single figure in which the observed and expected frequencies are drawn side by side. If the distributions are very different and you also have a variance-mean ratio bigger than one, then a good candidate is the negative binomial. Read the section Frequency Distributions from The R Book. It deals with a very similar problem.
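A minimal R sketch of such a side-by-side figure of observed versus Poisson-expected frequencies (the simulated data merely stand in for your own counts):
set.seed(1)
x <- rpois(200, 2.5)                                       # replace with your data
observed <- table(factor(x, levels = 0:max(x)))            # observed frequencies, including empty cells
expected <- dpois(0:max(x), lambda = mean(x)) * length(x)  # expected under the fitted Poisson
barplot(rbind(Observed = as.numeric(observed), Expected = expected), beside = TRUE,
        names.arg = 0:max(x), legend.text = c("Observed", "Expected"),
        xlab = "count", ylab = "frequency")
var(x) / mean(x)   # variance-mean ratio; values well above 1 point towards the negative binomial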
How can I test if given samples are taken from a Poisson distribution?
You can draw a single figure in which the observed and expected frequencies are drawn side by side. If the distributions are very different and you also have a variance-mean ratio bigger than one, the
How can I test if given samples are taken from a Poisson distribution? You can draw a single figure in which the observed and expected frequencies are drawn side by side. If the distributions are very different and you also have a variance-mean ratio bigger than one, then a good candidate is the negative binomial. Read the section Frequency Distributions from The R Book. It deals with a very similar problem.
How can I test if given samples are taken from a Poisson distribution? You can draw a single figure in which the observed and expected frequencies are drawn side by side. If the distributions are very different and you also have a variance-mean ratio bigger than one, the
5,190
How can I test if given samples are taken from a Poisson distribution?
I think the main point is the one sidmaestro raises...does the experimental setup or data generation mechanism support the premise that the data might arise from a Poisson distribution. I'm not a big fan of testing for distributional assumptions, since those tests typically aren't very useful. What seems more useful to me is to make distributional or model assumptions that are flexible and reasonably robust to deviations from the model, typically for purposes of inference. In my experience, it is not that common to see mean=variance, so often the negative binomial model seems more appropriate, and includes the Poisson as a special case. Another point that is important in going for distributional testing, if that's what you want to do, is to make sure that there aren't strata involved which would make your observed distribution a mixture of other distributions. Individual stratum-specific distributions might appear Poisson, but the observed mixture might not be. An analogous situation arises in regression, which assumes only that the conditional distribution of Y|X is normally distributed, not the distribution of Y itself.
How can I test if given samples are taken from a Poisson distribution?
I think the main point is the one sidmaestro raises...does the experimental setup or data generation mechanism support the premise that the data might arise from a Poisson distribution. I'm not a big
How can I test if given samples are taken from a Poisson distribution? I think the main point is the one sidmaestro raises...does the experimental setup or data generation mechanism support the premise that the data might arise from a Poisson distribution. I'm not a big fan of testing for distributional assumptions, since those tests typically aren't very useful. What seems more useful to me is to make distributional or model assumptions that are flexible and reasonably robust to deviations from the model, typically for purposes of inference. In my experience, it is not that common to see mean=variance, so often the negative binomial model seems more appropriate, and includes the Poisson as a special case. Another point that is important in going for distributional testing, if that's what you want to do, is to make sure that there aren't strata involved which would make your observed distribution a mixture of other distributions. Individual stratum-specific distributions might appear Poisson, but the observed mixture might not be. An analogous situation arises in regression, which assumes only that the conditional distribution of Y|X is normally distributed, not the distribution of Y itself.
How can I test if given samples are taken from a Poisson distribution? I think the main point is the one sidmaestro raises...does the experimental setup or data generation mechanism support the premise that the data might arise from a Poisson distribution. I'm not a big
5,191
How can I test if given samples are taken from a Poisson distribution?
Yet another way to test this is with a quantile quantile plot. In R, there is qqplot. This directly plots your values against a normal distribution with similar mean and sd
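The same idea can also be pointed directly at a Poisson reference distribution rather than a normal one; a small R sketch (lambda is simply estimated by the sample mean, and the simulated data stand in for your own):
set.seed(1)
x <- rpois(200, 2.5)                                   # replace with your data
lambda.hat <- mean(x)
qqplot(qpois(ppoints(length(x)), lambda = lambda.hat), x,
       xlab = "theoretical Poisson quantiles", ylab = "sample quantiles")
abline(0, 1, col = "red")                              # points near this line are consistent with a Poisson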
How can I test if given samples are taken from a Poisson distribution?
Yet another way to test this is with a quantile quantile plot. In R, there is qqplot. This directly plots your values against a normal distribution with similar mean and sd
How can I test if given samples are taken from a Poisson distribution? Yet another way to test this is with a quantile quantile plot. In R, there is qqplot. This directly plots your values against a normal distribution with similar mean and sd
How can I test if given samples are taken from a Poisson distribution? Yet another way to test this is with a quantile quantile plot. In R, there is qqplot. This directly plots your values against a normal distribution with similar mean and sd
5,192
Negative values for AIC in General Mixed Model [duplicate]
The AIC is defined as $$\text{AIC} = 2k - 2\ln(L)$$ where $k$ denotes the number of parameters and $L$ denotes the maximized value of the likelihood function. For model comparison, the model with the lowest AIC score is preferred. The absolute values of the AIC scores do not matter. These scores can be negative or positive. In your example, the model with $\text{AIC} = -237.847$ is preferred over the model with $\text{AIC} = -201.928$. You should not care for the absolute values and the sign of AIC scores when comparing models. A good reference is Model Selection and Multi-model Inference: A Practical Information-theoretic Approach (Burnham and Anderson, 2004), particularly on page 62 (section 2.2): In application, one computes AIC for each of the candidate models and selects the model with the smallest value of AIC. as well as on page 63: Usually, AIC is positive; however, it can be shifted by any additive constant, and some shifts can result in negative values of AIC. [...] It is not the absolute size of the AIC value, it is the relative values over the set of models considered, and particularly the differences between AIC values, that are important.
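As a small R illustration of how negative AIC values arise naturally (the simulated regression data are arbitrary): when the maximized likelihood is a density value greater than 1, $-2\ln(L)$ is negative and can outweigh the $2k$ penalty.
set.seed(1)
x <- rnorm(100)
y <- 2 * x + rnorm(100, sd = 0.1)    # very small residual noise => density values above 1
fit <- lm(y ~ x)
logLik(fit)                          # large positive log-likelihood
AIC(fit)                             # negative AIC
2 * (length(coef(fit)) + 1) - 2 * as.numeric(logLik(fit))   # same value from the formula (k counts sigma too)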
Negative values for AIC in General Mixed Model [duplicate]
The AIC is defined as $$\text{AIC} = 2k - 2\ln(L)$$ where $k$ denotes the number of parameters and $L$ denotes the maximized value of the likelihood function. For model comparison, the model with the
Negative values for AIC in General Mixed Model [duplicate] The AIC is defined as $$\text{AIC} = 2k - 2\ln(L)$$ where $k$ denotes the number of parameters and $L$ denotes the maximized value of the likelihood function. For model comparison, the model with the lowest AIC score is preferred. The absolute values of the AIC scores do not matter. These scores can be negative or positive. In your example, the model with $\text{AIC} = -237.847$ is preferred over the model with $\text{AIC} = -201.928$. You should not care for the absolute values and the sign of AIC scores when comparing models. A good reference is Model Selection and Multi-model Inference: A Practical Information-theoretic Approach (Burnham and Anderson, 2004), particularly on page 62 (section 2.2): In application, one computes AIC for each of the candidate models and selects the model with the smallest value of AIC. as well as on page 63: Usually, AIC is positive; however, it can be shifted by any additive constant, and some shifts can result in negative values of AIC. [...] It is not the absolute size of the AIC value, it is the relative values over the set of models considered, and particularly the differences between AIC values, that are important.
Negative values for AIC in General Mixed Model [duplicate] The AIC is defined as $$\text{AIC} = 2k - 2\ln(L)$$ where $k$ denotes the number of parameters and $L$ denotes the maximized value of the likelihood function. For model comparison, the model with the
5,193
Percentage of overlapping regions of two normal distributions
This is also often called the "overlapping coefficient" (OVL). Googling for this will give you lots of hits. You can find a nomogram for the bi-normal case here. A useful paper may be: Henry F. Inman; Edwin L. Bradley Jr (1989). The overlapping coefficient as a measure of agreement between probability distributions and point estimation of the overlap of two normal densities. Communications in Statistics - Theory and Methods, 18(10), 3851-3874. (Link) Edit Now you got me interested in this more, so I went ahead and created R code to compute this (it's a simple integration). I threw in a plot of the two distributions, including the shading of the overlapping region: min.f1f2 <- function(x, mu1, mu2, sd1, sd2) { f1 <- dnorm(x, mean=mu1, sd=sd1) f2 <- dnorm(x, mean=mu2, sd=sd2) pmin(f1, f2) } mu1 <- 2; sd1 <- 2 mu2 <- 1; sd2 <- 1 xs <- seq(min(mu1 - 3*sd1, mu2 - 3*sd2), max(mu1 + 3*sd1, mu2 + 3*sd2), .01) f1 <- dnorm(xs, mean=mu1, sd=sd1) f2 <- dnorm(xs, mean=mu2, sd=sd2) plot(xs, f1, type="l", ylim=c(0, max(f1,f2)), ylab="density") lines(xs, f2, lty="dotted") ys <- min.f1f2(xs, mu1=mu1, mu2=mu2, sd1=sd1, sd2=sd2) xs <- c(xs, xs[1]) ys <- c(ys, ys[1]) polygon(xs, ys, col="gray") ### only works for sd1 = sd2 SMD <- (mu1-mu2)/sd1 2 * pnorm(-abs(SMD)/2) ### this works in general integrate(min.f1f2, -Inf, Inf, mu1=mu1, mu2=mu2, sd1=sd1, sd2=sd2) For this example, the result is: 0.6099324 with absolute error < 1e-04. Figure below.
Percentage of overlapping regions of two normal distributions
This is also often called the "overlapping coefficient" (OVL). Googling for this will give you lots of hits. You can find a nomogram for the bi-normal case here. A useful paper may be: Henry F. Inma
Percentage of overlapping regions of two normal distributions This is also often called the "overlapping coefficient" (OVL). Googling for this will give you lots of hits. You can find a nomogram for the bi-normal case here. A useful paper may be: Henry F. Inman; Edwin L. Bradley Jr (1989). The overlapping coefficient as a measure of agreement between probability distributions and point estimation of the overlap of two normal densities. Communications in Statistics - Theory and Methods, 18(10), 3851-3874. (Link) Edit Now you got me interested in this more, so I went ahead and created R code to compute this (it's a simple integration). I threw in a plot of the two distributions, including the shading of the overlapping region: min.f1f2 <- function(x, mu1, mu2, sd1, sd2) { f1 <- dnorm(x, mean=mu1, sd=sd1) f2 <- dnorm(x, mean=mu2, sd=sd2) pmin(f1, f2) } mu1 <- 2; sd1 <- 2 mu2 <- 1; sd2 <- 1 xs <- seq(min(mu1 - 3*sd1, mu2 - 3*sd2), max(mu1 + 3*sd1, mu2 + 3*sd2), .01) f1 <- dnorm(xs, mean=mu1, sd=sd1) f2 <- dnorm(xs, mean=mu2, sd=sd2) plot(xs, f1, type="l", ylim=c(0, max(f1,f2)), ylab="density") lines(xs, f2, lty="dotted") ys <- min.f1f2(xs, mu1=mu1, mu2=mu2, sd1=sd1, sd2=sd2) xs <- c(xs, xs[1]) ys <- c(ys, ys[1]) polygon(xs, ys, col="gray") ### only works for sd1 = sd2 SMD <- (mu1-mu2)/sd1 2 * pnorm(-abs(SMD)/2) ### this works in general integrate(min.f1f2, -Inf, Inf, mu1=mu1, mu2=mu2, sd1=sd1, sd2=sd2) For this example, the result is: 0.6099324 with absolute error < 1e-04. Figure below.
Percentage of overlapping regions of two normal distributions This is also often called the "overlapping coefficient" (OVL). Googling for this will give you lots of hits. You can find a nomogram for the bi-normal case here. A useful paper may be: Henry F. Inma
5,194
Percentage of overlapping regions of two normal distributions
This is given by the Bhattacharyya coefficient. For other distributions, see also the generalised version, the Hellinger distance between two distributions. I don't know of any libraries to compute this, but given the explicit formulation in terms of Mahalanobis distances and determinant of variance matrices, implementation should not be an issue.
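For two univariate normals the Bhattacharyya coefficient $\int \sqrt{f_1 f_2}\,dx$ has the simple closed form $e^{-D_B}$ with $D_B = \frac{(\mu_1-\mu_2)^2}{4(\sigma_1^2+\sigma_2^2)} + \frac{1}{2}\ln\frac{\sigma_1^2+\sigma_2^2}{2\sigma_1\sigma_2}$. A small R sketch follows; note that this quantity upper-bounds, but is not identical to, the overlap area $\int \min(f_1, f_2)\,dx$ computed in the other answers:
# Bhattacharyya coefficient for two univariate normal densities, BC = exp(-D_B)
bhattacharyya_normal <- function(mu1, sd1, mu2, sd2) {
  db <- (mu1 - mu2)^2 / (4 * (sd1^2 + sd2^2)) +
    0.5 * log((sd1^2 + sd2^2) / (2 * sd1 * sd2))
  exp(-db)
}
bhattacharyya_normal(2, 2, 1, 1)   # about 0.85, versus ~0.61 for the min-overlap of the same densities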
Percentage of overlapping regions of two normal distributions
This is given by the Bhattacharyya coefficient. For other distributions, see also the generalised version, the Hellinger distance between two distributions. I don't know of any libraries to compute th
Percentage of overlapping regions of two normal distributions This is given by the Bhattacharyya coefficient. For other distributions, see also the generalised version, the Hellinger distance between two distributions. I don't know of any libraries to compute this, but given the explicit formulation in terms of Mahalanobis distances and determinant of variance matrices, implementation should not be an issue.
Percentage of overlapping regions of two normal distributions This is given by the Bhattacharyya coefficient. For other distributions, see also the generalised version, the Hellinger distance between two distributions. I don't know of any libraries to compute th
5,195
Percentage of overlapping regions of two normal distributions
I don't know if there is an obvious standard way of doing this, but: First, you find the intersection points between the two densities. This can be easily achieved by equating both densities, which, for the normal distribution, should result in a quadratic equation for x. Something close to: $$ \frac{(x-\mu_2)^2}{2\sigma_2^2} - \frac{(x-\mu_1)^2}{2\sigma_1^2} = \log{\frac{\sigma_1}{\sigma_2}} $$ This can be solved with basic algebra. Thus you have either zero, one or two intersection points. Now, these intersection points divide the real line into one, two or three parts, on each of which one of the two densities is the lower one. If nothing more mathematical comes to mind, just try any point within one of the parts to find which one is the lower. Your value of interest is now the sum of the areas under the lower density curve in each part. This area can now be found from the cumulative distribution function (just subtract the values at the two edges of the 'part').
Percentage of overlapping regions of two normal distributions
I don't know if there is an obvious standard way of doing this, but: First, you find the intersection points between the two densities. This can be easily achieved by equating both densities, which, f
Percentage of overlapping regions of two normal distributions I don't know if there is an obvious standard way of doing this, but: First, you find the intersection points between the two densities. This can be easily achieved by equating both densities, which, for the normal distribution, should result in a quadratic equation for x. Something close to: $$ \frac{(x-\mu_2)^2}{2\sigma_2^2} - \frac{(x-\mu_1)^2}{2\sigma_1^2} = \log{\frac{\sigma_1}{\sigma_2}} $$ This can be solved with basic algebra. Thus you have either zero, one or two intersection points. Now, these intersection points divide the real line into one, two or three parts, on each of which one of the two densities is the lower one. If nothing more mathematical comes to mind, just try any point within one of the parts to find which one is the lower. Your value of interest is now the sum of the areas under the lower density curve in each part. This area can now be found from the cumulative distribution function (just subtract the values at the two edges of the 'part').
Percentage of overlapping regions of two normal distributions I don't know if there is an obvious standard way of doing this, but: First, you find the intersection points between the two densities. This can be easily achieved by equating both densities, which, f
5,196
Percentage of overlapping regions of two normal distributions
For posterity, wolfgang's solution didn't work for me—I ran into bugs in the integrate function. So I combined it with Nick Staubbe's answer to develop the following little function. Should be quicker and less buggy than using numerical integration: get_overlap_coef <- function(mu1, mu2, sd1, sd2){ xs <- seq(min(mu1 - 4*sd1, mu2 - 4*sd2), max(mu1 + 4*sd1, mu2 + 4*sd2), length.out = 500) f1 <- dnorm(xs, mean=mu1, sd=sd1) f2 <- dnorm(xs, mean=mu2, sd=sd2) int <- xs[which.max(pmin(f1, f2))] l <- pnorm(int, mu1, sd1, lower.tail = mu1>mu2) r <- pnorm(int, mu2, sd2, lower.tail = mu1<mu2) l+r }
Percentage of overlapping regions of two normal distributions
For posterity, wolfgang's solution didn't work for me—I ran into bugs in the integrate function. So I combined it with Nick Staubbe's answer to develop the following little function. Should be quicke
Percentage of overlapping regions of two normal distributions For posterity, wolfgang's solution didn't work for me—I ran into bugs in the integrate function. So I combined it with Nick Staubbe's answer to develop the following little function. Should be quicker and less buggy than using numerical integration: get_overlap_coef <- function(mu1, mu2, sd1, sd2){ xs <- seq(min(mu1 - 4*sd1, mu2 - 4*sd2), max(mu1 + 4*sd1, mu2 + 4*sd2), length.out = 500) f1 <- dnorm(xs, mean=mu1, sd=sd1) f2 <- dnorm(xs, mean=mu2, sd=sd2) int <- xs[which.max(pmin(f1, f2))] l <- pnorm(int, mu1, sd1, lower.tail = mu1>mu2) r <- pnorm(int, mu2, sd2, lower.tail = mu1<mu2) l+r }
Percentage of overlapping regions of two normal distributions For posterity, wolfgang's solution didn't work for me—I ran into bugs in the integrate function. So I combined it with Nick Staubbe's answer to develop the following little function. Should be quicke
5,197
Percentage of overlapping regions of two normal distributions
Here is the Java version, Apache Commons Mathematics Library: import org.apache.commons.math3.distribution.NormalDistribution; public static double overlapArea(double mean1, double sd1, double mean2, double sd2) { NormalDistribution normalDistribution1 = new NormalDistribution(mean1, sd1); NormalDistribution normalDistribution2 = new NormalDistribution(mean2, sd2); double min = Math.min(mean1 - 6 * sd1, mean2 - 6 * sd2); double max = Math.max(mean1 + 6 * sd1, mean2 + 6 * sd2); double range = max - min; int resolution = (int) (range/Math.min(sd1, sd2)); double partwidth = range / resolution; double intersectionArea = 0; int begin = (int)((Math.max(mean1 - 6 * sd1, mean2 - 6 * sd2)-min)/partwidth); int end = (int)((Math.min(mean1 + 6 * sd1, mean2 + 6 * sd2)-min)/partwidth); /// Divide the range into N partitions for (int ii = begin; ii < end; ii++) { double partMin = partwidth * ii; double partMax = partwidth * (ii + 1); double areaOfDist1 = normalDistribution1.probability(partMin, partMax); double areaOfDist2 = normalDistribution2.probability(partMin, partMax); intersectionArea += Math.min(areaOfDist1, areaOfDist2); } return intersectionArea; }
Percentage of overlapping regions of two normal distributions
Here is the Java version, Apache Commons Mathematics Library: import org.apache.commons.math3.distribution.NormalDistribution; public static double overlapArea(double mean1, double sd1, double mean2,
Percentage of overlapping regions of two normal distributions Here is the Java version, Apache Commons Mathematics Library: import org.apache.commons.math3.distribution.NormalDistribution; public static double overlapArea(double mean1, double sd1, double mean2, double sd2) { NormalDistribution normalDistribution1 = new NormalDistribution(mean1, sd1); NormalDistribution normalDistribution2 = new NormalDistribution(mean2, sd2); double min = Math.min(mean1 - 6 * sd1, mean2 - 6 * sd2); double max = Math.max(mean1 + 6 * sd1, mean2 + 6 * sd2); double range = max - min; int resolution = (int) (range/Math.min(sd1, sd2)); double partwidth = range / resolution; double intersectionArea = 0; int begin = (int)((Math.max(mean1 - 6 * sd1, mean2 - 6 * sd2)-min)/partwidth); int end = (int)((Math.min(mean1 + 6 * sd1, mean2 + 6 * sd2)-min)/partwidth); /// Divide the range into N partitions for (int ii = begin; ii < end; ii++) { double partMin = partwidth * ii; double partMax = partwidth * (ii + 1); double areaOfDist1 = normalDistribution1.probability(partMin, partMax); double areaOfDist2 = normalDistribution2.probability(partMin, partMax); intersectionArea += Math.min(areaOfDist1, areaOfDist2); } return intersectionArea; }
Percentage of overlapping regions of two normal distributions Here is the Java version, Apache Commons Mathematics Library: import org.apache.commons.math3.distribution.NormalDistribution; public static double overlapArea(double mean1, double sd1, double mean2,
5,198
Percentage of overlapping regions of two normal distributions
I think something like this could be the solution in MATLAB: [overlap] = calc_overlap_twonormal(2,2,0,1,-20,20,0.01) % numerical integral of the overlapping area of two normal distributions: % s1,s2...sigma of the normal distributions 1 and 2 % mu1,mu2...center of the normal distributions 1 and 2 % xstart,xend,xinterval...defines start, end and interval width % example: [overlap] = calc_overlap_twonormal(2,2,0,1,-10,10,0.01) function [overlap2] = calc_overlap_twonormal(s1,s2,mu1,mu2,xstart,xend,xinterval) clf x_range=xstart:xinterval:xend; plot(x_range,[normpdf(x_range,mu1,s1)' normpdf(x_range,mu2,s2)']); hold on area(x_range,min([normpdf(x_range,mu1,s1)' normpdf(x_range,mu2,s2)']')); overlap=cumtrapz(x_range,min([normpdf(x_range,mu1,s1)' normpdf(x_range,mu2,s2)']')); overlap2 = overlap(end); [overlap] = calc_overlap_twonormal(2,2,0,1,-10,10,0.01) At least I could reproduce the value 0.8026 given below Fig.1 in this pdf. You just need to adapt the start and end and interval values to be precise as this is only a numerical solution.
Percentage of overlapping regions of two normal distributions
I think something like this could be the solution in MATLAB: [overlap] = calc_overlap_twonormal(2,2,0,1,-20,20,0.01) % numerical integral of the overlapping area of two normal distributions: % s1,s
Percentage of overlapping regions of two normal distributions I think something like this could be the solution in MATLAB: [overlap] = calc_overlap_twonormal(2,2,0,1,-20,20,0.01) % numerical integral of the overlapping area of two normal distributions: % s1,s2...sigma of the normal distributions 1 and 2 % mu1,mu2...center of the normal distributions 1 and 2 % xstart,xend,xinterval...defines start, end and interval width % example: [overlap] = calc_overlap_twonormal(2,2,0,1,-10,10,0.01) function [overlap2] = calc_overlap_twonormal(s1,s2,mu1,mu2,xstart,xend,xinterval) clf x_range=xstart:xinterval:xend; plot(x_range,[normpdf(x_range,mu1,s1)' normpdf(x_range,mu2,s2)']); hold on area(x_range,min([normpdf(x_range,mu1,s1)' normpdf(x_range,mu2,s2)']')); overlap=cumtrapz(x_range,min([normpdf(x_range,mu1,s1)' normpdf(x_range,mu2,s2)']')); overlap2 = overlap(end); [overlap] = calc_overlap_twonormal(2,2,0,1,-10,10,0.01) At least I could reproduce the value 0.8026 given below Fig.1 in this pdf. You just need to adapt the start and end and interval values to be precise as this is only a numerical solution.
Percentage of overlapping regions of two normal distributions I think something like this could be the solution in MATLAB: [overlap] = calc_overlap_twonormal(2,2,0,1,-20,20,0.01) % numerical integral of the overlapping area of two normal distributions: % s1,s
5,199
Intuitive difference between hidden Markov models and conditional random fields
From McCallum's introduction to CRFs:
Intuitive difference between hidden Markov models and conditional random fields
From McCallum's introduction to CRFs:
Intuitive difference between hidden Markov models and conditional random fields From McCallum's introduction to CRFs:
Intuitive difference between hidden Markov models and conditional random fields From McCallum's introduction to CRFs:
5,200
Intuitive difference between hidden Markov models and conditional random fields
As a side note: I would kindly ask you to maintain this (incomplete) list so that interested users have an easily accessible resource. The status quo still requires individuals to investigate a lot of papers and/or long technical reports for finding answers related to CRFs and HMMs. In addition to the other, already good answers, I want to point out the distinctive features I find most noteworthy: HMMs are generative models which try to model the joint distribution P(y,x). Therefore, such models try to model the distribution of the data P(x) which in turn might impose highly dependent features. These dependencies are sometimes undesirable (e.g. in NLP's POS tagging) and very often intractable to model/compute. CRFs are discriminative models which model P(y|x). As such, they do not require explicitly modelling P(x) and, depending on the task, might therefore yield higher performance, in part because they need fewer parameters to be learned, e.g. in settings when generating samples is not desired. Discriminative models are often more suitable when complex and overlapping features are used (since modelling their distribution is often hard). If you have such overlapping/complex features (as in POS tagging) you might want to consider CRFs since they can model these with their feature functions (keep in mind that you will usually have to feature-engineer these functions). In general, CRFs are more powerful than HMMs due to their application of feature functions. For example, you can model functions like 1($y_t$=NN, $x_t$=Smith, $cap(x_{t-1})$=true) whereas in (first-order) HMMs you use the Markov assumption, imposing a dependency only on the previous element. I therefore see CRFs as a generalization of HMMs. Also note the difference between linear and general CRFs. Linear CRFs, like HMMs, only impose dependencies on the previous element whereas with general CRFs you can impose dependencies on arbitrary elements (e.g. the first element is accessed at the very end of a sequence). In practice, you will see linear CRFs more often than general CRFs since they usually allow easier inference. In general, CRF inference is often intractable, leaving approximate inference as the only tractable option. Inference in linear CRFs is done with the Viterbi algorithm as in HMMs. Both HMMs and linear CRFs are typically trained with Maximum Likelihood techniques such as gradient descent or Quasi-Newton methods, or, for HMMs, with Expectation Maximization techniques (the Baum-Welch algorithm). If the optimization problems are convex, these methods all yield the optimal parameter set. According to [1], the optimization problem for learning the linear CRF parameters is convex if all nodes have exponential family distributions and are observed during training. [1] Sutton, Charles; McCallum, Andrew (2010), "An Introduction to Conditional Random Fields"
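To make the feature-function point concrete, here is a toy R sketch of the indicator feature mentioned above (purely illustrative; in a linear-chain CRF the score of a label sequence is a weighted sum of many such functions evaluated at every position, with the weights learned from data):
# One binary feature function f(y_t, x_t, x_{t-1}) of the kind
# 1(y_t = NN, x_t = "Smith", previous token capitalized).
f_example <- function(y_t, x_t, x_prev) {
  as.numeric(y_t == "NN" && x_t == "Smith" && grepl("^[A-Z]", x_prev))
}
f_example("NN", "Smith", "John")   # fires: 1
f_example("NN", "Smith", "john")   # previous token not capitalized: 0
f_example("VB", "Smith", "John")   # wrong label: 0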
Intuitive difference between hidden Markov models and conditional random fields
As a side note: I would kindly ask you to maintain this (incomplete) list so that interested users have an easily accessible resource. The status quo still requires individuals to investigate a lot of
Intuitive difference between hidden Markov models and conditional random fields As a side note: I would kindly ask you to maintain this (incomplete) list so that interested users have an easily accessible resource. The status quo still requires individuals to investigate a lot of papers and/or long technical reports for finding answers related to CRFs and HMMs. In addition to the other, already good answers, I want to point out the distinctive features I find most noteworthy: HMMs are generative models which try to model the joint distribution P(y,x). Therefore, such models try to model the distribution of the data P(x) which in turn might impose highly dependent features. These dependencies are sometimes undesirable (e.g. in NLP's POS tagging) and very often intractable to model/compute. CRFs are discriminative models which model P(y|x). As such, they do not require explicitly modelling P(x) and, depending on the task, might therefore yield higher performance, in part because they need fewer parameters to be learned, e.g. in settings when generating samples is not desired. Discriminative models are often more suitable when complex and overlapping features are used (since modelling their distribution is often hard). If you have such overlapping/complex features (as in POS tagging) you might want to consider CRFs since they can model these with their feature functions (keep in mind that you will usually have to feature-engineer these functions). In general, CRFs are more powerful than HMMs due to their application of feature functions. For example, you can model functions like 1($y_t$=NN, $x_t$=Smith, $cap(x_{t-1})$=true) whereas in (first-order) HMMs you use the Markov assumption, imposing a dependency only on the previous element. I therefore see CRFs as a generalization of HMMs. Also note the difference between linear and general CRFs. Linear CRFs, like HMMs, only impose dependencies on the previous element whereas with general CRFs you can impose dependencies on arbitrary elements (e.g. the first element is accessed at the very end of a sequence). In practice, you will see linear CRFs more often than general CRFs since they usually allow easier inference. In general, CRF inference is often intractable, leaving approximate inference as the only tractable option. Inference in linear CRFs is done with the Viterbi algorithm as in HMMs. Both HMMs and linear CRFs are typically trained with Maximum Likelihood techniques such as gradient descent or Quasi-Newton methods, or, for HMMs, with Expectation Maximization techniques (the Baum-Welch algorithm). If the optimization problems are convex, these methods all yield the optimal parameter set. According to [1], the optimization problem for learning the linear CRF parameters is convex if all nodes have exponential family distributions and are observed during training. [1] Sutton, Charles; McCallum, Andrew (2010), "An Introduction to Conditional Random Fields"
Intuitive difference between hidden Markov models and conditional random fields As a side note: I would kindly ask you to maintain this (incomplete) list so that interested users have an easily accessible resource. The status quo still requires individuals to investigate a lot of