2,701
Do not vote, one vote will not reverse election results. What is wrong with this reasoning?
It's wrong in part because it's based on a mathematical fallacy. (It's even more wrong because it's such blatant voter-suppression propaganda, but that's not a suitable topic for discussion here.)

The implicit context is one in which an election looks like it's on the fence. One reasonable model is that there will be $n$ voters (not including you) of whom approximately $m_1\lt n/2$ will definitely vote for one candidate and approximately $m_2\approx m_1$ will vote for the other, leaving $n-(m_1+m_2)$ "undecideds" who will make up their minds on the spot randomly, as if they were flipping coins.

Most people--including those with strong mathematical backgrounds--will guess that the chance of a perfect tie in this model is astronomically small. (I have tested this assertion by actually asking undergraduate math majors.) The correct answer is surprising.

First, figure there's about a $1/2$ chance $n$ is odd, which means a tie is impossible. To account for this, we'll throw in a factor of $1/2$ at the end. Let's consider the remaining situation where $n=2k$ is even. The chance of a tie in this model is given by the Binomial distribution as $$\Pr(\text{Tie}) = \binom{n - m_1 - m_2}{k - m_1} 2^{m_1+m_2-n}.$$

When $m_1\approx m_2,$ let $m = (m_1+m_2)/2$ (and round it if necessary). The chances don't depend much on small deviations between the $m_i$ and $m,$ so writing $N=k-m,$ an excellent approximation of the Binomial coefficient is $$\binom{n - m_1-m_2}{k - m_1} \approx \binom{2(k-m)}{k-m} = \binom{2N}{N} \approx \frac{2^{2N}}{\sqrt{N\pi}}.$$ The last approximation, due to Stirling's Formula, works well even when $N$ is small (larger than $10$ will do).

Putting these results together, and remembering to multiply by the factor of $1/2$ introduced at the outset, gives a good estimate of the chance of a tie as $$\Pr(\text{Tie}) \approx \frac{1}{2\sqrt{N\pi}}.$$ In such a case, your vote will tip the election. What are the chances?

In the most extreme case, imagine a direct popular vote involving, say, $10^8$ people (close to the number who vote in a US presidential election). Typically about 90% of people's minds are clearly decided, so we might take $N$ to be on the order of $10^7.$ Now $$\frac{1}{2\sqrt{10^7\pi}} \approx 10^{-4}.$$ That is, your participation in a close election involving one hundred million people still has about a $0.01\%$ chance of changing the outcome!

In practice, most elections involve between a few dozen and a few million voters. Over this range, your chance of affecting the results (under the foregoing assumptions, of course) ranges from about $10\%$ (with just ten undecided voters) to $1\%$ (with a thousand undecided voters) to $0.1\%$ (with a hundred thousand undecided voters).

In summary, the chance that your vote swings a closely contested election tends to be inversely proportional to the square root of the number of undecided voters. Consequently, voting is important even when the electorate is large.

The history of US state and national elections supports this analysis. Remember, for just one recent example, how the 2000 US presidential election was decided by a plurality in the state of Florida (with several million voters) that could not have exceeded a few hundred--and probably, if it had been checked more closely, would have been even narrower.
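As a quick numerical check of this approximation (a minimal sketch; the particular values of $N$ below are only illustrative), a few lines of R compare the exact central binomial probability with $1/\sqrt{N\pi}$:

    # Chance that 2N coin-flipping undecideds split evenly, exactly and via Stirling;
    # halving it accounts for the factor of 1/2 for n possibly being odd.
    N <- c(10, 100, 1000, 1e7)
    exact  <- dbinom(N, 2 * N, 0.5)   # P(Binomial(2N, 1/2) = N)
    approx <- 1 / sqrt(N * pi)        # Stirling-based approximation
    cbind(N, exact, approx, tie_estimate = approx / 2)
    # For N = 1e7 the tie estimate is about 1e-4, i.e. roughly 0.01%.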
If (based on recent election outcomes) it appears there is, say, a few percent chance that an election involving a few million people will be decided by at most a few hundred votes, then the chance that the next such election is decided by just one vote (intuitively) must be at least a hundredth of one percent. That is about one-tenth of what this inverse square root law predicts. But that means the history of voting and this analysis are in good agreement, because this analysis applies only to close races--and most are not close.

For more (anecdotal) examples of this type, across the world, see the Wikipedia article on close election results. It includes a table of about 200 examples. Unfortunately, it reports the margin of victory as a proportion of the total. As we have seen, regardless of whether all (or even most) assumptions of this analysis hold, a more meaningful measure of the closeness of an election would be the margin divided by the square root of the total.

By the way, your chance of an injury due to driving to the ballot box (if you need to drive at all) can be estimated as the rate of injuries annually (about one percent) divided by the average number of trips (or distance-weighted trips) annually, which is several hundred. We obtain a number well below $0.01\%.$ Your chance of winning the lottery grand prize? Depending on the lottery, one in a million or less.

The quotation in the question is not only scurrilous, it is outright false.
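To make the suggested closeness measure concrete, here is a minimal sketch in R; the margin and total below are made-up round numbers of the size discussed above, not figures from any particular election:

    # Margin divided by the square root of the total, versus margin as a proportion.
    closeness <- function(margin, total) margin / sqrt(total)
    closeness(500, 6e6)   # about 0.2: a small fraction of sqrt(total), the scale of coin-flip noise
    500 / 6e6             # about 0.008%: the raw proportion, which hides how close this is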
2,702
Do not vote, one vote will not reverse election results. What is wrong with this reasoning?
I must disappoint you: current economic theory cannot explain why people keep showing up at elections, because doing so appears to be irrational. See a survey of the literature on this subject on pages 16-35 of Geys, Benny (2006), "'Rational' Theories of Voter Turnout: A Review". Voter turnout is the percentage of the total voting-eligible pool that shows up at the polls. In layman's terms, it appears that indeed your vote won't make a difference.

As in @whuber's answer, the analysis is closely related to the probability of casting a pivotal vote, i.e. making or breaking a tie. However, I think @whuber is making the question look simpler than it is, and is also suggesting a much higher probability of a pivotal vote than analyses of US and European election data suggest.

Voter turnout is indeed a paradox. It should be zero according to the theory, yet it is close to the 50% range in the USA. In my opinion the answer cannot be derived from a purely statistical point of view. It belongs to the behavioral aspects of human action, which rational choice models explore, albeit in an unsatisfactory way, because people keep voting while the theory says they shouldn't.

Instrumental Voting

The instrumental voting approach mentioned above (see the earlier reference) is the idea that your vote becomes tie-breaking, and thus decides whether you gain the benefits of electing your favorite candidate. It is described with an equation for the expected utility $R$: $$R=PB-C>0$$ Here, $P$ is the probability that your vote is tie-breaking, $B$ is the benefit you get from your candidate, and $C$ is the cost associated with voting. The costs $C$ vary and are split into roughly two categories: researching the candidates, and things like voter registration, driving to polling stations, etc. People looked at these components and came to the conclusion that $P$ is so low that any positive cost $C$ outweighs the product $PB$.

The probability $P$ has been considered by many researchers; e.g., see the authoritative treatment by Gelman here: Gelman, A., King, G. and Boscardin, J. W. (1998), "Estimating the Probability of Events That Have Never Occurred: When Is Your Vote Decisive?" You can find a calculation similar to the setup in @whuber's answer in the NBER paper "The Empirical Frequency of a Pivotal Vote" by Casey B. Mulligan and Charles G. Hunter. Note that this is empirical research on actual election returns; however, they also have the independent binomial voter setup in the theoretical part, see Eq. 3. Their estimate is drastically different from @whuber's, who came up with $\sim 1/\sqrt{n}$, while this paper derives $P=O(\frac 1 n)$, which yields very low probabilities. The treatment of the probabilities is very interesting and takes into account many non-obvious considerations, such as whether or not a voter realizes what the tie probabilities are.

A simple intuitive explanation follows, from Edlin, Aaron, Andrew Gelman, and Noah Kaplan, "Voting as a rational choice: Why and how people vote to improve the well-being of others," Rationality and Society 19.3 (2007): 293-314:

Let f(d) be the predictive or forecast uncertainty distribution of the vote differential d (the difference in the vote proportions received by the two leading candidates). If n is not tiny, f(d) can be written, in practice, as a continuous distribution (e.g., a normal distribution with mean 0.04 and standard deviation 0.03). The probability of a decisive vote is then half the probability that a single vote can make or break an exact tie, or f(0)/n.

The assumption here is that an exact tie vote will be decided by a coin flip.
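A minimal sketch of that back-of-the-envelope estimate in R (the normal forecast with mean 0.04 and standard deviation 0.03 is the example given in the quote; the helper function is illustrative):

    # Edlin-Gelman-Kaplan style estimate: P(decisive) ~ f(0)/n with d ~ N(0.04, 0.03)
    decisive_prob <- function(n, mean = 0.04, sd = 0.03) dnorm(0, mean, sd) / n
    decisive_prob(c(2e4, 1e6, 1e8))
    # roughly 2.7e-4, 5.5e-6 and 5.5e-8: the estimate falls like 1/n, not 1/sqrt(n)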
Empirical results

Empirical results suggest that for 20,000 voters the probability of a tie is about $\frac 1 {6000}$, which is significantly lower than the result of @whuber's model, $\frac 1 {2\sqrt{20000\pi}}\approx\frac 1 {500}$.

Another empirical study is Gelman, Andrew, Katz, Jonathan and Bafumi, Joseph (2004), "Standard Voting Power Indexes Do Not Work: An Empirical Analysis," British Journal of Political Science, 34, issue 4, p. 657-674. Its main conclusion was first cited in @user76284's answer. The authors show that the $O(1/\sqrt{n})$ scaling doesn't fit reality. They analyzed a massive amount of electoral data, from elections held at many different levels in the USA and elsewhere. For instance, here's the plot for US presidential elections, 1960-2000, state vote data. They show the square-root-of-$n$ fit vs. a lowess (non-parametric) fit. It's clear that the square root doesn't fit the data. Here's another plot, which also includes European election data. Again the square-root-of-$n$ relation doesn't fit the data.

Section 2.2.2 of the paper explains the basic underlying assumption behind the square-root result, which helps in understanding @whuber's approach. Section 5.1 has a theoretical discussion.
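For a concrete comparison at the 20,000-voter figure quoted above (numbers only; the 1/6000 value is the cited empirical estimate, not something derived here):

    n <- 2e4
    1 / (2 * sqrt(n * pi))   # ~1/500: the random-voting, square-root model
    1 / 6000                 # ~1/6000: the cited empirical estimate, roughly 12 times smaller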
2,703
Do not vote, one vote will not reverse election results. What is wrong with this reasoning?
I'm going to take a different tack from the other answers and argue both sides of the question.

First, let's show that voting is a pointless waste of time. The function of an election is to derive a single outcome, called "the will of the electorate", from many samples of the individual wills of individual electors. Presumably that number of electors is large; we're not concerned here with cases of dozens or hundreds of electors.

When deciding whether you should vote, there are two possibilities. Either, as you note, there is a strong preference -- say, 51% or better -- in the electorate for one outcome. In such a scenario the probability that you will cast the "deciding" vote is minuscule, and so no matter which side of the issue you are on, you're better off staying home and not incurring the costs of voting. Now suppose the other possibility: the electorate is so narrowly divided that even a small number of voters choosing to vote or not vote could completely change the outcome. But in this scenario there is no "will of the electorate" at all! In this scenario you might as well call off the election and flip a coin, saving the expense of the election entirely.

It seems, then, that on rational grounds there is no reason to vote. Suppose a large fraction of the electorate reasons this way -- and why shouldn't they? I live in the 43rd district of Washington State, one of the most "blue" districts in the United States. No matter which candidate I support in the district election, I can tell you right now what the party affiliation of the winner will be in my district, so why should I vote?

The reason to vote becomes clear when you consider the strategic consequences, for small groups of ideologues, of "a large fraction of the electorate considers it pointless and does not vote". That attitude hands power to comparatively small, well-organized blocs who may show up en masse when not expected; if the number of voters is greatly reduced by a large fraction "rationally" deciding to stay home and not vote, then the size of a bloc required to swing an election against the clear will of the majority is greatly reduced. Voting when "not rationally necessary" decreases the probability that an effort to swing the election by a relatively small group will succeed, and thereby increases the probability that the actual will of the majority can be determined.
2,704
Do not vote, one vote will not reverse election results. What is wrong with this reasoning?
The analysis presented in whuber's answer reflects the Penrose square root law, which states that, under certain assumptions, the probability that a given vote is decisive scales like $1/\sqrt{N}$. The assumptions underlying that analysis, however, are too strong to be realistic in most real-world scenarios. In particular, it assumes that the fractions of decided voters for each outcome are virtually identical, as we'll see below.

Below is a graph showing the probability of a tie against the fraction of decided voters for one outcome, given the fraction of decided voters for the other outcome (assuming the rest vote uniformly at random) and the total number of voters. The Mathematica code used to create the graph was

    fractionYes = 0.45;
    total = 1000000;
    Plot[
     With[
      {
       y = Round[fractionYes*total],
       n = Round[fractionNo*total],
       u = Round[(1 - fractionYes - fractionNo)*total]
      },
      NProbability[y + yu == n + u - yu, yu \[Distributed] BinomialDistribution[u, 1/2]]
     ],
     {fractionNo, 0, 1 - fractionYes},
     AxesLabel -> {"fraction decided no", "probability of tie"},
     PlotLabel -> StringForm["total = ``, fraction decided yes = ``", total, fractionYes],
     PlotRange -> All,
     ImageSize -> Large
    ]

As the graph shows, whuber's analysis (like the Penrose square root law) is a knife-edge phenomenon: in the limit of growing population size, it requires the fractions of decided voters for each outcome to be exactly equal. Even tiny deviations from this assumption make the probability of a tie very close to zero. This might explain its discrepancy with the empirical results presented in Aksakal's answer.

For example, Standard voting power indexes do not work: An empirical analysis (Cambridge University Press, 2004) by Gelman, Katz, and Bafumi says:

Voting power indexes such as that of Banzhaf are derived, explicitly or implicitly, from the assumption that all votes are equally likely (i.e., random voting). That assumption implies that the probability of a vote being decisive in a jurisdiction with $n$ voters is proportional to $1/\sqrt{n}$. In this article the authors show how this hypothesis has been empirically tested and rejected using data from various US and European elections. They find that the probability of a decisive vote is approximately proportional to $1/n$. The random voting model (and, more generally, the square-root rule) overestimates the probability of close elections in larger jurisdictions. As a result, classical voting power indexes make voters in large jurisdictions appear more powerful than they really are. The most important political implication of their result is that proportionally weighted voting systems (that is, each jurisdiction gets a number of votes proportional to $n$) are basically fair. This contradicts the claim in the voting power literature that weights should be approximately proportional to $\sqrt{n}$.

See also Why the square-root rule for vote allocation is a bad idea by Gelman.
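For readers working in R rather than Mathematica, here is a rough sketch of the same tie-probability calculation under the same assumptions (two decided blocs plus undecided voters flipping fair coins); the function name and the example fractions are illustrative:

    # P(tie) when a fraction frac_yes is decided "yes", frac_no is decided "no",
    # and the remaining u voters each vote "yes" with probability 1/2.
    tie_prob <- function(frac_yes, frac_no, total) {
      y <- round(frac_yes * total)
      n <- round(frac_no * total)
      u <- round((1 - frac_yes - frac_no) * total)
      k <- (n + u - y) / 2              # "yes" votes needed from the undecided for a tie
      if (k != round(k) || k < 0 || k > u) return(0)
      dbinom(k, u, 0.5)
    }
    tie_prob(0.45, 0.45,  1e6)   # balanced decided blocs: ~2.5e-3
    tie_prob(0.45, 0.451, 1e6)   # 0.1-point imbalance: ~1.6e-5
    tie_prob(0.45, 0.46,  1e6)   # 1-point imbalance: essentially zero

The three calls make the knife edge visible numerically: even a one-percentage-point imbalance between the decided blocs makes a tie essentially impossible with a million voters.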
2,705
Do not vote, one vote will not reverse election results. What is wrong with this reasoning?
It is easy to construct situations where voting matters: e.g. the population consists of 3 people (including myself), one votes red, one votes blue; then clearly my vote matters. Of course your quote does not refer to such trivial cases, but to real-life situations with maybe millions of voters. So let us extend my trivial example:

Let $X=1$ indicate that the count of every other voter results in a tie (thus $X=0$ means no tie). $Y=1$ indicates that my vote "matters". My vote only matters if all the other votes result in a tie; otherwise it does not matter. Therefore $P\left(Y=1 \vert X = 1\right) = 1$ and $P\left(Y=1 \vert X = 0\right) = 0$.

This means there is no universal answer: whether your vote "matters" depends completely on the actions of all other voters. Your question is already solved (with the answer: it depends on how the others act), but you can ask a follow-up question: across different elections, how often does my vote matter on average? Or in mathematical terms: $P\left(Y=1 \right) = ?$

$P\left(Y=1 \right) = P\left(Y=1 \vert X = 1\right) P\left( X = 1\right) + P\left(Y=1 \vert X = 0\right) P\left( X = 0\right) = P\left( X= 1\right)$.

$P\left( X= 1\right)$ depends on the election and the situation, which I denote as $\theta$: $P\left( X= 1\right) = \int P\left( X= 1 \vert \Theta = \theta \right) f \left(\theta\right)\,d\theta$, where $f$ is the sampling distribution of the election. Realistically, for the overwhelming majority of $\theta$, $P\left( X= 1 \vert \Theta = \theta \right)$ will be very close to zero.

Now comes my critique of whuber's solution: $f$ represents the elections you might participate in over your whole lifetime. It will include elections on different candidates, in different years, on different topics, and so on. This variability is underrepresented in whuber's solution, because it implicitly assumes there are only elections with a tie among the decided supporters (meaning $f$ is a point mass on an unbelievably improbable event) and that $P\left( X= 1 \vert \theta \right)$ is simply the binomial probability of a tie among the undecided voters. $f$ should reflect the whole election variability. To say it is deterministic at the particular situation of equality between the parties is clearly an under-complex representation of reality, and even in this artificial case the probability is $\frac{1}{10000}$. If I vote 10 times in a lifetime, I would need 1,000 lifetimes before my vote finally matters.

PS: I strongly believe that voting matters, but not in a statistically describable way. That is a different discussion on a philosophical topic, not a statistical one.
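A minimal numerical sketch of that integral (illustrative only: $\theta$ is taken to be the support share for one candidate among $n = 10^6$ other voters, and $f$ is an assumed normal forecast centred at 0.52):

    # P(X = 1) = integral of P(tie | theta) f(theta) dtheta, approximated on a grid.
    n     <- 1e6                             # number of other voters (even)
    theta <- seq(0.4, 0.6, by = 1e-5)        # support share for one candidate
    f     <- dnorm(theta, 0.52, 0.016)       # assumed forecast density f(theta)
    p_tie <- dbinom(n / 2, n, theta)         # P(exact tie | theta)
    sum(f * p_tie) * 1e-5                    # ~1e-5, roughly f(0.5)/n: tiny unless f puts real mass near 0.5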
2,706
Do not vote, one vote will not reverse election results. What is wrong with this reasoning?
You can consider the probability that the voting result is a tie when there is an even number of total voters (in which case the vote of an individual matters). We consider for simplicity even values of $n$, but this can be extended to odd values of $n$.

Assumption case 1

Let's consider the vote $X_i$ of each voter $i$ as a Bernoulli-type variable (where $X_i$ is either $1$ or $-1$): $$P(X_i = x_i) = \begin{cases} p & \quad \text{if $x_i = -1$}\\ 1-p & \quad\text{if $x_i = 1$} \end{cases}$$ and the sum for $n$ people, $Y = \sum_{i=1}^n X_i$, relates to the election result. Note that $Y=0$ means that the result is a tie (the same number of +1 and -1 votes).

Approximate solution case 1

This sum can be approximated with a normal distribution: $ P(Y_n = y) \to \frac{1}{\sqrt{n}} \frac{1}{\sqrt{2 \pi p (1-p) }} e^{-\frac{1}{2} \frac{(y/2+(p-0.5)n)^2}{p(1-p)n}}$ and the probability of a tie is: $P(Y_n = 0) \to \frac{1}{\sqrt{n}} \frac{1}{\sqrt{2 \pi p (1-p) }} e^{-\frac{1}{2} \frac{(p-0.5)^2}{p(1-p)}n}$ This simplifies for $p=0.5$ to the result shown in other answers (the exponential term is equal to one): $ P(Y_n = 0 \vert p = 0.5) \to \sqrt{\frac{2}{n\pi}} $ But for other probabilities, $p \neq 0.5$, the function behaves similarly to a function like $\frac{e^{-x}}{\sqrt{x}}$, and the drop due to the exponential term becomes dominant at some point.

Assumption case 2

You can also consider a problem like case 1, but now the probability for the votes $X_i$ is not a constant value $p$; it is itself a variable drawn from a distribution (this expresses, sort of mathematically, that the random vote of each voter is not fifty-fifty in each election and that we do not really know what it is, hence we model $p$ as a variable). Let's say for simplicity that $p$ follows some distribution $f(p)$ between 0 and 1; in each election the odds for a candidate will be different.

What is happening here is that with growing $n$ the random behaviour of the different $X_i$ evens out, and the distribution of the sum $Y_n$ more and more resembles the distribution of the value $p$. $$\begin{array}{rcl} P(Y_n = y) \to P\left(\frac{y+n-1}{2n} < p < \frac{y+n+1}{2n}\right) &=& \int_{\frac{y+n-1}{2n}}^{\frac{y+n+1}{2n}} f(p)\, dp \\ &\approx& f\left(\frac{y+n}{2n}\right) \frac{1}{n} \end{array}$$ and for the probability of a tie you get $$P(Y_n=0) \to \frac{f(0.5)}{n}$$ This expresses better the experimental results and the $\frac{1}{n}$ relationship that Aksakal mentions in his answer.

So this $\frac{1}{n}$ relationship does not stem from the randomness of the binomial distribution and the probability that the different voters $X_i$, who are considered to behave randomly, sum up to a tie. Instead it is derived from the distribution of the parameter $p$, which describes the voting behavior from election to election, and the $\frac{1}{n}$ term comes from the probability, $0.5 - \frac{1}{2n} < p < 0.5 + \frac{1}{2n}$, that $p$ is very close to fifty-fifty.

Example plot

The different cases are plotted in the graph below. For case 1 there is a variation depending on whether $p=0.5$ or $p\neq 0.5$. In the example we plotted $p=0.52$ along with $p=0.5$. You can see that this already makes a large difference: for $p \neq 0.5$ the probability that the vote matters is very tiny and drops dramatically already for $n>100$. In the plot you see the example with $p=0.52$. However, it is not realistic for this probability to be fixed. Consider for instance swing states in the US presidential elections.
From year to year you see a variation in the tendencies of how states vote. That variation is not due to the random behaviour of the $X_i$ according to some Bernoulli distribution; instead it is due to the random behaviour of $p$ (i.e. the changes in the political climate). In the plot you can see what would happen for a beta-binomial distributed variable where the mean of $p$ is equal to 0.52. Now you can see that, for higher values of $n$, the probability of a tie is a bit higher. Also, the actual value of the mean of $p$ is not so important; much more important is how much it is dispersed.

R code to replicate the image:

    # dbetabinom() is not in base R; it is provided by e.g. the rmutil package,
    # parameterized by the mean m and an overdispersion parameter s
    library(rmutil)

    p = 0.52
    q = 1-p

    ## compute probability of a tie
    n <- 2 ^ c(1:16)
    y <- dbinom(n/2,n,0.5)
    y2 <- dbinom(n/2,n,p)
    y3 <- dbetabinom(n/2,n,0.5,1000)
    y4 <- dbetabinom(n/2,n,0.52,1000)

    # plotting
    plot(n,y, ylim = c(0.0001,1), xlim=c(1,max(n)), log = "xy", yaxt="n", xaxt = "n",
         ylab = bquote(P(X[n]==0)), cex.lab=0.9, cex.axis=0.7, cex=0.8)
    axis(1, c(1,10,100,1000,10000), cex.axis=0.7)
    axis(2, las=2, c(1,0.1,0.01,0.001), cex.axis=0.7)
    points(n,y2, col=2, cex = 0.8)
    points(n,y3, col=1, pch=2, cex = 0.8)
    points(n,y4, col=2, pch=2, cex = 0.8)

    x <- seq(1,max(n),1)

    ## compare with estimates
    # binomial distribution with equal probability
    lines(x, sqrt(2/pi/x), col=1, lty=2)
    # binomial distribution with probability p
    lines(x, 1/sqrt(2*pi*p*q)/sqrt(x) * exp(-0.5*(p-0.5)^2/(p*q)*x), col=2, lty=2)
    # beta-binomial distribution with mean 0.5 and dispersion parameter 1000
    lines(x, dbeta(0.5,0.5*1000,0.5*1000)/x, col=1)
    # beta-binomial distribution with mean 0.52 and dispersion parameter 1000
    lines(x, dbeta(0.5,0.52*1000,0.48*1000)/x, col=2)

    legend(1, 10^-2, c("p=0.5", "p=0.52", "betabinomial with mu=0.5", "betabinomial with mu=0.52"),
           col=c(1,2,1,2), lty=c(2,2,1,1), pch=c(1,1,2,2), box.col=0, cex=0.7)

Assumption case 3

A different way to look at it is to consider that you have two pools of voters (of fixed or variable size), out of which the voters randomly decide to show up for the election or not. Then the turnout from each pool is a binomially distributed variable, and the difference of the two determines the result; you can handle the situation like the problems above. You get something like case 1 if the probabilities of showing up are considered fixed, and something like case 2 if they are not fixed. The expression is a bit more difficult now (the difference between two binomially distributed variables is not easy to express), but you could use the normal approximation to solve this.

Assumption case 4

You can also consider the case where the number of voters is not known ("unknown number of voters"). If this is relevant, then you could integrate/average the above solutions over some distribution of the number of voters expected. If this distribution is narrow, the result will not be much different.
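A minimal sketch of case 4 in R (the Poisson turnout distribution and its mean are assumptions, purely to illustrate the averaging):

    # Average the p = 0.5 tie probability over an assumed distribution for the
    # (even) number of voters; a narrow distribution changes little, as noted above.
    lambda <- 1000
    n      <- seq(2, 4000, by = 2)           # even turnouts only
    w      <- dpois(n, lambda)               # assumed turnout distribution
    p_tie  <- dbinom(n / 2, n, 0.5)          # tie probability given n
    sum(w * p_tie) / sum(w)                  # ~0.025, close to sqrt(2/(pi*lambda))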
2,707
Do not vote, one vote will not reverse election results. What is wrong with this reasoning?
A simple model. A new captain has to be chosen on a ship. There are 6 voters. Two candidates have agreed to compete for the office: audacious Mr. Zero and brilliant Mr. One. Nobody on deck is obliged to vote, so we don't know how many voters will take part in the election.

Simulation. The number of voters taking part is indicated by a die roll {1,2,3,4,5,6}. The choice of candidate by each voter is indicated by a coin flip {0,1}. A strong decisive vote means that our candidate wins by exactly one vote; this is only possible if an odd number of voters take part. A weak decisive vote means that our candidate either wins by one vote (odd number of voters) or the election ends in a tie (even number of voters). We count decisive votes in favour of Mr. One.

So we have the following potential events (n = number of voters, "1" = votes for Mr. One, "0" = votes for Mr. Zero, S = strong decisive, W = weak decisive):

+-----+-----+--------+---+---+---+---+---+
|   # | sub | result | n | 1 | 0 | S | W |
+-----+-----+--------+---+---+---+---+---+
|   1 |   1 |      0 | 1 | 0 | 1 | 0 | 0 |
|   2 |   2 |      1 | 1 | 1 | 0 | 1 | 1 |
+-----+-----+--------+---+---+---+---+---+
|   3 |   1 |     00 | 2 | 0 | 2 | 0 | 0 |
|   4 |   2 |     01 | 2 | 1 | 1 | 0 | 1 |
|   5 |   3 |     10 | 2 | 1 | 1 | 0 | 1 |
|   6 |   4 |     11 | 2 | 2 | 0 | 0 | 0 |
+-----+-----+--------+---+---+---+---+---+
|   7 |   1 |    000 | 3 | 0 | 3 | 0 | 0 |
|   8 |   2 |    001 | 3 | 1 | 2 | 0 | 0 |
|   9 |   3 |    010 | 3 | 1 | 2 | 0 | 0 |
|  10 |   4 |    011 | 3 | 2 | 1 | 1 | 1 |
|  11 |   5 |    100 | 3 | 1 | 2 | 0 | 0 |
|  12 |   6 |    101 | 3 | 2 | 1 | 1 | 1 |
|  13 |   7 |    110 | 3 | 2 | 1 | 1 | 1 |
|  14 |   8 |    111 | 3 | 3 | 0 | 0 | 0 |
+-----+-----+--------+---+---+---+---+---+
|  15 |   1 |   0000 | 4 | 0 | 4 | 0 | 0 |
|  16 |   2 |   0001 | 4 | 1 | 3 | 0 | 0 |
|  17 |   3 |   0010 | 4 | 1 | 3 | 0 | 0 |
|  18 |   4 |   0011 | 4 | 2 | 2 | 0 | 1 |
|  19 |   5 |   0100 | 4 | 1 | 3 | 0 | 0 |
|  20 |   6 |   0101 | 4 | 2 | 2 | 0 | 1 |
|  21 |   7 |   0110 | 4 | 2 | 2 | 0 | 1 |
|  22 |   8 |   0111 | 4 | 3 | 1 | 0 | 0 |
|  23 |   9 |   1000 | 4 | 1 | 3 | 0 | 0 |
|  24 |  10 |   1001 | 4 | 2 | 2 | 0 | 1 |
|  25 |  11 |   1010 | 4 | 2 | 2 | 0 | 1 |
|  26 |  12 |   1011 | 4 | 3 | 1 | 0 | 0 |
|  27 |  13 |   1100 | 4 | 2 | 2 | 0 | 1 |
|  28 |  14 |   1101 | 4 | 3 | 1 | 0 | 0 |
|  29 |  15 |   1110 | 4 | 3 | 1 | 0 | 0 |
|  30 |  16 |   1111 | 4 | 4 | 0 | 0 | 0 |
+-----+-----+--------+---+---+---+---+---+
|  31 |   1 |  00000 | 5 | 0 | 5 | 0 | 0 |
|  32 |   2 |  00001 | 5 | 1 | 4 | 0 | 0 |
|  33 |   3 |  00010 | 5 | 1 | 4 | 0 | 0 |
|  34 |   4 |  00011 | 5 | 2 | 3 | 0 | 0 |
|  35 |   5 |  00100 | 5 | 1 | 4 | 0 | 0 |
|  36 |   6 |  00101 | 5 | 2 | 3 | 0 | 0 |
|  37 |   7 |  00110 | 5 | 2 | 3 | 0 | 0 |
|  38 |   8 |  00111 | 5 | 3 | 2 | 1 | 1 |
|  39 |   9 |  01000 | 5 | 1 | 4 | 0 | 0 |
|  40 |  10 |  01001 | 5 | 2 | 3 | 0 | 0 |
|  41 |  11 |  01010 | 5 | 2 | 3 | 0 | 0 |
|  42 |  12 |  01011 | 5 | 3 | 2 | 1 | 1 |
|  43 |  13 |  01100 | 5 | 2 | 3 | 0 | 0 |
|  44 |  14 |  01101 | 5 | 3 | 2 | 1 | 1 |
|  45 |  15 |  01110 | 5 | 3 | 2 | 1 | 1 |
|  46 |  16 |  01111 | 5 | 4 | 1 | 0 | 0 |
|  47 |  17 |  10000 | 5 | 1 | 4 | 0 | 0 |
|  48 |  18 |  10001 | 5 | 2 | 3 | 0 | 0 |
|  49 |  19 |  10010 | 5 | 2 | 3 | 0 | 0 |
|  50 |  20 |  10011 | 5 | 3 | 2 | 1 | 1 |
|  51 |  21 |  10100 | 5 | 2 | 3 | 0 | 0 |
|  52 |  22 |  10101 | 5 | 3 | 2 | 1 | 1 |
|  53 |  23 |  10110 | 5 | 3 | 2 | 1 | 1 |
|  54 |  24 |  10111 | 5 | 4 | 1 | 0 | 0 |
|  55 |  25 |  11000 | 5 | 2 | 3 | 0 | 0 |
|  56 |  26 |  11001 | 5 | 3 | 2 | 1 | 1 |
|  57 |  27 |  11010 | 5 | 3 | 2 | 1 | 1 |
|  58 |  28 |  11011 | 5 | 4 | 1 | 0 | 0 |
|  59 |  29 |  11100 | 5 | 3 | 2 | 1 | 1 |
|  60 |  30 |  11101 | 5 | 4 | 1 | 0 | 0 |
|  61 |  31 |  11110 | 5 | 4 | 1 | 0 | 0 |
|  62 |  32 |  11111 | 5 | 5 | 0 | 0 | 0 |
+-----+-----+--------+---+---+---+---+---+
|  63 |   1 | 000000 | 6 | 0 | 6 | 0 | 0 |
|  64 |   2 | 000001 | 6 | 1 | 5 | 0 | 0 |
|  65 |   3 | 000010 | 6 | 1 | 5 | 0 | 0 |
|  66 |   4 | 000011 | 6 | 2 | 4 | 0 | 0 |
|  67 |   5 | 000100 | 6 | 1 | 5 | 0 | 0 |
|  68 |   6 | 000101 | 6 | 2 | 4 | 0 | 0 |
|  69 |   7 | 000110 | 6 | 2 | 4 | 0 | 0 |
|  70 |   8 | 000111 | 6 | 3 | 3 | 0 | 1 |
|  71 |   9 | 001000 | 6 | 1 | 5 | 0 | 0 |
|  72 |  10 | 001001 | 6 | 2 | 4 | 0 | 0 |
|  73 |  11 | 001010 | 6 | 2 | 4 | 0 | 0 |
|  74 |  12 | 001011 | 6 | 3 | 3 | 0 | 1 |
|  75 |  13 | 001100 | 6 | 2 | 4 | 0 | 0 |
|  76 |  14 | 001101 | 6 | 3 | 3 | 0 | 1 |
|  77 |  15 | 001110 | 6 | 3 | 3 | 0 | 1 |
|  78 |  16 | 001111 | 6 | 4 | 2 | 0 | 0 |
|  79 |  17 | 010000 | 6 | 1 | 5 | 0 | 0 |
|  80 |  18 | 010001 | 6 | 2 | 4 | 0 | 0 |
|  81 |  19 | 010010 | 6 | 2 | 4 | 0 | 0 |
|  82 |  20 | 010011 | 6 | 3 | 3 | 0 | 1 |
|  83 |  21 | 010100 | 6 | 2 | 4 | 0 | 0 |
|  84 |  22 | 010101 | 6 | 3 | 3 | 0 | 1 |
|  85 |  23 | 010110 | 6 | 3 | 3 | 0 | 1 |
|  86 |  24 | 010111 | 6 | 4 | 2 | 0 | 0 |
|  87 |  25 | 011000 | 6 | 2 | 4 | 0 | 0 |
|  88 |  26 | 011001 | 6 | 3 | 3 | 0 | 1 |
|  89 |  27 | 011010 | 6 | 3 | 3 | 0 | 1 |
|  90 |  28 | 011011 | 6 | 4 | 2 | 0 | 0 |
|  91 |  29 | 011100 | 6 | 3 | 3 | 0 | 1 |
|  92 |  30 | 011101 | 6 | 4 | 2 | 0 | 0 |
|  93 |  31 | 011110 | 6 | 4 | 2 | 0 | 0 |
|  94 |  32 | 011111 | 6 | 5 | 1 | 0 | 0 |
|  95 |  33 | 100000 | 6 | 1 | 5 | 0 | 0 |
|  96 |  34 | 100001 | 6 | 2 | 4 | 0 | 0 |
|  97 |  35 | 100010 | 6 | 2 | 4 | 0 | 0 |
|  98 |  36 | 100011 | 6 | 3 | 3 | 0 | 1 |
|  99 |  37 | 100100 | 6 | 2 | 4 | 0 | 0 |
| 100 |  38 | 100101 | 6 | 3 | 3 | 0 | 1 |
| 101 |  39 | 100110 | 6 | 3 | 3 | 0 | 1 |
| 102 |  40 | 100111 | 6 | 4 | 2 | 0 | 0 |
| 103 |  41 | 101000 | 6 | 2 | 4 | 0 | 0 |
| 104 |  42 | 101001 | 6 | 3 | 3 | 0 | 1 |
| 105 |  43 | 101010 | 6 | 3 | 3 | 0 | 1 |
| 106 |  44 | 101011 | 6 | 4 | 2 | 0 | 0 |
| 107 |  45 | 101100 | 6 | 3 | 3 | 0 | 1 |
| 108 |  46 | 101101 | 6 | 4 | 2 | 0 | 0 |
| 109 |  47 | 101110 | 6 | 4 | 2 | 0 | 0 |
| 110 |  48 | 101111 | 6 | 5 | 1 | 0 | 0 |
| 111 |  49 | 110000 | 6 | 2 | 4 | 0 | 0 |
| 112 |  50 | 110001 | 6 | 3 | 3 | 0 | 1 |
| 113 |  51 | 110010 | 6 | 3 | 3 | 0 | 1 |
| 114 |  52 | 110011 | 6 | 4 | 2 | 0 | 0 |
| 115 |  53 | 110100 | 6 | 3 | 3 | 0 | 1 |
| 116 |  54 | 110101 | 6 | 4 | 2 | 0 | 0 |
| 117 |  55 | 110110 | 6 | 4 | 2 | 0 | 0 |
| 118 |  56 | 110111 | 6 | 5 | 1 | 0 | 0 |
| 119 |  57 | 111000 | 6 | 3 | 3 | 0 | 1 |
| 120 |  58 | 111001 | 6 | 4 | 2 | 0 | 0 |
| 121 |  59 | 111010 | 6 | 4 | 2 | 0 | 0 |
| 122 |  60 | 111011 | 6 | 5 | 1 | 0 | 0 |
| 123 |  61 | 111100 | 6 | 4 | 2 | 0 | 0 |
| 124 |  62 | 111101 | 6 | 5 | 1 | 0 | 0 |
| 125 |  63 | 111110 | 6 | 5 | 1 | 0 | 0 |
| 126 |  64 | 111111 | 6 | 6 | 0 | 0 | 0 |
+-----+-----+--------+---+---+---+---+---+

Column totals: strong decisive = 14, weak decisive = 42.

So there are 126 possible election outcomes. In 14 of them we cast a strong decisive vote and in 42 of them we cast a weak decisive vote. The probability that we cast a decisive vote is therefore

14/126 = 11.11% (strong decisive vote)
42/126 = 33.33% (weak decisive vote)

Here is a summary table (the strong and weak probabilities are the cumulative counts divided by the cumulative number of cases):

+--------+-------+--------+------+--------+-------+--------+-------+--------+
|  # of  | cases |      sum      | cumulative sum |       probability       |
| voters |       | strong | weak | strong | weak  | strong | weak  | approx |
+--------+-------+--------+------+--------+-------+--------+-------+--------+
|   1    |   2   |    1   |   1  |    1   |   1   | 50.0%  | 50.0% | 28.2%  |
|   2    |   4   |    0   |   2  |    1   |   3   | 16.7%  | 50.0% | 19.9%  |
|   3    |   8   |    3   |   3  |    4   |   6   | 28.6%  | 42.9% | 16.3%  |
|   4    |  16   |    0   |   6  |    4   |  12   | 13.3%  | 40.0% | 14.1%  |
|   5    |  32   |   10   |  10  |   14   |  22   | 22.6%  | 35.5% | 12.6%  |
|   6    |  64   |    0   |  20  |   14   |  42   | 11.1%  | 33.3% | 11.5%  |
+--------+-------+--------+------+--------+-------+--------+-------+--------+

The "approx" column has been calculated according to the formula suggested by whuber:

$$P(\text{Tie}) = \frac{1}{2\sqrt{n\pi}}$$

Maybe this approximation works for a higher number of voters, but I am not sure yet; for a small number of voters it is far from the exact values. Please consider this answer an extension of the question. I would be grateful if anybody could post an equation for the decisive-vote probability as a function of the unknown number of voters taking part in the election.

For larger numbers of voters (already >10) the probability of a margin of one vote or less approaches the theoretical value (based on the binomial distribution with $p=0.5$) very quickly, but we need to use $\sqrt{\frac{2}{\pi n}}$. The image below demonstrates this.
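For anyone who wants to check the counts without transcribing the table, here is a minimal brute-force sketch; it simply re-enumerates the same sample space (a die roll for turnout, a coin flip per voter) and compares against the approximation quoted above. None of this is from the original post; it only reproduces its numbers.

```python
from itertools import product
from math import pi, sqrt

strong = weak = total = 0
for n in range(1, 7):                          # die roll: number of other voters
    for votes in product((0, 1), repeat=n):    # coin flip for each voter
        ones = sum(votes)
        zeros = n - ones
        total += 1
        if ones - zeros == 1:                  # Mr. One wins by exactly one vote
            strong += 1
            weak += 1
        elif ones == zeros:                    # a tie, possible only for even n
            weak += 1

print(total, strong, weak)                     # 126 14 42
print(strong / total, weak / total)            # ~0.111 and ~0.333

# compare each block size n with the approximation 1 / (2 * sqrt(n * pi))
for n in range(1, 7):
    print(n, round(1 / (2 * sqrt(n * pi)), 3))
```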
2,708
How to interpret F-measure values?
I cannot think of an intuitive meaning of the F-measure, because it's just a combined metric. What is more intuitive than the F-measure, of course, is precision and recall. But using two values, we often cannot determine whether one algorithm is superior to another. For example, if one algorithm has higher precision but lower recall than the other, how can you tell which algorithm is better?

If you have a specific goal in mind, like "precision is the king; I don't care much about recall", then there's no problem: higher precision is better. But if you don't have such a strong goal, you will want a combined metric. That's the F-measure. By using it, you compare some of precision and some of recall.

ROC curves are often drawn together with the F-measure. You may find this article interesting, as it explains several measures including ROC curves: http://binf.gmu.edu/mmasso/ROC101.pdf
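To make the "combined metric" point concrete, here is a tiny hypothetical comparison; the precision and recall numbers below are invented for illustration and are not from the answer above.

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# classifier A is more precise, classifier B has better recall
a = f1(precision=0.90, recall=0.60)   # ~0.72
b = f1(precision=0.75, recall=0.80)   # ~0.77
print(a, b)                           # without a stated preference, F1 favours B
```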
2,709
How to interpret F-measure values?
The importance of the F1 score differs based on the distribution of the target variable. Let's assume the target variable is a binary label.

Balanced classes: in this situation the F1 score can effectively be ignored; the misclassification rate is key.

Unbalanced classes, but both classes are important: if the class distribution is highly skewed (such as 80:20 or 90:10), then a classifier can get a low misclassification rate simply by always choosing the majority class. In such a situation, I would choose the classifier that gets high F1 scores on both classes, as well as a low misclassification rate. A classifier that gets low F1 scores should be overlooked.

Unbalanced classes, where one class is more important than the other: for example, in fraud detection it is more important to correctly label a fraudulent instance than a non-fraudulent one. In this case, I would pick the classifier that has a good F1 score only on the important class. Recall that F1 scores are available per class.
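A small sketch of the majority-class trap described above, using made-up 90:10 data and plain Python (no particular library assumed):

```python
def prf(y_true, y_pred, positive):
    """Precision, recall and F1 for one class, from plain lists of labels."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# hypothetical 90:10 data; the "classifier" always predicts the majority class 0
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 100

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(accuracy)                          # 0.9, which looks respectable
print(prf(y_true, y_pred, positive=1))   # (0.0, 0.0, 0.0), F1 exposes the failure
print(prf(y_true, y_pred, positive=0))   # (0.9, 1.0, ~0.947)
```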
2,710
How to interpret F-measure values?
The F-measure has an intuitive meaning. It tells you how precise your classifier is (how many of the instances it flags are correct), as well as how robust it is (whether it misses a significant number of instances).

With high precision but low recall, your classifier is extremely accurate on the instances it does flag, but it misses a significant number of instances that are difficult to classify. This is not very useful.

Take a look at this histogram (ignore its original purpose). Towards the right you get high precision but low recall: if I only select instances with a score above 0.9, my classified instances will be extremely precise, but I will have missed a significant number of instances. Experiments indicate that the sweet spot here is around 0.76, where the F-measure is 0.87.
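The threshold behaviour described above can be sketched with a toy score list. The scores and labels below are invented; the 0.76 / 0.87 figures in the answer come from the author's own histogram, not from this snippet.

```python
# sweep a decision threshold over hypothetical scores and report precision,
# recall and F1; the threshold that maximises F1 is the "sweet spot" idea above
scores = [0.05, 0.20, 0.35, 0.45, 0.55, 0.60, 0.70, 0.80, 0.90, 0.95]
labels = [0,    0,    0,    1,    0,    1,    1,    1,    1,    1]

def stats(threshold):
    pred = [int(s >= threshold) for s in scores]
    tp = sum(p and l for p, l in zip(pred, labels))
    fp = sum(p and not l for p, l in zip(pred, labels))
    fn = sum((not p) and l for p, l in zip(pred, labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

for t in (0.3, 0.5, 0.7, 0.9):
    print(t, stats(t))   # high thresholds: precision rises, recall and F1 fall
```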
2,711
How to interpret F-measure values?
The F-measure is the harmonic mean of your precision and recall. In most situations you have a trade-off between precision and recall: if you optimize your classifier to increase one and disfavor the other, the harmonic mean quickly decreases. It is greatest, however, when precision and recall are equal.

Given F-measures of 0.4 and 0.8 for your classifiers, you can expect that these were the maximum values achievable when weighing precision against recall.

For visual reference, take a look at this figure from Wikipedia: the F-measure is H, while A and B are recall and precision. You can increase one, but then the other decreases.
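A quick numerical illustration of how the harmonic mean punishes imbalance; the precision/recall pairs are arbitrary, chosen so that the skewed pairs have arithmetic means at least as large as the balanced one.

```python
def harmonic_mean(a, b):
    return 2 * a * b / (a + b)

# a balanced pair versus increasingly skewed pairs
for p, r in [(0.6, 0.6), (0.8, 0.4), (0.9, 0.3), (0.99, 0.21)]:
    print(p, r, round(harmonic_mean(p, r), 3))
# 0.6  0.6  0.6
# 0.8  0.4  0.533
# 0.9  0.3  0.45
# 0.99 0.21 0.347
```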
2,712
How to interpret F-measure values?
With precision on the y-axis and recall on the x-axis, the slope of a level curve of $F_{\beta}$ at (1, 1) is $-\beta^2$.

Given $$P = \frac{TP}{TP+FP}$$ and $$R = \frac{TP}{TP+FN},$$ let $\alpha$ be the ratio of the cost of false negatives to false positives. Then the total cost of error is proportional to $$\alpha \frac{1-R}{R} + \frac{1-P}{P},$$ and the slope of a level curve of this cost at (1, 1) is $-\alpha$.

Therefore, for good models, using $F_{\beta}$ implies you consider false negatives $\beta^2$ times more costly than false positives.
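As a check of the slope statement, here is a short derivation under the usual definition of $F_\beta$; the algebra is added here for clarity and is not part of the original post.

$$F_\beta=\frac{(1+\beta^2)PR}{\beta^2 P+R},\qquad
\frac{\partial F_\beta}{\partial P}=\frac{(1+\beta^2)R^2}{(\beta^2P+R)^2},\qquad
\frac{\partial F_\beta}{\partial R}=\frac{(1+\beta^2)\beta^2P^2}{(\beta^2P+R)^2}.$$

Setting $dF_\beta=0$ along a level curve gives
$$\frac{dP}{dR}=-\frac{\partial F_\beta/\partial R}{\partial F_\beta/\partial P}=-\beta^2\frac{P^2}{R^2},$$
which equals $-\beta^2$ at $P=R=1$. Equating this with the cost-curve slope $-\alpha$ gives $\alpha=\beta^2$, matching the conclusion above.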
2,713
How to interpret F-measure values?
I just want to note the following paper, published this year, that proposes "a simple transformation of the F-measure, which [the authors] call $F^*$ (F-star), which has an immediate practical interpretation." It even cited this very discussion on Cross Validated. Specifically, $F^* = F/(2-F)$ "is the proportion of the relevant classifications which are correct, where a relevant classification is one which is either really class 1 or classified as class 1". REFERENCES: Hand, D.J., Christen, P. & Kirielle, N., "F*: an interpretable transformation of the F-measure", Machine Learning 110, 451–456 (2021). https://doi.org/10.1007/s10994-021-05964-1
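A tiny sketch of the proposed transformation; only the formula $F^* = F/(2-F)$ is taken from the cited paper, and the example F values are arbitrary.

```python
def f_star(f):
    """F* = F / (2 - F), the transformation from Hand, Christen & Kirielle (2021)."""
    return f / (2 - f)

for f in (0.4, 0.6, 0.8, 0.9):
    print(f, round(f_star(f), 3))
# 0.4 -> 0.25, 0.6 -> 0.429, 0.8 -> 0.667, 0.9 -> 0.818
```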
2,714
How to interpret F-measure values?
The formula for the F-measure (F1, with beta = 1) is the same as the formula giving the equivalent resistance of two resistances placed in parallel in physics (forgetting about the factor 2). This gives you a possible interpretation, and you can think of either electrical or thermal resistances.

This analogy defines the F-measure as the equivalent resistance formed by sensitivity and precision placed in parallel. For the F-measure the maximum possible value is 1, and you lose "resistance" as soon as either of the two loses resistance (that is to say, takes a value below 1).

If you want to understand this quantity and its dynamics better, think about the physical phenomenon. For example, it follows that the F-measure is at most max(sensitivity, precision).
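A few lines to play with the parallel-resistance analogy; the precision/sensitivity pairs are arbitrary, and the inequality printed at the end is the one mentioned above.

```python
def parallel(r1, r2):
    return r1 * r2 / (r1 + r2)      # equivalent resistance of r1 and r2 in parallel

def f1(p, r):
    return 2 * parallel(p, r)       # F1 is twice the "parallel" combination

for p, r in [(0.9, 0.9), (0.9, 0.5), (0.9, 0.1)]:
    print(p, r, round(f1(p, r), 3), f1(p, r) <= max(p, r))   # always True
```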
2,715
How to interpret F-measure values?
The closest intuitive meaning of the F1 score is that it is perceived as a mean of recall and precision. Let me make that clearer.

In a classification task, you may be planning to build a classifier with high precision AND high recall. For example, consider a classifier that tells whether a person is honest or not.

For precision: you are usually able to tell accurately how many honest people there are in a given group. In this case, when caring about high precision, you accept that you may misclassify a liar as honest, but not often. In other words, here you are trying to separate liars from honest people in the group as a whole.

For recall, however, you will be really concerned if you take a liar to be honest. For you, that would be a great loss and a big mistake, and you don't want to make it again. It's acceptable to classify someone honest as a liar, but your model should never (or mostly not) claim that a liar is honest. In other words, here you are focusing on a specific class and trying not to make a mistake about it.

Now, take the case where you want your model to (1) precisely distinguish honest people from liars (precision) and (2) identify every person from both classes (recall). That means you will select the model that performs well on both metrics, and your model selection decision will evaluate each model based on a mean of the two metrics. The F-score is the measure that describes this. Let's have a look at the formulas:
$$\text{Recall: } r=\frac{tp}{tp+fn}$$ $$\text{Precision: } p=\frac{tp}{tp+fp}$$ $$\text{F-score: } f1 = \frac{2}{\frac{1}{r}+\frac{1}{p}}$$
As you see, the higher the recall AND precision, the higher the F-score.
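A minimal sketch that turns hypothetical confusion counts for the honest/liar example into the three quantities above; the counts are invented for illustration.

```python
# treating "honest" as the positive class:
tp, fp, fn = 80, 5, 20   # 80 honest people found, 5 liars labelled honest,
                         # 20 honest people labelled as liars

recall = tp / (tp + fn)                  # 0.80
precision = tp / (tp + fp)               # ~0.941
f1 = 2 / (1 / recall + 1 / precision)    # ~0.865
print(recall, precision, f1)
```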
2,716
How to interpret F-measure values?
You can write the F-measure equation, $$F_\beta=\frac{(\beta^2+1)\,p\,r}{\beta^2 p + r},$$ in another way, as $$F_\beta=\frac{1}{\frac{\beta^2}{\beta^2+1}\cdot\frac{1}{r}+\frac{1}{\beta^2+1}\cdot\frac{1}{p}},$$ so when $\beta^2<1$, $p$ carries the larger weight (it has to be larger to obtain a higher $F_\beta$).
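A small numerical check of the weighting claim, assuming the standard $F_\beta$ definition written above; the two precision/recall pairs are invented.

```python
def f_beta(p, r, beta):
    return (beta**2 + 1) * p * r / (beta**2 * p + r)

p_heavy = (0.9, 0.6)   # precision-heavy classifier (p, r)
r_heavy = (0.6, 0.9)   # recall-heavy classifier (p, r)
for beta in (0.5, 1.0, 2.0):
    print(beta, round(f_beta(*p_heavy, beta), 3), round(f_beta(*r_heavy, beta), 3))
# beta < 1 favours the precision-heavy model, beta > 1 the recall-heavy one
```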
2,717
How to interpret F-measure values?
Knowing that the F1 score is the harmonic mean of precision and recall, below is a little brief about them.

I would say recall is more about false negatives, i.e., a higher recall means there are fewer false negatives: $$\text{Recall}=\frac{tp}{tp+fn}$$ The fewer false negatives (ideally zero), the better your model's predictions.

Higher precision means there are fewer false positives: $$\text{Precision}=\frac{tp}{tp+fp}$$ Same here: fewer (or zero) false positives means the model's predictions are really good.
2,718
Is there a name for the phenomenon of false positives counterintuitively outstripping true positives
Yes, there is. It is generally termed the base rate fallacy, or more specifically the false positive paradox. There is even a Wikipedia article about it: see the entry on the base rate fallacy.
2,719
Is there a name for the phenomenon of false positives counterintuitively outstripping true positives
Unfortunately I have no name for this fallacy. When I need to explain it, I have found it useful to refer to diseases that are commonly known among laypersons but are ridiculously rare. I live in Germany, and while everyone has read about the plague in their history books, everyone also knows that, as a German doctor, I will never diagnose a true plague case nor take care of a shark bite. If you tell people that there is a test for shark bites that comes back positive in one out of a hundred healthy people, everyone will agree that the test makes no sense, no matter how sensitive it is (nearly all of its positives would be false). Depending on where in the world you are and who your audience is, possible examples are the plague, mad cow disease (BSE), progeria, or being struck by lightning. There are many known risks that people are well aware carry a probability of far less than 1%.

Edit/Addition: so far this has attracted 3 downvotes and no comments. Defending myself against the most likely objection: the original poster wrote "Failing that, has anyone got a good, terse, jargon-free intuition/example that would help me explain it to a lay person?" and I think that I did exactly that. Mr Pi posted his better answer later than I posted my lay-person explanation, and I upvoted his as soon as I saw it.
2,720
Is there a name for the phenomenon of false positives counterintuitively outstripping true positives
The base rate fallacy has to do with specialization to different populations, which does not capture a broader misconception: that high accuracy implies both a low false positive and a low false negative rate. In addressing the conundrum of high accuracy with a high false positive rate, I find it impossible to go beyond very superficial, hand-wavy and inaccurate explanations without introducing people to the concepts of precision and recall. In layman's terms, one can simply write out two values of interest instead of the over-simplified "accuracy" rate:

Of those people who have condition X, what proportion does the test indicate have condition X? This is the recall rate. Incorrect determinations are false negatives: people who should have been diagnosed as having the condition but were not.

Of those people whom the test said have condition X, what proportion actually have condition X? This is the precision rate. Incorrect determinations here are false positives: people we said have the condition but do not.

A diagnostic test is only useful if it imparts new information. You can show them that for the diagnosis of any rare condition (say, <1% of cases) it is trivially easy to construct a test that is highly accurate (>99% accuracy!) while telling us nothing we didn't already know about who does or does not actually have it: simply tell everyone they don't have it. An infinite number of tests have the same accuracy but trade precision for recall and vice versa. One can get 100% precision or 100% recall by doing essentially nothing, but only a discriminating test will maximize both. Actually computing and showing them the precision and recall rates can inform them and help them think intelligently about the trade-offs and the need for a more discerning test. Combining tests that offer different information can lead to a more accurate diagnosis even when the result of one test or the other is unacceptably inaccurate by itself. This is key: does the test give us new information, or not?

Then there is also the dimension of risk aversion: how many false positives is it worth incurring to find one true positive? That is, how many people are you willing to mislead into thinking they have something they might not have in order to find one who does have it? This will depend on the danger of misdiagnosis, which usually differs for false positives and false negatives.

Edit: Further beneficial would be a confirming test, or tests, that are more and more precise, perhaps held out until later because they are more expensive. Diagnoses with a bias towards false positives can thus be used in concert to construct a sieve that is a cost-effective discriminator, eliminating most true negatives early on. However, this too comes at a cost of increased danger for true positives: you want cancer patients to get treatment as soon as possible, and having them jump through three or five hoops, each requiring two weeks to a month of advance scheduling, before they can even get access to treatment can worsen their prognosis by an order of magnitude. Therefore it is helpful to take other, less expensive tests into consideration jointly when doing triage for follow-up, to prioritize those patients who have the greatest likelihood of having the condition, and to perform multiple tests simultaneously where possible.
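A sketch of the "tell everyone they don't have it" test described above, with made-up numbers (10,000 people, 0.5% prevalence):

```python
population = 10_000
prevalence = 0.005
sick = int(population * prevalence)   # 50 people actually have the condition

tp, fp = 0, 0                         # the trivial test never says "positive"
fn, tn = sick, population - sick

accuracy = (tp + tn) / population     # 0.995, impressively "accurate"
recall = tp / (tp + fn)               # 0.0, it finds nobody
print(accuracy, recall)               # precision is undefined: no positive calls at all
```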
2,721
Is there a name for the phenomenon of false positives counterintuitively outstripping true positives
Just draw yourself a simple decision tree, and it becomes obvious. See attached. I can also send an ultra simple spreadsheet that illustrates the impact precisely.
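In the same spirit, the branches of such a tree can be worked out with a few lines of arithmetic; the prevalence, sensitivity and specificity below are hypothetical, chosen only to show how false positives can outnumber true positives.

```python
n = 10_000
prevalence, sensitivity, specificity = 0.01, 0.99, 0.95

sick, healthy = n * prevalence, n * (1 - prevalence)   # the first split of the tree
true_pos = sick * sensitivity                          # 99 correctly flagged
false_pos = healthy * (1 - specificity)                # 495 wrongly flagged
ppv = true_pos / (true_pos + false_pos)                # ~0.17: most positives are false
print(true_pos, false_pos, round(ppv, 3))
```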
2,722
Is there a name for the phenomenon of false positives counterintuitively outstripping true positives
Late to the game, but here are some things others haven't mentioned.

1) First, there is a statistic called Kappa, or Cohen's Kappa, which measures how much a method improves over random guessing. For a test with two outcomes, random guessing is just guessing the majority class. For example, if a disease is carried by 1% of the population, a test that says "you do not have the disease" to everyone is 99% accurate. Useless, but 99% accurate. Kappa measures how much a test improves over that baseline. See Wikipedia for the formula, but roughly speaking it measures what percentage of the possible improvement over random guessing your method captures. So in my example a test that was 99.5% accurate would have a kappa of 0.5, that being 50% of the best-case 1% improvement.

2) All of this is also related to Bayes' theorem. Suppose a condition is rare (it occurs in 0.01% of the population) and the test for the condition is about 99% accurate and always catches the condition when it is present. Bayes says your prior chance of having the disease is 0.01%, but the probability of having the disease given a positive test is only about (0.0001/0.01) = 1%. Since the test always catches the condition, the formula reduces to P(Cond | test = Y) = P(Cond) / P(test = Y); in general, Bayes' theorem reads P(Cond | test = Y) = P(test = Y | Cond) P(Cond) / P(test = Y).

3) Finally, this sort of seeming paradox amounts, imho, to the fact that probability is not intuitive. Things like this go by different names; examples of this phenomenon under different guises have been called, among other things, the prosecutor's fallacy and the Monty Hall problem. I think I am already at tl;dr, so look them up on Wikipedia if not already bored.
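A couple of lines reproducing the arithmetic in points 1) and 2) above; the 99% / 99.5% accuracy figures and the 0.01% prevalence are the ones used in the answer, and the sensitivity-equals-one simplification is the answer's own assumption.

```python
# Cohen's Kappa against the "always guess the majority class" baseline
p_chance = 0.99                 # accuracy of always saying "no disease" at 1% prevalence
p_observed = 0.995
kappa = (p_observed - p_chance) / (1 - p_chance)
print(kappa)                    # 0.5

# Bayes: 0.01% prevalence, test always catches the condition (sensitivity = 1),
# and roughly 1% of everyone tests positive
p_cond = 0.0001
p_positive = 0.01
print(p_cond * 1.0 / p_positive)   # ~0.01: only about 1% of positives are real cases
```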
2,723
Is there a name for the phenomenon of false positives counterintuitively outstripping true positives
As is true of many questions and answers, it depends...

In the case of cancer screening (mammogram, colonoscopy, etc.) and many other screening tests for a disease or condition, this is almost always the case. For a screening test to have some value, it must be "sensitive" enough to detect the relatively rare cases (say 1%, or sometimes much less) of the condition being screened for. The number of true positive results is then almost always smaller than the number of false positive results. That is why there is always a retest (applying the same test again) or follow-up tests (likely more expensive but with higher "specificity") to then eliminate the false positives. So in a sense the name you are asking for is "screening test"!

The term "accuracy" has a very particular technical meaning, which is not necessarily the common meaning or the commonly imagined situation. Most "common sense" is tuned to a 50-50 chance: you have cancer or you don't. See the wiki page: https://en.wikipedia.org/wiki/Receiver_operating_characteristic

Another way of putting it is that a test is accurate if it gets most cases correct, which is the common definition. But if the condition is rare and the test is "sensitive", it can (and in fact should and must) still give false positives. For example: 1% prevalence, 1000 tests, 10 true positives, 20 false positives, so accuracy = (10 + (1000 - 10 - 20))/1000 = 98%.

Yet another technical way of saying this is that screening tests tend to operate at the high-sensitivity (high false positive) end of the so-called receiver operating characteristic (ROC). One wants to catch all the true positives at the expense of false positives, which will be retested and largely eliminated.
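The 98% figure above can be verified directly; this sketch assumes, as in the example, that the screen catches all 10 true cases.

```python
# 1% prevalence, 1000 people screened, 10 true positives and 20 false positives
n, tp, fp = 1000, 10, 20
fn = 0                      # screening is tuned to miss (almost) nobody
tn = n - tp - fp            # 970 people correctly reassured

accuracy = (tp + tn) / n    # 0.98
tpf = tp / (tp + fn)        # 1.0   (true positive fraction, i.e. sensitivity)
fpf = fp / (fp + tn)        # ~0.02 (false positive fraction)
print(accuracy, tpf, fpf)
```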
2,724
Is there a name for the phenomenon of false positives counterintuitively outstripping true positives
Look at this shiny app tool https://kennis-research.shinyapps.io/Bayes-App/ that explains the relationship between sensitivity, specificity and prevalence. In essence, the ability of the test to discover true positives is a function of both the effectiveness of the test (sensitivity and specificity) and the prevalence of the condition being tested for.
2,725
Is there a name for the phenomenon of false positives counterintuitively outstripping true positives
Use the KISS method to explain it to everyone: Keep It Simple, Stupid. In accounting, a simple audit starts with a 1% sample of total transactions for a specific expenditure or income versus actual bank deposits and withdrawals. If they don't match or "add up", you increase the sample size, up to 5%. The more errors you find, the higher the percentage of your sample grows as you look for errors or fraud, up to 100%.

An even simpler example for statisticians is the law of large numbers: the larger the number of individual samples, the more accurate the outcome. The opposite effect is what I call the law of minuscule numbers, meaning the sample is too small to reflect true accuracy. Hope this helps!
2,726
How and why do normalization and feature scaling work?
It's simply a case of getting all your data on the same scale: if the scales for different features are wildly different, this can have a knock-on effect on your ability to learn (depending on what methods you're using to do it). Ensuring standardised feature values implicitly weights all features equally in their representation.
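A small illustration of the "same scale" point, using two made-up features (income and age) and a hand-rolled min-max scaling; the data ranges are hypothetical.

```python
from math import sqrt

a = (50_000, 25)     # (income, age) for person A
b = (51_000, 60)     # person B: similar income, very different age

raw = sqrt((a[0] - b[0])**2 + (a[1] - b[1])**2)
print(raw)           # ~1000.6: the 35-year age gap barely registers

# min-max scale both features to [0, 1] using assumed data ranges
income_range, age_range = (20_000, 120_000), (18, 90)

def scale(x, lo, hi):
    return (x - lo) / (hi - lo)

a_s = (scale(a[0], *income_range), scale(a[1], *age_range))
b_s = (scale(b[0], *income_range), scale(b[1], *age_range))
scaled = sqrt((a_s[0] - b_s[0])**2 + (a_s[1] - b_s[1])**2)
print(scaled)        # ~0.49: now both features can contribute comparably
```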
2,727
How and why do normalization and feature scaling work?
It is true that preprocessing in machine learning is something of a black art. Papers rarely spell out why several preprocessing steps are essential to make a method work, and I am not sure it is understood in every case. To make things more complicated, it depends heavily on the method you use and also on the problem domain. Some methods, for example, are affine-transformation invariant: if you have a neural network and just apply an affine transformation to your data, the network neither loses nor gains anything in theory. In practice, however, a neural network works best if the inputs are centered and white, meaning that their covariance is diagonal and the mean is the zero vector. Why does this improve things? Only because the optimisation of the neural net behaves more gracefully: the hidden activation functions don't saturate as quickly and thus do not give you near-zero gradients early on in learning. Other methods, e.g. K-means, might give you totally different solutions depending on the preprocessing. This is because an affine transformation implies a change in the metric space: the Euclidean distance between two samples will be different after that transformation. At the end of the day, you want to understand what you are doing to the data. Whitening in computer vision and sample-wise normalization, for example, are things the human brain does as well in its vision pipeline.
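As a rough sketch of the "centered and white" preprocessing described above (a ZCA-style whitening in NumPy; one possible choice, not the only one):

import numpy as np

def whiten(X, eps=1e-8):
    """Center the data and transform it so the covariance is (approximately) the identity."""
    Xc = X - X.mean(axis=0)                    # zero-mean
    cov = np.cov(Xc, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)       # eigendecomposition of the covariance
    W = eigvec @ np.diag(1.0 / np.sqrt(eigval + eps)) @ eigvec.T   # ZCA whitening matrix
    return Xc @ W

rng = np.random.default_rng(0)
X = rng.multivariate_normal([5, -3], [[4.0, 1.5], [1.5, 1.0]], size=1000)
Xw = whiten(X)
print(np.round(Xw.mean(axis=0), 3))            # approximately [0, 0]
print(np.round(np.cov(Xw, rowvar=False), 3))   # approximately the identity matrix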
2,728
How and why do normalization and feature scaling work?
Some ideas, references and plots on why input normalization can be useful for ANN and k-means:

K-means: K-means clustering is "isotropic" in all directions of space and therefore tends to produce more or less round (rather than elongated) clusters. In this situation leaving variances unequal is equivalent to putting more weight on variables with smaller variance. Example in Matlab:

X = [randn(100,2)+ones(100,2);...
     randn(100,2)-ones(100,2)];

% Introduce denormalization
% X(:, 2) = X(:, 2) * 1000 + 500;

opts = statset('Display','final');
[idx,ctrs] = kmeans(X,2,...
                    'Distance','city',...
                    'Replicates',5,...
                    'Options',opts);

plot(X(idx==1,1),X(idx==1,2),'r.','MarkerSize',12)
hold on
plot(X(idx==2,1),X(idx==2,2),'b.','MarkerSize',12)
plot(ctrs(:,1),ctrs(:,2),'kx',...
     'MarkerSize',12,'LineWidth',2)
plot(ctrs(:,1),ctrs(:,2),'ko',...
     'MarkerSize',12,'LineWidth',2)
legend('Cluster 1','Cluster 2','Centroids',...
       'Location','NW')
title('K-means with normalization')

(FYI: How can I detect if my dataset is clustered or unclustered (i.e. forming one single cluster)?)

Distributed clustering: The comparative analysis shows that the distributed clustering results depend on the type of normalization procedure.

Artificial neural network (inputs): If the input variables are combined linearly, as in an MLP, then it is rarely strictly necessary to standardize the inputs, at least in theory. The reason is that any rescaling of an input vector can be effectively undone by changing the corresponding weights and biases, leaving you with the exact same outputs as you had before. However, there are a variety of practical reasons why standardizing the inputs can make training faster and reduce the chances of getting stuck in local optima. Also, weight decay and Bayesian estimation can be done more conveniently with standardized inputs.

Artificial neural network (inputs/outputs): Should you do any of these things to your data? The answer is, it depends. Standardizing either input or target variables tends to make the training process better behaved by improving the numerical condition (see ftp://ftp.sas.com/pub/neural/illcond/illcond.html) of the optimization problem and ensuring that various default values involved in initialization and termination are appropriate. Standardizing targets can also affect the objective function. Standardization of cases should be approached with caution because it discards information. If that information is irrelevant, then standardizing cases can be quite helpful. If that information is important, then standardizing cases can be disastrous.

Interestingly, changing the measurement units may even lead one to see a very different clustering structure: Kaufman, Leonard, and Peter J. Rousseeuw. "Finding groups in data: An introduction to cluster analysis." (2005). In some applications, changing the measurement units may even lead one to see a very different clustering structure. For example, the age (in years) and height (in centimeters) of four imaginary people are given in Table 3 and plotted in Figure 3. It appears that {A, B} and {C, D} are two well-separated clusters. On the other hand, when height is expressed in feet one obtains Table 4 and Figure 4, where the obvious clusters are now {A, C} and {B, D}. This partition is completely different from the first because each subject has received another companion. (Figure 4 would have been flattened even more if age had been measured in days.)
To avoid this dependence on the choice of measurement units, one has the option of standardizing the data. This converts the original measurements to unitless variables. Kaufman et al. continues with some interesting considerations (page 11): From a philosophical point of view, standardization does not really solve the problem. Indeed, the choice of measurement units gives rise to relative weights of the variables. Expressing a variable in smaller units will lead to a larger range for that variable, which will then have a large effect on the resulting structure. On the other hand, by standardizing one attempts to give all variables an equal weight, in the hope of achieving objectivity. As such, it may be used by a practitioner who possesses no prior knowledge. However, it may well be that some variables are intrinsically more important than others in a particular application, and then the assignment of weights should be based on subject-matter knowledge (see, e.g., Abrahamowicz, 1985). On the other hand, there have been attempts to devise clustering techniques that are independent of the scale of the variables (Friedman and Rubin, 1967). The proposal of Hardy and Rasson (1982) is to search for a partition that minimizes the total volume of the convex hulls of the clusters. In principle such a method is invariant with respect to linear transformations of the data, but unfortunately no algorithm exists for its implementation (except for an approximation that is restricted to two dimensions). Therefore, the dilemma of standardization appears unavoidable at present and the programs described in this book leave the choice up to the user.
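A small Python sketch of the units effect described in that quote (assuming scikit-learn; the four "people" are made-up values in the spirit of Kaufman & Rousseeuw's Tables 3-4, not their actual data):

import numpy as np
from sklearn.cluster import KMeans

# columns: age in years, height in centimeters
people_cm = np.array([[35, 190.0],
                      [40, 190.5],
                      [35, 160.0],
                      [40, 160.5]])
# same people, but with height expressed in feet
people_ft = people_cm.copy()
people_ft[:, 1] /= 30.48

print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(people_cm))
print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(people_ft))
# With heights in centimeters the clusters split on height;
# with heights in feet the same people split on age instead.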
2,729
How and why do normalization and feature scaling work?
There are two separate issues:

a) Learning the right function. E.g. k-means: the input scale basically specifies the similarity, so the clusters found depend on the scaling. Similarly for regularisation, e.g. l2 weight regularisation: you assume each weight should be "equally small", and if your data are not scaled "appropriately" this will not be the case.

b) Optimisation, namely by gradient descent (e.g. most neural networks). For gradient descent, you need to choose the learning rate, but a good learning rate (at least on the 1st hidden layer) depends on the input scaling: small [relevant] inputs will typically require larger weights, so you would like a larger learning rate for those weights (to get there faster), and vice versa for large inputs. Since you only want to use a single learning rate, you rescale your inputs. (And whitening, i.e. decorrelating, is also important for the same reason.)
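A minimal NumPy sketch of point (b): with one shared learning rate, gradient descent on badly scaled inputs leaves the small-scale weight essentially unlearned, while the rescaled version converges for both (made-up data):

import numpy as np

def fit(X, y, lr, steps=2000):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
x_small = rng.normal(0, 0.01, 200)          # tiny-scale input
x_large = rng.normal(0, 100.0, 200)         # huge-scale input
X = np.column_stack([x_small, x_large])
y = 3 * x_small + 0.05 * x_large + rng.normal(0, 0.1, 200)

print(fit(X, y, lr=1e-4))                   # lr safe for the large input: the small-scale weight barely moves
Xs = (X - X.mean(axis=0)) / X.std(axis=0)   # rescale the inputs
print(fit(Xs, y, lr=0.1))                   # a single learning rate now works for both weights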
2,730
How and why do normalization and feature scaling work?
Why does feature scaling work? I can give you an example (from Quora). Let me answer this from a general ML perspective and not only neural networks. When you collect data and extract features, many times the data is collected on different scales. For example, the age of employees in a company may be between 21-70 years, the size of the house they live in may be 500-5000 sq feet, and their salaries may range from $30000-$80000. In this situation, if you use a simple Euclidean metric, the age feature will not play any role because it is several orders of magnitude smaller than the other features. However, it may contain some important information that may be useful for the task. Here, you may want to normalize the features independently to the same scale, say [0,1], so they contribute equally while computing the distance.
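To make the dominance of the large-scale features concrete, a small NumPy sketch (my own numbers, chosen within the ranges quoted above):

import numpy as np

# [age in years, house size in sq ft, salary in dollars]
alice = np.array([25, 1500, 40_000])
bob   = np.array([65, 1600, 41_000])

print(np.linalg.norm(alice - bob))      # ~1006: the 40-year age gap is drowned out by salary and size

# Min-max scale each feature to [0, 1], using the ranges from the answer as assumed bounds
mins = np.array([21, 500, 30_000])
maxs = np.array([70, 5000, 80_000])
alice_s = (alice - mins) / (maxs - mins)
bob_s   = (bob - mins) / (maxs - mins)

print(np.linalg.norm(alice_s - bob_s))  # now the age difference contributes on an equal footing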
2,731
How and why do normalization and feature scaling work?
Pre-processing often works because it removes features of the data which are not related to the classification problem you are trying to solve. Think for instance about classifying sound data from different speakers. Fluctuations in loudness (amplitude) might be irrelevant, whereas the frequency spectrum is the really relevant aspect. So in this case, normalizing amplitude will be really helpful for most ML algorithms, because it removes an aspect of the data that is irrelevant and would cause a neural network to overfit to spurious patterns.
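A tiny NumPy sketch of that idea (hypothetical pure-tone "recordings", not real speech): normalizing each recording's amplitude removes loudness differences while leaving the frequency content untouched:

import numpy as np

t = np.linspace(0, 1, 8000)
quiet = 0.1 * np.sin(2 * np.pi * 220 * t)   # the same 220 Hz tone ...
loud  = 5.0 * np.sin(2 * np.pi * 220 * t)   # ... recorded much louder

def normalize(signal):
    return signal / np.max(np.abs(signal))  # scale to unit peak amplitude

print(np.allclose(normalize(quiet), normalize(loud)))  # True: loudness no longer distinguishes them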
2,732
How and why do normalization and feature scaling work?
This paper talks only about k-means, but it explains and demonstrates the requirement for data preprocessing quite nicely: Standardization is the central preprocessing step in data mining, to standardize values of features or attributes from different dynamic ranges into a specific range. In this paper, we have analyzed the performances of the three standardization methods on the conventional K-means algorithm. By comparing the results on infectious diseases datasets, it was found that the result obtained by the z-score standardization method is more effective and efficient than the min-max and decimal scaling standardization methods. ... if there are some features with a large size or great variability, these kinds of features will strongly affect the clustering result. In this case, data standardization would be an important preprocessing task to scale or control the variability of the datasets. ... the features need to be dimensionless, since the numerical values of the ranges of dimensional features rely upon the units of measurement and, hence, a selection of the units of measurement may significantly alter the outcomes of clustering. Therefore, one should not employ distance measures like the Euclidean distance without normalization of the data sets. Source: http://maxwellsci.com/print/rjaset/v6-3299-3303.pdf
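For reference, minimal NumPy implementations of the three standardization methods compared in that paper (a sketch of the standard formulas, not code from the paper):

import numpy as np

def z_score(x):
    return (x - x.mean()) / x.std()             # mean 0, standard deviation 1

def min_max(x):
    return (x - x.min()) / (x.max() - x.min())  # rescale to [0, 1]

def decimal_scaling(x):
    j = np.ceil(np.log10(np.max(np.abs(x))))    # smallest j with max|x| / 10^j < 1
    return x / (10 ** j)

x = np.array([120.0, 340.0, 980.0, 455.0])
print(z_score(x), min_max(x), decimal_scaling(x), sep="\n")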
2,733
How and why do normalization and feature scaling work?
I think that this is done simply so that a feature with larger values does not overshadow the effects of a feature with smaller values when learning a classifier. This becomes particularly important if the feature with smaller values actually contributes to class separability. Classifiers like logistic regression would have difficulty learning the decision boundary if, for example, it exists at the micro level of one feature while other features are on the order of millions. Scaling also helps the algorithm converge better. Therefore we don't take any chances when coding these into our algorithms; it is much easier for a classifier to learn the contributions (weights) of features this way. The same holds for K-means when using Euclidean norms (confusion because of scale). Some algorithms can also work without normalizing.
2,734
Why is it that natural log changes are percentage changes? What is about logs that makes this so?
For $x_2$ and $x_1$ close to each other, the percent change $\frac{x_2-x_1}{x_1}$ approximates the log difference $\log x_2 - \log x_1$. Why does the percent change approximate the log difference? An idea from calculus is that you can approximate a smooth function with a line. The linear approximation is simply the first two terms of a Taylor Series. The first order Taylor Expansion of $\log(x)$ around $x=1$ is given by: $$ \log(x) \approx \log(1) + \frac{d}{dx} \left. \log (x) \right|_{x=1} \left( x - 1 \right)$$ The right hand side simplifies to $0 + \frac{1}{1}\left( x - 1\right)$ hence: $$ \log(x) \approx x-1$$ So for $x$ in the neighborhood of 1, we can approximate $\log(x)$ with the line $y = x - 1$ Below is a graph of $y = \log(x)$ and $y = x - 1$. Example: $\log(1.02) = .0198 \approx 1.02 - 1$. Now consider two variables $x_2$ and $x_1$ such that $\frac{x_2}{x_1} \approx 1$. Then the log difference is approximately the percent change $\frac{x_2}{x_1} - 1 = \frac{x_2 - x_1}{x_1}$: $$ \log x_2 - \log x_1 = \log\left( \frac{x_2}{x_1} \right) \approx \frac{x_2}{x_1} - 1 $$ The percent change is a linear approximation of the log difference! Why log differences? Often times when you're thinking in terms of compounding percent changes, the mathematically cleaner concept is to think in terms of log differences. When you're repeatedly multiplying terms together, it's often more convenient to work in logs and instead add terms together. Let's say our wealth at time $T$ is given by: $$ W_T = \prod_{t=1}^T (1 + R_t)$$ Then it might be more convenient to write: $$ \log W_T = \sum_{t=1}^T r_t $$ where $r_t = \log (1 + R_t) = \log W_t - \log W_{t-1}$. Where are percent changes and the log difference NOT the same? For big percent changes, the log difference is not the same thing as the percent change because approximating the curve $y = \log(x)$ with the line $y = x - 1$ gets worse and worse the further you get from $x=1$. For example: $$ \log\left(1.6 \right) - \log(1) = .47 \neq 1.6 - 1$$ What's the log difference in this case? One way to think about it is that a difference in logs of .47 is equivalent to an accumulation of 47 different .01 log differences, which is approximately 47 1% changes all compounded together. \begin{align*} \log(1.6) - \log(1) &= 47 \left( .01 \right) \\ & \approx 47 \left( \log(1.01) \right) \end{align*} Then exponentiate both sides to get: $$ 1.6 \approx 1.01 ^{47}$$ A log difference of .47 is approximately equivalent to 47 different 1% increases compounded, or even better, 470 different .1% increases all compounded etc... Several of the answers here make this idea more explicit.
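A quick numerical check of the approximation, using only Python's math module:

import math

for new, old in [(102, 100), (110, 100), (160, 100)]:
    pct_change = new / old - 1
    log_diff = math.log(new) - math.log(old)
    print(f"{pct_change:.4f} vs {log_diff:.4f}")

# 0.0200 vs 0.0198  -- nearly identical for small changes
# 0.1000 vs 0.0953
# 0.6000 vs 0.4700  -- the approximation breaks down for large changes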
2,735
Why is it that natural log changes are percentage changes? What is about logs that makes this so?
Here's a version for dummies... We have the model $Y= \beta_o+\beta_1X+\varepsilon$ - a simple straight line through the data cloud - and we know that once we estimate the coefficients, a $1\text{-unit}$ increase in the prior value of $X=x_1$ will result in an increase of $\hat \beta_1$ in the value of $Y$, from $Y=y_1$, as $\hat\beta_1(x_1+1) -\hat\beta_1x_1= \hat\beta_1$. But the units can actually be meaningless in absolute values. So we can instead change the model to $\ln(Y)= \delta_o+\delta_1X+\varepsilon$ (brand new coefficients). Now, for the same $1\text{-unit}$ increase in $X$, we have a change $$\ln(y_2)-\ln(y_1)=\ln\left(\frac{y_2}{y_1}\right)=\hat\delta_1(x_1+1) -\hat\delta_1x_1= \hat\delta_1 \tag{*}$$ To see the implications for the change in percentage, we can exponentiate $(*)$: $$\exp(\hat\delta_1)=\frac{y_2}{y_1}=\frac{\color{blue}{y_1}+y_2\color{blue}{-y_1}}{y_1}= 1+\frac{y_2-y_1}{y_1}\tag{**}$$ $\frac{y_2-y_1}{y_1}$ is the relative change, and from $(**)$, $\small 100\,\frac{y_2-y_1}{y_1}=100(\exp(\hat\delta_1)-1)$ is the percentage change. The key to answering the question is to see that $\exp(\hat\delta_1)-1\approx \hat\delta_1$ for small values of $\hat\delta_1$, which amounts to the same use of the first two terms of the Taylor expansion that Matthew used, but this time of $e^x$ (Maclaurin series) evaluated at zero because we are working with exponents, as opposed to logarithms: $$e^x=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\cdots$$ or with $\delta_1$ as the $x$ variable: $$\exp(\hat\delta_1)\approx 1+\hat\delta_1$$ so $\hat\delta_1\approx\exp(\hat\delta_1)-1$ around zero (we evaluated the polynomial expansion at zero when we did the Taylor series). Visually, the curves $y=e^x-1$ and $y=x$ are nearly indistinguishable near $x=0$, which is exactly where the approximation is used.
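A short numerical illustration (made-up coefficient values) of reading $100(\exp(\hat\delta_1)-1)$ versus the naive $100\,\hat\delta_1$:

import math

for delta in (0.01, 0.05, 0.20, 0.69):
    exact_pct = 100 * (math.exp(delta) - 1)
    naive_pct = 100 * delta
    print(f"coef={delta:.2f}: exact {exact_pct:6.2f}%  vs  naive {naive_pct:6.2f}%")

# For small coefficients the two readings agree;
# for a coefficient of 0.69 the true effect is about 99%, not 69%.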
2,736
Why is it that natural log changes are percentage changes? What is about logs that makes this so?
Say you have a model $$\ln y = A+B x$$ Take the derivative of the log: $$\frac{d}{dx}\ln y\equiv\frac{1}{y}\frac{dy}{dx}=B$$ Now you can see that the slope $B$ is the slope of the relative change of $y$: $$\frac{dy}{y}=B\, dx$$ If you didn't have the log transform then you'd get the slope of the absolute change of $y$: $$dy=B\, dx$$ I didn't replace $dx,dy$ with $\Delta x,\Delta y$ to emphasize that this works for small changes.
2,737
Why is it that natural log changes are percentage changes? What is about logs that makes this so?
There are many great explanations in the present answers, but here is another one framed in terms of financial analysis of the accrual of interest on an initial investment. Suppose you have an initial amount of one unit that accrues interest at (nominal) rate $r$ per annum, with interest "compounded" over $n$ periods in the year. At the end of one year, the value of that initial investment of one unit is: $$I(n) = \Big( 1+\frac{r}{n} \Big)^n.$$ The more often this interest is "compounded" the more money you get on your initial investment (since compounding means you are getting interest on your interest). Taking the limit as $n \rightarrow \infty$ we get "continuously compounding interest", which gives: $$I(\infty) = \lim_{n \rightarrow \infty} \Big( 1+\frac{r}{n} \Big)^n = \exp(r).$$ Taking logarithms of both sides gives $r = \ln I(\infty)$, which means that the logarithm of the ratio of the final investment to the initial investment is the continuously compounding interest rate. From this result, we see that logarithmic differences in time-series outcomes can be interpreted as continuously compounding rates of change. (This interpretation is also justified by the answer by aksakal, but the present working gives you another way to look at it.)
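A quick numerical check of that limit in Python (a 5% nominal rate is an arbitrary choice):

import math

r = 0.05  # 5% nominal annual rate
for n in (1, 4, 12, 365, 1_000_000):
    print(n, (1 + r / n) ** n)
print("limit:", math.exp(r))

# 1.05, 1.05095..., 1.05116..., 1.051267..., 1.051271..., converging to exp(0.05) = 1.051271...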
2,738
Why is it that natural log changes are percentage changes? What is about logs that makes this so?
These posts all focus on the difference between two values as a proportion of the first: $\frac{y-x}{x}$ or $\frac{y}{x} - 1$. They explain why $$\frac{y}{x} - 1 \approx \ln(\frac{y}{x}) = \ln(y) - \ln(x).$$ You might be interested in the difference as a proportion of the average rather than as a proportion of the first value, $\frac{y-x}{\frac{y+x}{2}}$ rather than $\frac{y-x}{x}$. The difference as a proportion of the average is relevant when comparing methods of measurement, e.g. when doing a Bland-Altman plot with differences proportional to the mean. In this case, approximating the proportionate difference with the difference in the logarithms is even better: $$\frac{y-x}{\frac{y+x}{2}} \approx \ln(\frac{y}{x}) .$$ Here is why: Let $z = \frac{y}{x}$. $$\frac{y-x}{\frac{y+x}{2}} = \frac{2(z-1)}{z+1}$$ Compare the Taylor series about $z =1$ for $\frac{2(z-1)}{z+1}$ and $\ln(z)$. $$\frac{2(z-1)}{z+1} = (z-1) - \frac{1}{2}(z-1)^2 + \frac{1}{4}(z-1)^3 + ... + (-1)^{k+1}\frac{1}{2^{k-1}}(z-1)^k + ....$$ $$\ln(z) = (z-1) - \frac{1}{2}(z-1)^2 + \frac{1}{3}(z-1)^3 + ... + (-1)^{k+1}\frac{1}{k}(z-1)^k + ....$$ The series are the same out to the $(z-1)^2$ term. The approximation works quite well from $z=0.5$ to $z=2$, i.e., when one of the values is up to twice as large as the other, as shown by this figure. A value of $z=0.5$ or $z=2$ corresponds to a difference that is 2/3 of the average: $$\frac{y-x}{\frac{y+x}{2}} = \frac{2x-x}{\frac{2x+x}{2}} = \frac{x}{\frac{3x}{2}} = \frac{2}{3}$$
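A brief numerical comparison of the three quantities in Python (arbitrary example values):

import math

for x, y in [(100, 105), (100, 150), (100, 200)]:
    prop_of_first = (y - x) / x
    prop_of_mean = (y - x) / ((y + x) / 2)
    log_diff = math.log(y / x)
    print(f"{prop_of_first:.4f}  {prop_of_mean:.4f}  {log_diff:.4f}")

# The log difference tracks the difference-over-average much more closely,
# staying usable even when one value is twice the other (last row).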
2,739
Why is it that natural log changes are percentage changes? What is about logs that makes this so?
This answer does not assume a linear regression framework, nor does it rely on any approximations. First, let's define some terms: $$ Old=the\ original\ value\ (or\ variable) $$ $$ New=the\ new\ value\ (or\ variable) $$ $$ PC = Proportion\ Change $$ PC also equals PercentChange/100, and has the domain $[-1,\infty)$. I find it easier to work with proportions rather than percentages. Also, the New and Old data are not transformed. Now, let's calculate PC: $$ New=Old\cdot(1+PC) $$ $$ \frac{New}{Old}=1+PC $$ $$ PC=\frac{New}{Old}-1 $$ Now, let's define and calculate the difference of the log-transformed data: $$ \Delta=\log(New)-\log(Old) $$ $$ \Delta=\log\left(\frac{New}{Old}\right) $$ $$ \Delta=\log(1+PC) $$ Now, let's rearrange and solve for PC: $$ e^\Delta=1+PC $$ $$ PC=e^\Delta-1 $$ $$ PC=e^{\log\left(\frac{New}{Old}\right)}-1 $$ This answer outlines a similar derivation.
2,740
Cross Entropy vs. Sparse Cross Entropy: When to use one over the other
Both categorical cross entropy and sparse categorical cross entropy have the same loss function, which you have mentioned above. The only difference is the format in which you provide $Y_i$ (i.e., the true labels). If your $Y_i$'s are one-hot encoded, use categorical_crossentropy. Examples (for a 3-class classification): [1,0,0], [0,1,0], [0,0,1]. But if your $Y_i$'s are integers, use sparse_categorical_crossentropy. Examples for the above 3-class classification problem: [1], [2], [3]. The usage entirely depends on how you load your dataset. One advantage of using sparse categorical cross entropy is that it saves memory as well as computation time, because it uses a single integer for a class rather than a whole vector.
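A minimal Keras-style sketch of the two label formats (assuming TensorFlow/Keras is installed; the data and layer sizes are arbitrary placeholders):

import numpy as np
import tensorflow as tf

num_classes = 3
y_int = np.array([0, 2, 1])                                   # integer labels (0-indexed in Keras)
y_onehot = tf.keras.utils.to_categorical(y_int, num_classes)  # [[1,0,0],[0,0,1],[0,1,0]]
X = np.random.rand(3, 4).astype("float32")

def make_model(loss):
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(8, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss=loss)
    return model

make_model("categorical_crossentropy").fit(X, y_onehot, epochs=1, verbose=0)      # one-hot targets
make_model("sparse_categorical_crossentropy").fit(X, y_int, epochs=1, verbose=0)  # integer targets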
2,741
Cross Entropy vs. Sparse Cross Entropy: When to use one over the other
The formula which you posted in your question refers to binary_crossentropy, not categorical_crossentropy. The former is used when you have a single output unit (a binary problem). The latter refers to a situation when you have multiple classes, and its formula looks like below: $$J(\textbf{w}) = -\sum_{i=1}^{N} y_i \text{log}(\hat{y}_i).$$ This loss works, as skadaver mentioned, on one-hot encoded values, e.g. [1,0,0], [0,1,0], [0,0,1]. The sparse_categorical_crossentropy is a little bit different: it works on integers, that's true, but these integers must be the class indices, not actual values. This loss computes the logarithm only for the output index which the ground truth points to. So when the model output is for example [0.1, 0.3, 0.7] and the ground truth is 3 (if indexed from 1), the loss computes only the logarithm of 0.7. This doesn't change the final value, because in the regular version of categorical crossentropy the other terms are immediately multiplied by zero (because of the one-hot encoding characteristic). Thanks to that, it computes the logarithm once per instance and omits the summation, which leads to better performance. The formula might look like this: $$J(\textbf{w}) = -\text{log}(\hat{y}_y).$$
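A small NumPy sketch of that equivalence (my own example values, with the model output renormalized to sum to 1): with one-hot targets the sum collapses to the single log picked out by the integer label:

import numpy as np

y_hat = np.array([0.1, 0.3, 0.7]) / 1.1   # model output, renormalized to sum to 1
true_class = 2                            # 0-indexed ground-truth label
one_hot = np.eye(3)[true_class]           # [0., 0., 1.]

categorical = -np.sum(one_hot * np.log(y_hat))  # full sum over classes
sparse = -np.log(y_hat[true_class])             # just the log at the true index

print(np.isclose(categorical, sparse))          # True: same value, less work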
2,742
Cross Entropy vs. Sparse Cross Entropy: When to use one over the other
I have no better answer than the links, and I too encountered the same question. I just want to point out that the formula for the loss function (cross entropy) seems to be a little bit erroneous (and might be misleading). One should probably drop the 2nd term in the bracket to have simply $$J(\textbf{w}) = -\frac{1}{N} \sum_{i=1}^{N} y_i \text{log}(\hat{y}_i).$$ Sorry for writing my comment here, but I haven't got enough reputation points to be able to comment...
2,743
Cross Entropy vs. Sparse Cross Entropy: When to use one over the other
By the nature of your question, it sounds like you have 3 or more categories. However, for the sake of completeness I would like to add that if you are dealing with a binary classification, using binary cross entropy might be more appropriate. Furthermore, be careful to choose the loss and the metric consistently, since a mismatch can lead to unexpected and weird behaviour in the reported performance of your model.
2,744
Multivariate multiple regression in R
Briefly stated, this is because base-R's manova(lm()) uses sequential model comparisons for so-called Type I sum of squares, whereas car's Manova() by default uses model comparisons for Type II sum of squares. I assume you're familiar with the model-comparison approach to ANOVA or regression analysis. This approach defines these tests by comparing a restricted model (corresponding to a null hypothesis) to an unrestricted model (corresponding to the alternative hypothesis). If you're not familiar with this idea, I recommend Maxwell & Delaney's excellent "Designing experiments and analyzing data" (2004). For type I SS, the restricted model in a regression analysis for your first predictor c is the null model which only uses the intercept (the "absolute term"): lm(Y ~ 1), where Y in your case would be the multivariate DV defined by cbind(A, B). The unrestricted model then adds predictor c, i.e. lm(Y ~ c + 1). For type II SS, the unrestricted model in a regression analysis for your first predictor c is the full model which includes all predictors except for their interactions, i.e., lm(Y ~ c + d + e + f + g + H + I). The restricted model removes predictor c from the unrestricted model, i.e., lm(Y ~ d + e + f + g + H + I). Since both functions rely on different model comparisons, they lead to different results. The question which one is preferable is hard to answer - it really depends on your hypotheses. What follows assumes you're familiar with how multivariate test statistics like the Pillai-Bartlett Trace are calculated based on the null model, the full model, and the pair of restricted-unrestricted models. For brevity, I only consider predictors c and H, and only test for c. N <- 100 # generate some data: number of subjects c <- rbinom(N, 1, 0.2) # dichotomous predictor c H <- rnorm(N, -10, 2) # metric predictor H A <- -1.4*c + 0.6*H + rnorm(N, 0, 3) # DV A B <- 1.4*c - 0.6*H + rnorm(N, 0, 3) # DV B Y <- cbind(A, B) # DV matrix my.model <- lm(Y ~ c + H) # the multivariate model summary(manova(my.model)) # from base-R: SS type I # Df Pillai approx F num Df den Df Pr(>F) # c 1 0.06835 3.5213 2 96 0.03344 * # H 1 0.32664 23.2842 2 96 5.7e-09 *** # Residuals 97 For comparison, the result from car's Manova() function using SS type II. library(car) # for Manova() Manova(my.model, type="II") # Type II MANOVA Tests: Pillai test statistic # Df test stat approx F num Df den Df Pr(>F) # c 1 0.05904 3.0119 2 96 0.05387 . # H 1 0.32664 23.2842 2 96 5.7e-09 *** Now manually verify both results. Build the design matrix $X$ first and compare it to R's design matrix. X <- cbind(1, c, H) XR <- model.matrix(~ c + H) all.equal(X, XR, check.attributes=FALSE) # [1] TRUE Now define the orthogonal projection for the full model ($P_{f} = X (X'X)^{-1} X'$, using all predictors). This gives us the matrix $W = Y' (I-P_{f}) Y$. Pf <- X %*% solve(t(X) %*% X) %*% t(X) Id <- diag(N) WW <- t(Y) %*% (Id - Pf) %*% Y Restricted and unrestricted models for SS type I plus their projections $P_{rI}$ and $P_{uI}$, leading to matrix $B_{I} = Y' (P_{uI} - P_{rI}) Y$. XrI <- X[ , 1] PrI <- XrI %*% solve(t(XrI) %*% XrI) %*% t(XrI) XuI <- X[ , c(1, 2)] PuI <- XuI %*% solve(t(XuI) %*% XuI) %*% t(XuI) Bi <- t(Y) %*% (PuI - PrI) %*% Y Restricted and unrestricted models for SS type II plus their projections $P_{rII}$ and $P_{uII}$, leading to matrix $B_{II} = Y' (P_{uII} - P_{rII}) Y$. 
XrII <- X[ , -2] PrII <- XrII %*% solve(t(XrII) %*% XrII) %*% t(XrII) PuII <- Pf Bii <- t(Y) %*% (PuII - PrII) %*% Y Pillai-Bartlett trace for both types of SS: trace of $(B + W)^{-1} B$. (PBTi <- sum(diag(solve(Bi + WW) %*% Bi))) # SS type I # [1] 0.0683467 (PBTii <- sum(diag(solve(Bii + WW) %*% Bii))) # SS type II # [1] 0.05904288 Note that the calculations for the orthogonal projections mimic the mathematical formula, but are a bad idea numerically. One should really use QR-decompositions or SVD in combination with crossprod() instead.
2,745
Multivariate multiple regression in R
Well, I still don't have enough points to comment on the previous answer and that's why I am writing it as a separate answer, so please pardon me. (If possible please push me over the 50 rep points ;) So here are my 2 cents: Type I, II and III sums of squares are essentially variations that only matter when the data are unbalanced. (Definition of unbalanced: not having an equal number of observations in each of the strata.) If the data are balanced, Type I, II and III testing give exactly the same results. So what happens when the data are unbalanced? Consider a model that includes two factors A and B; there are therefore two main effects, and an interaction, AB. SS(A, B, AB) indicates the full model, SS(A, B) indicates the model with no interaction, SS(B, AB) indicates the model that does not account for effects from factor A, and so on. This notation now makes sense; just keep it in mind. SS(AB | A, B) = SS(A, B, AB) - SS(A, B), SS(A | B, AB) = SS(A, B, AB) - SS(B, AB), SS(B | A, AB) = SS(A, B, AB) - SS(A, AB), SS(A | B) = SS(A, B) - SS(B), SS(B | A) = SS(A, B) - SS(A). Type I, also called "sequential" sum of squares: 1) SS(A) for factor A. 2) SS(B | A) for factor B. 3) SS(AB | B, A) for interaction AB. So we estimate the main effect of A first, then the effect of B given A, and then the interaction AB given A and B. (This is where, with unbalanced data, the differences kick in, because we estimate one main effect first, then the other main effect given it, and then the interaction, in a "sequence".) Type II: 1) SS(A | B) for factor A. 2) SS(B | A) for factor B. Type II tests the significance of the main effect of A after B and of B after A. Why is there no SS(AB | B, A)? The caveat is that the Type II method should be used only when we have already tested the interaction and found it insignificant. Given that there is no interaction (SS(AB | B, A) is insignificant), the Type II test has better power than Type III. Type III: 1) SS(A | B, AB) for factor A. 2) SS(B | A, AB) for factor B. So if we tested for the interaction and it was significant, we need to use Type III, as it takes the interaction term into account. As @caracal has said already, when the data are balanced, the factors are orthogonal, and Types I, II and III all give the same results. I hope this helps! Disclosure: most of it is not my own work. I found this excellent page linked and felt like boiling it down further to make it simpler.
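A minimal sketch of the Type I vs. Type II difference using Python's statsmodels on a hypothetical unbalanced two-factor design (the data, factor names and effect sizes below are made up): the sequential and Type II tables differ here, and would coincide on a balanced design.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "A": rng.choice(["a1", "a2"], size=n, p=[0.3, 0.7]),  # unequal cell counts
    "B": rng.choice(["b1", "b2"], size=n, p=[0.6, 0.4]),
})
df["y"] = (1.0 * (df["A"] == "a2") - 0.5 * (df["B"] == "b2")
           + rng.normal(0, 1, size=n))

fit = smf.ols("y ~ C(A) * C(B)", data=df).fit()
print(sm.stats.anova_lm(fit, typ=1))  # Type I: sequential sums of squares
print(sm.stats.anova_lm(fit, typ=2))  # Type II: each main effect after the other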
2,746
One-hot vs dummy encoding in Scikit-learn
Scikit-learn's linear regression model allows users to disable intercept. So for one-hot encoding, should I always set fit_intercept=False? For dummy encoding, fit_intercept should always be set to True? I do not see any "warning" on the website. For an unregularized linear model with one-hot encoding, yes, you need to set the intercept to be false or else incur perfect collinearity. sklearn also allows for a ridge shrinkage penalty, and in that case it is not necessary, and in fact you should include both the intercept and all the levels. For dummy encoding you should include an intercept, unless you have standardized all your variables, in which case the intercept is zero. Since one-hot encoding generates more variables, does it have more degree of freedom than dummy encoding? The intercept is an additional degree of freedom, so in a well specified model it all equals out. For the second one, what if there are k categorical variables? k variables are removed in dummy encoding. Is the degree of freedom still the same? You could not fit a model in which you used all the levels of both categorical variables, intercept or not. For, as soon as you have one-hot-encoded all the levels in one variable in the model, say with binary variables $x_1, x_2, \ldots, x_n$, then you have a linear combination of predictors equal to the constant vector $$ x_1 + x_2 + \cdots + x_n = 1 $$ If you then try to enter all the levels of another categorical $x'$ into the model, you end up with a distinct linear combination equal to a constant vector $$ x_1' + x_2' + \cdots + x_k' = 1 $$ and so you have created a linear dependency $$ x_1 + x_2 + \cdots x_n - x_1' - x_2' - \cdots - x_k' = 0$$ So you must leave out a level in the second variable, and everything lines up properly. Say, I have 3 categorical variables, each of which has 4 levels. In dummy encoding, 3*4-3=9 variables are built with one intercept. In one-hot encoding, 3*4=12 variables are built without an intercept. Am I correct? The second thing does not actually work. The $3 \times 4 = 12$ column design matrix you create will be singular. You need to remove three columns, one from each of three distinct categorical encodings, to recover non-singularity of your design.
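A minimal scikit-learn sketch of the first point, on made-up data: one-hot encoding with fit_intercept=False and dummy (drop-first) encoding with an intercept span the same column space, so an unregularized linear regression gives identical fitted values either way.

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
cat = pd.Series(rng.choice(["a", "b", "c"], size=60))
y = cat.map({"a": 1.0, "b": 2.0, "c": 3.0}).to_numpy() + rng.normal(0, 0.1, size=60)

X_onehot = pd.get_dummies(cat).to_numpy(dtype=float)                  # 3 columns, all levels
X_dummy = pd.get_dummies(cat, drop_first=True).to_numpy(dtype=float)  # 2 columns, level "a" dropped

m_onehot = LinearRegression(fit_intercept=False).fit(X_onehot, y)
m_dummy = LinearRegression(fit_intercept=True).fit(X_dummy, y)
print(np.allclose(m_onehot.predict(X_onehot), m_dummy.predict(X_dummy)))  # True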
2,747
One-hot vs dummy encoding in Scikit-learn
To add a little to @MatthewDrury's answer regarding this question: Say, I have 3 categorical variables, each of which has 4 levels. In dummy encoding, 3*4-3=9 variables are built with one intercept. In one-hot encoding, 3*4=12 variables are built without an intercept. Am I correct? We can examine what the design matrix would look like with and without an intercept by using model.matrix from R. With an intercept: > df <- expand.grid(w = letters[1:4], x = letters[5:8], y = letters[9:12]) > model.matrix(~ w + x + y, df) (Intercept) wb wc wd xf xg xh yj yk yl 1 1 0 0 0 0 0 0 0 0 0 2 1 1 0 0 0 0 0 0 0 0 3 1 0 1 0 0 0 0 0 0 0 4 1 0 0 1 0 0 0 0 0 0 5 1 0 0 0 1 0 0 0 0 0 6 1 1 0 0 1 0 0 0 0 0 7 1 0 1 0 1 0 0 0 0 0 8 1 0 0 1 1 0 0 0 0 0 9 1 0 0 0 0 1 0 0 0 0 10 1 1 0 0 0 1 0 0 0 0 11 1 0 1 0 0 1 0 0 0 0 12 1 0 0 1 0 1 0 0 0 0 13 1 0 0 0 0 0 1 0 0 0 14 1 1 0 0 0 0 1 0 0 0 15 1 0 1 0 0 0 1 0 0 0 16 1 0 0 1 0 0 1 0 0 0 17 1 0 0 0 0 0 0 1 0 0 18 1 1 0 0 0 0 0 1 0 0 19 1 0 1 0 0 0 0 1 0 0 20 1 0 0 1 0 0 0 1 0 0 21 1 0 0 0 1 0 0 1 0 0 22 1 1 0 0 1 0 0 1 0 0 23 1 0 1 0 1 0 0 1 0 0 24 1 0 0 1 1 0 0 1 0 0 25 1 0 0 0 0 1 0 1 0 0 26 1 1 0 0 0 1 0 1 0 0 27 1 0 1 0 0 1 0 1 0 0 28 1 0 0 1 0 1 0 1 0 0 29 1 0 0 0 0 0 1 1 0 0 30 1 1 0 0 0 0 1 1 0 0 31 1 0 1 0 0 0 1 1 0 0 32 1 0 0 1 0 0 1 1 0 0 33 1 0 0 0 0 0 0 0 1 0 34 1 1 0 0 0 0 0 0 1 0 35 1 0 1 0 0 0 0 0 1 0 36 1 0 0 1 0 0 0 0 1 0 37 1 0 0 0 1 0 0 0 1 0 38 1 1 0 0 1 0 0 0 1 0 39 1 0 1 0 1 0 0 0 1 0 40 1 0 0 1 1 0 0 0 1 0 41 1 0 0 0 0 1 0 0 1 0 42 1 1 0 0 0 1 0 0 1 0 43 1 0 1 0 0 1 0 0 1 0 44 1 0 0 1 0 1 0 0 1 0 45 1 0 0 0 0 0 1 0 1 0 46 1 1 0 0 0 0 1 0 1 0 47 1 0 1 0 0 0 1 0 1 0 48 1 0 0 1 0 0 1 0 1 0 49 1 0 0 0 0 0 0 0 0 1 50 1 1 0 0 0 0 0 0 0 1 51 1 0 1 0 0 0 0 0 0 1 52 1 0 0 1 0 0 0 0 0 1 53 1 0 0 0 1 0 0 0 0 1 54 1 1 0 0 1 0 0 0 0 1 55 1 0 1 0 1 0 0 0 0 1 56 1 0 0 1 1 0 0 0 0 1 57 1 0 0 0 0 1 0 0 0 1 58 1 1 0 0 0 1 0 0 0 1 59 1 0 1 0 0 1 0 0 0 1 60 1 0 0 1 0 1 0 0 0 1 61 1 0 0 0 0 0 1 0 0 1 62 1 1 0 0 0 0 1 0 0 1 63 1 0 1 0 0 0 1 0 0 1 64 1 0 0 1 0 0 1 0 0 1 Without an intercept: > model.matrix(~ w + x + y - 1, df) wa wb wc wd xf xg xh yj yk yl 1 1 0 0 0 0 0 0 0 0 0 2 0 1 0 0 0 0 0 0 0 0 3 0 0 1 0 0 0 0 0 0 0 4 0 0 0 1 0 0 0 0 0 0 5 1 0 0 0 1 0 0 0 0 0 6 0 1 0 0 1 0 0 0 0 0 7 0 0 1 0 1 0 0 0 0 0 8 0 0 0 1 1 0 0 0 0 0 9 1 0 0 0 0 1 0 0 0 0 10 0 1 0 0 0 1 0 0 0 0 11 0 0 1 0 0 1 0 0 0 0 12 0 0 0 1 0 1 0 0 0 0 13 1 0 0 0 0 0 1 0 0 0 14 0 1 0 0 0 0 1 0 0 0 15 0 0 1 0 0 0 1 0 0 0 16 0 0 0 1 0 0 1 0 0 0 17 1 0 0 0 0 0 0 1 0 0 18 0 1 0 0 0 0 0 1 0 0 19 0 0 1 0 0 0 0 1 0 0 20 0 0 0 1 0 0 0 1 0 0 21 1 0 0 0 1 0 0 1 0 0 22 0 1 0 0 1 0 0 1 0 0 23 0 0 1 0 1 0 0 1 0 0 24 0 0 0 1 1 0 0 1 0 0 25 1 0 0 0 0 1 0 1 0 0 26 0 1 0 0 0 1 0 1 0 0 27 0 0 1 0 0 1 0 1 0 0 28 0 0 0 1 0 1 0 1 0 0 29 1 0 0 0 0 0 1 1 0 0 30 0 1 0 0 0 0 1 1 0 0 31 0 0 1 0 0 0 1 1 0 0 32 0 0 0 1 0 0 1 1 0 0 33 1 0 0 0 0 0 0 0 1 0 34 0 1 0 0 0 0 0 0 1 0 35 0 0 1 0 0 0 0 0 1 0 36 0 0 0 1 0 0 0 0 1 0 37 1 0 0 0 1 0 0 0 1 0 38 0 1 0 0 1 0 0 0 1 0 39 0 0 1 0 1 0 0 0 1 0 40 0 0 0 1 1 0 0 0 1 0 41 1 0 0 0 0 1 0 0 1 0 42 0 1 0 0 0 1 0 0 1 0 43 0 0 1 0 0 1 0 0 1 0 44 0 0 0 1 0 1 0 0 1 0 45 1 0 0 0 0 0 1 0 1 0 46 0 1 0 0 0 0 1 0 1 0 47 0 0 1 0 0 0 1 0 1 0 48 0 0 0 1 0 0 1 0 1 0 49 1 0 0 0 0 0 0 0 0 1 50 0 1 0 0 0 0 0 0 0 1 51 0 0 1 0 0 0 0 0 0 1 52 0 0 0 1 0 0 0 0 0 1 53 1 0 0 0 1 0 0 0 0 1 54 0 1 0 0 1 0 0 0 0 1 55 0 0 1 0 1 0 0 0 0 1 56 0 0 0 1 1 0 0 0 0 1 57 1 0 0 0 0 1 0 0 0 1 58 0 1 0 0 0 1 0 0 0 1 59 0 0 1 0 0 1 0 0 0 1 60 0 0 0 1 0 1 0 0 0 1 61 1 0 0 0 0 0 1 0 0 1 62 0 1 0 0 0 0 1 0 0 1 63 0 0 1 0 0 0 1 0 0 1 
64 0 0 0 1 0 0 1 0 0 1 We can see that when we use an intercept, model.matrix uses dummy encoding with each variable w, x, and y being turned into 3 dummy variables, plus an intercept column. So there is a total of 10 degrees of freedom. When we don't use an intercept, model.matrix creates 4 dummy variables for w and 3 dummy variables for x and y (and no intercept column). So the number of degrees of freedom is still 10.
2,748
One-hot vs dummy encoding in Scikit-learn
I totally agree with @Matthew Drury's and @Cameron Bieganek's analysis of perfect collinearity and degrees of freedom. However, I want to argue here that we do not need to avoid perfect collinearity if we are using methods such as gradient descent as our optimizer. (Update: I just realized that there are more situations where we do not need to avoid collinearity. For example, we do not need to avoid perfect collinearity if we do regression with a regularizer either, since it adds a scaled identity matrix $\lambda I$ to the matrix $X^TX$ below, making it invertible again.) The reason why we might want to avoid perfect collinearity is that when we use linear regression with an MSE loss, we can solve for the closed-form solution, which involves the inverse of a matrix built from $X$, and perfect collinearity makes this matrix non-invertible. However, in practice, as computing a matrix inverse is quite expensive, $\mathcal{O}(n^3)$, we can use other, faster methods such as gradient descent to compute an approximate solution, and this process does not involve a matrix inverse. Thus, perfect collinearity can be tolerated here. (Maybe this is why they do not have a warning.) So, we could use either: one-hot with intercept, one-hot without intercept, or dummy with intercept, and they would generate quite similar results. I ran a regression on the mpg dataset with sklearn, applying one-hot or dummy encoding to the "origin" feature, which has three categories, and the results are quite similar. (Please pay attention to how alike the residuals and the coefficients are; I marked the parameters in front of the encoded category terms with red rectangles, and the others are the parameters of some continuous features. Detailed code could be seen here; sorry for the lack of comments, and a large part of the code comes from the tensorflow tutorials.) Relationship between the three results: BTW, we can also observe that in the case of one-hot without intercept, the parameters in front of the last three one-hot features are actually equal to the element-wise sum of the intercept and the parameters in front of the last three one-hot features in the one-hot-with-intercept case, which can be explained by the perfect collinearity. We can also notice that in the dummy-with-intercept case, the intercept is actually the parameter (-13.71778...) of the second categorical encoded term in the one-hot-without-intercept case. And the two parameters of the categorical encoded terms in the dummy case are the differences between the corresponding parameters and that second term's parameter, which is consistent with the interpretation of the parameters in front of categorical terms in econometrics: how much of a difference each of the other categories makes to the output compared to the base category.
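A small hedged sketch of the regularization point (toy data, not the mpg example above): with a ridge penalty the normal equations involve $X^TX + \lambda I$, which is invertible even when the full set of one-hot columns plus the intercept is perfectly collinear, so the fit runs without complaint.

import numpy as np
import pandas as pd
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
cat = pd.Series(rng.choice(["a", "b", "c"], size=80))
y = cat.map({"a": 0.0, "b": 2.0, "c": -1.0}).to_numpy() + rng.normal(0, 0.2, size=80)

X_full = pd.get_dummies(cat).to_numpy(dtype=float)  # all 3 levels: collinear with the intercept

model = Ridge(alpha=1.0, fit_intercept=True).fit(X_full, y)
print(model.intercept_, model.coef_)  # a well-defined (shrunken) solution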
2,749
How is the minimum of a set of IID random variables distributed?
If the cdf of $X_i$ is denoted by $F(x)$, then the cdf of the minimum is given by $1-[1-F(x)]^n$.
2,750
How is the minimum of a set of IID random variables distributed?
If the CDF of $X_i$ is denoted by $F(x)$, then the CDF of the minimum is given by $1-[1-F(x)]^n$. Reasoning: given $n$ random variables, the event $Y\leq y$, i.e. $\min(X_1,\dots, X_n)\leq y$, means that at least one $X_i$ is at most $y$. The probability that at least one $X_i$ is at most $y$ equals one minus the probability that all $X_i$ are greater than $y$, i.e. $P(Y\leq y) = 1 - P(X_1 \gt y,\dots, X_n \gt y)$. If the $X_i$'s are independent and identically distributed, then the probability that all $X_i$ are greater than $y$ is $[1-F(y)]^n$. Therefore, the probability is $P(Y \leq y) = 1-[1-F(y)]^n$. Example: say $X_i \sim \text{Uniform}(0,1)$; then intuitively the probability that $\min(X_1,\dots, X_n)\leq 1$ should be equal to 1 (as the minimum is always at most 1, since $0\leq X_i\leq 1$ for all $i$). In this case $F(1)=1$, so the formula indeed gives probability 1.
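A quick Monte Carlo sanity check of the formula (a sketch with made-up values of $n$ and $y$), comparing the empirical CDF of the minimum of i.i.d. Uniform(0,1) draws to $1-(1-y)^n$:

import numpy as np

rng = np.random.default_rng(0)
n, reps, y = 5, 200_000, 0.2
mins = rng.uniform(0, 1, size=(reps, n)).min(axis=1)
print(np.mean(mins <= y))   # empirical P(min <= y), roughly 0.672
print(1 - (1 - y) ** n)     # exact value: 1 - 0.8**5 = 0.67232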
2,751
How is the minimum of a set of IID random variables distributed?
Rob Hyndman gave the easy exact answer for a fixed n. If you're interested in asymptotic behavior for large n, this is handled in the field of extreme value theory. There is a small family of possible limiting distributions; see for example the first chapters of this book.
2,752
Optimization when Cost Function Slow to Evaluate
TL;DR I recommend using LIPO. It is provably correct and provably better than pure random search (PRS). It is also extremely simple to implement, and has no hyperparameters. I have not conducted an analysis that compares LIPO to BO, but my expectation is that the simplicity and efficiency of LIPO imply that it will out-perform BO. (See also: What are some of the disavantage of bayesian hyper parameter optimization?) LIPO and its Variants This is an exciting arrival which, if it is not new, is certainly new to me. It proceeds by alternating between placing informed bounds on the function, sampling from the best bound, and using quadratic approximations. I'm still working through all the details, but I think this is very promising. This is a nice blog write-up, and the paper is Cédric Malherbe and Nicolas Vayatis "Global optimization of Lipschitz functions." LIPO is most useful when the number of hyper-parameters that you are searching over is small. Bayesian Optimization Bayesian Optimization-type methods build Gaussian process surrogate models to explore the parameter space. The main idea is that parameter tuples that are closer together will have similar function values, so the assumption of a co-variance structure among points allows the algorithm to make educated guesses about which parameter tuple is most worthwhile to try next. This strategy helps to reduce the number of function evaluations; in fact, the motivation of BO methods is to keep the number of function evaluations as low as possible while "using the whole buffalo" to make good guesses about what point to test next. There are different figures of merit (expected improvement, expected quantile improvement, probability of improvement...) which are used to compare points to visit next. Contrast this to something like a grid search, which will never use any information from its previous function evaluations to inform where to go next. Incidentally, this is also a powerful global optimization technique, and as such makes no assumptions about the convexity of the surface. Additionally, if the function is stochastic (say, evaluations have some inherent random noise), this can be directly accounted for in the GP model. On the other hand, you'll have to fit at least one GP at every iteration (or several, picking the "best", or averaging over alternatives, or fully Bayesian methods). Then, the model is used to make (probably thousands of) predictions, usually in the form of multistart local optimization, with the observation that it's much cheaper to evaluate the GP prediction function than the function under optimization. But even with this computational overhead, it tends to be the case that even nonconvex functions can be optimized with a relatively small number of function calls. A downside to GP is that the number of iterations to get a good result tends to grow with the number of hyper-parameters to search over. A widely-cited paper on the topic is Jones et al (1998), "Efficient Global Optimization of Expensive Black-Box Functions." But there are many variations on this idea. Random Search Even when the cost function is expensive to evaluate, random search can still be useful. Random search is dirt-simple to implement. The only choice for a researcher to make is setting the probability $p$ that you want your results to lie in some quantile $q$; the rest proceeds automatically using results from basic probability. 
Suppose your quantile is $q = 0.95$ and you want a $p=0.95$ probability that the model results are in the top $100\times (1-q)=5$ percent of all hyperparameter tuples. The probability that all $n$ attempted tuples are not in that window is $q^n = 0.95^n$ (because they are chosen independently at random from the same distribution), so the probability that at least one tuple is in that region is $1 - 0.95^n$. Putting it all together, we have $$ 1 - q^n \ge p \implies n \ge \frac{\log(1 - p)}{\log(q)} $$ which in our specific case yields $n \ge 59$. This result is why most people recommend $n=60$ attempted tuples for random search. It's worth noting that $n=60$ is comparable to the number of experiments required to get good results with Gaussian Process-based methods when there are a moderate number of parameters. Unlike Gaussian Processes, for random search the number of queried tuples does not grow with the number of hyper-parameters to search over. Indeed, the dimension of the problem does not appear in the expression that recommends attempting $n=60$ random values. However, this does not mean that random search is "immune" to the curse of dimensionality. Increasing the dimension of the hyperparameter search space can mean that the average result drawn from among the "best 5% of values" is still very poor. More information: The "Amazing Hidden Power" of Random Search? The intuition is that if we increase the volume of the search space, then we are naturally also increasing the volume of 5% of the search space. Since you have a probabilistic characterization of how good the results are, this result can be a persuasive tool to convince your boss that running additional experiments will yield diminishing marginal returns.
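A tiny sketch of the sample-size calculation described above (pure arithmetic, no assumptions beyond the i.i.d.-sampling model): how many random configurations are needed so that, with probability $p$, at least one lands in the top $1-q$ fraction.

import math

def n_random_trials(p=0.95, q=0.95):
    """Smallest n with 1 - q**n >= p."""
    return math.ceil(math.log(1 - p) / math.log(q))

print(n_random_trials())        # 59, the number quoted above
print(n_random_trials(p=0.99))  # 90: higher confidence costs more evaluations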
2,753
Optimization when Cost Function Slow to Evaluate
The literature on evaluation of expensive black-box functions is quite vast and it is usually based on surrogate-model methods, as other people pointed out. Black-box here means that little is known about the underlying function; the only thing you can do is evaluate $f(x)$ at a chosen point $x$ (gradients are usually not available). I would say that the current gold standard for evaluation of (very) costly black-box functions is (global) Bayesian optimization (BO). Sycorax already described some features of BO, so I am just adding some information that might be useful. As a starting point, you might want to read this overview paper [1]. There is also a more recent one [2]. Bayesian optimization has been growing steadily as a field in recent years, with a series of dedicated workshops (e.g., BayesOpt, and check out these videos from the Sheffield workshop on BO), since it has very practical applications in machine learning, such as for optimizing hyper-parameters of ML algorithms -- see e.g. this paper [3] and the related toolbox, SpearMint. There are many other packages in various languages that implement various kinds of Bayesian optimization algorithms. As I mentioned, the underlying requirement is that each function evaluation is very costly, so that the BO-related computations add a negligible overhead. To give a ballpark, BO can definitely be helpful if your function evaluates in a time of the order of minutes or more. You can also apply it to quicker computations (e.g. tens of seconds), but depending on which algorithm you use you may have to adopt various approximations. If your function evaluates on the time scale of seconds, I think you're hitting the boundaries of current research and perhaps other methods might become more useful. Also, I have to say, BO is rarely truly black-box and you often have to tweak the algorithms, sometimes a lot, to make them work at full potential with a specific real-world problem. BO aside, for a review of general derivative-free optimization methods you can have a look at this review [4] and check for algorithms that have good properties of quick convergence. For example, Multi-level Coordinate Search (MCS) usually converges very quickly to a neighbourhood of a minimum (not always the global minimum, of course). MCS is intended for global optimization, but you can make it local by setting appropriate bound constraints. Finally, if you are interested in BO for target functions that are both costly and noisy, see my answer to this question. References: [1] Brochu et al., "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning" (2010). [2] Shahriari et al., "Taking the Human Out of the Loop: A Review of Bayesian Optimization" (2015). [3] Snoek et al., "Practical Bayesian Optimization of Machine Learning Algorithms", NIPS (2012). [4] Rios and Sahinidis, "Derivative-free optimization: a review of algorithms and comparison of software implementations", Journal of Global Optimization (2013).
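To make the surrogate-model idea concrete, here is a hedged, minimal Bayesian-optimization loop written from scratch with scikit-learn's Gaussian process and an expected-improvement acquisition (the objective, bounds, kernel and number of iterations are made-up illustration choices, a sketch rather than a production implementation):

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_f(x):
    # stand-in for a costly simulation; we pretend each call takes minutes
    return np.sin(3 * x) + 0.1 * x ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(4, 1))  # a few initial design points
y = expensive_f(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(20):                            # 20 more expensive evaluations
    gp.fit(X, y)
    cand = np.linspace(-3, 3, 500).reshape(-1, 1)
    mu, sd = gp.predict(cand, return_std=True)
    imp = y.min() - mu                         # improvement over current best (minimization)
    z = imp / np.maximum(sd, 1e-9)
    ei = imp * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
    x_next = cand[[np.argmax(ei)]]             # candidate maximizing the acquisition
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_f(x_next).ravel())

print(X[np.argmin(y)], y.min())                # best point found so far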
2,754
Optimization when Cost Function Slow to Evaluate
I don't know the algorithms myself, but I believe the kind of optimization algorithm that you are looking for is derivative-free optimization, which is used when the objective is costly or noisy. For example, take a look at this paper (Björkman, M. & Holmström, K. "Global Optimization of Costly Nonconvex Functions Using Radial Basis Functions." Optimization and Engineering (2000) 1: 373. doi:10.1023/A:1011584207202) whose abstract seems to indicate this is exactly what you want: The paper considers global optimization of costly objective functions, i.e. the problem of finding the global minimum when there are several local minima and each function value takes considerable CPU time to compute. Such problems often arise in industrial and financial applications, where a function value could be a result of a time-consuming computer simulation or optimization. Derivatives are most often hard to obtain, and the algorithms presented make no use of such information.
2,755
Optimization when Cost Function Slow to Evaluate
You are not alone. Expensive-to-evaluate systems are very common in engineering, such as finite element method (FEM) models and computational fluid dynamics (CFD) models. Optimization of these computationally expensive models is much needed and challenging, because evolutionary algorithms often need tens of thousands of evaluations of the problem, which is not an option for expensive-to-evaluate problems. Fortunately, there are lots of methods (algorithms) available to solve this problem. As far as I know, most of them are based on surrogate models (metamodels). Some are listed below.
Efficient Global Optimization (EGO, also known as Bayesian optimization) [1]. The EGO algorithm has been mentioned above and may be the most famous surrogate-based optimization algorithm. It is based on the Kriging model and an infill criterion called the expected improvement (EI) function. R packages including the EGO algorithm are DiceOptim and DiceKriging.
Mode-pursuing sampling (MPS) method [2]. The MPS algorithm is built on the RBF model and an adaptive sampling strategy is used to pick candidate points. The MATLAB code is published by the authors at http://www.sfu.ca/~gwa5/software.html. The MPS algorithm may need more evaluations to get the optimum, but can handle more complicated problems than the EGO algorithm, from my personal experience.
Ensemble surrogate models by Juliane Müller [3]. She used multiple surrogates to enhance the searching ability. The MATLAB toolbox MATSuMoTo is available at https://github.com/Piiloblondie/MATSuMoTo.
In summary, these surrogate-based optimization algorithms try to find the global optimum of the problem using as few evaluations as possible. This is achieved by making full use of the information that the surrogate (or surrogates) provides. Reviews on optimization of computationally expensive problems are in [4-6].
References:
[1] D. R. Jones, M. Schonlau, and W. J. Welch, "Efficient global optimization of expensive black-box functions," Journal of Global Optimization, vol. 13, pp. 455-492, 1998.
[2] L. Wang, S. Shan, and G. G. Wang, "Mode-pursuing sampling method for global optimization on expensive black-box functions," Engineering Optimization, vol. 36, pp. 419-438, 2004.
[3] J. Müller, "Surrogate Model Algorithms for Computationally Expensive Black-Box Global Optimization Problems," Tampere University of Technology, 2012.
[4] G. G. Wang and S. Shan, "Review of metamodeling techniques in support of engineering design optimization," Journal of Mechanical Design, vol. 129, pp. 370-380, 2007.
[5] A. I. Forrester and A. J. Keane, "Recent advances in surrogate-based optimization," Progress in Aerospace Sciences, vol. 45, pp. 50-79, 2009.
[6] F. A. C. Viana, T. W. Simpson, V. Balabanov, and V. Toropov, "Metamodeling in Multidisciplinary Design Optimization: How Far Have We Really Come?," AIAA Journal, vol. 52, pp. 670-690, 2014.
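The common idea behind these surrogate-based methods can be sketched roughly as follows: fit a cheap model (here an RBF interpolant, loosely in the spirit of [2], though not the authors' code) to the points evaluated so far, let the cheap model propose the next point, and spend the expensive evaluations only there. Everything below (objective, bounds, sampling rule) is an illustrative assumption of mine.

import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def expensive_f(x):                       # cheap stand-in for a costly model
    return float(np.sum(x**2) + np.sin(3 * x[0]))

rng = np.random.default_rng(1)
dim, lo, hi = 2, -2.0, 2.0
X = rng.uniform(lo, hi, size=(10, dim))   # initial design
y = np.array([expensive_f(x) for x in X])

for it in range(15):                      # 15 additional expensive evaluations
    surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline")
    # minimize the cheap surrogate from a random start (multi-start in practice)
    x0 = rng.uniform(lo, hi, size=dim)
    res = minimize(lambda x: surrogate(x.reshape(1, -1))[0], x0,
                   bounds=[(lo, hi)] * dim, method="L-BFGS-B")
    # alternate exploitation with a random exploration point
    x_new = res.x if it % 2 == 0 else rng.uniform(lo, hi, size=dim)
    # avoid exact duplicates, which would make the interpolation matrix singular
    if np.min(np.linalg.norm(X - x_new, axis=1)) > 1e-8:
        X = np.vstack([X, x_new])
        y = np.append(y, expensive_f(x_new))

print("best point:", X[np.argmin(y)], "best value:", y.min())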
2,756
Optimization when Cost Function Slow to Evaluate
The two simple strategies that I have successfully used in the past are:
If possible, try to find a simpler surrogate function approximating your full cost-function evaluation -- typically an analytical model replacing a simulation. Optimize this simpler function, then validate and fine-tune the resulting solution with your exact cost function.
If possible, try to find a way to evaluate an exact "delta-cost" function -- exact, as opposed to being an approximation obtained from the gradient. That is, from an initial 15-dimensional point for which you have the full cost evaluated, find a way to derive how the cost would change by making a small change to one (or several) of the 15 components of your current point. You would need to exploit the localization properties of a small perturbation, if any, in your particular case, and you would likely need to define, cache, and update an internal state variable along the way (see the sketch below).
Those strategies are very case specific; I don't know whether they are applicable in your case or not, sorry if they are not. Both could be applicable (as they were in my use cases): apply the "delta-cost" strategy to a simpler analytical model -- performance may improve by several orders of magnitude.
Another strategy would be to use a second-order method that typically tends to reduce the number of iterations (but each iteration is more complex) -- e.g., the Levenberg–Marquardt algorithm. But considering you don't seem to have a way to directly and efficiently evaluate the gradient, this is probably not a viable option in this case.
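To illustrate the delta-cost idea on a deliberately trivial, separable cost of my own choosing (not the poster's application): if you cache the current total, a one-coordinate change can be costed exactly in O(1) instead of re-evaluating everything.

import numpy as np

rng = np.random.default_rng(0)
t = rng.normal(size=15)                  # fixed targets (stand-in for problem data)
x = np.zeros(15)                         # current 15-dimensional point
cost = float(np.sum((x - t) ** 2))       # full evaluation done only once

def delta_cost(i, new_value):
    """Exact change in cost if x[i] were set to new_value."""
    return (new_value - t[i]) ** 2 - (x[i] - t[i]) ** 2

def apply_move(i, new_value):
    global cost
    cost += delta_cost(i, new_value)     # O(1) update of the cached state
    x[i] = new_value

# greedy coordinate search using only delta evaluations
for i in range(15):
    candidate = x[i] + 0.5
    if delta_cost(i, candidate) < 0:
        apply_move(i, candidate)

assert np.isclose(cost, np.sum((x - t) ** 2))   # cached state matches the full cost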
2,757
Optimization when Cost Function Slow to Evaluate
There are many tricks used in stochastic gradient descent that can also be applied to objective-function evaluation. The overall idea is to approximate the objective function using a subset of the data. My answers in these two posts discuss why stochastic gradient descent works: the intuition behind it is to approximate the gradient using a subset of the data. How could stochastic gradient descent save time comparing to standard gradient descent? How to run linear regression in a parallel/distributed way for big data setting? The same trick applies to the objective function. Let's still use linear regression as an example: suppose the objective function is $\|Ax-b\|^2$. If $A$ is huge, say a trillion rows, evaluating it once will take a very long time. We can always use a subset of $A$ and $b$ to approximate the objective function, which is the squared loss on the subset of the data.
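A tiny numerical illustration of the subsampling idea, with made-up sizes: the mean squared loss on a random few percent of the rows is an unbiased and usually very close estimate of the full loss.

import numpy as np

rng = np.random.default_rng(0)
n, d = 200_000, 15
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true + rng.normal(scale=0.1, size=n)

x = rng.normal(size=d)                    # some candidate parameter vector

full_loss = np.mean((A @ x - b) ** 2)     # expensive: touches every row

idx = rng.choice(n, size=5_000, replace=False)      # 2.5% subsample
subset_loss = np.mean((A[idx] @ x - b[idx]) ** 2)   # cheap estimate

print(full_loss, subset_loss)             # the two numbers should be close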
2,758
What method can be used to detect seasonality in data?
A really good way to find periodicity in any regular series of data is to inspect its power spectrum after removing any overall trend. (This lends itself well to automated screening when the total power is normalized to a standard value, such as unity.) The preliminary trend removal (and optional differencing to remove serial correlation) is essential to avoid confounding periods with other behaviors.
The power spectrum is the discrete Fourier transform of the autocovariance function of an appropriately smoothed version of the original series. If you think of the time series as sampling a physical waveform, you can estimate how much of the wave's total power is carried within each frequency. The power spectrum (or periodogram) plots the power versus frequency. Cyclic (that is, repetitive or seasonal) patterns will show up as large spikes located at their frequencies.
As an example, consider this (simulated) time series of residuals from a daily measurement taken for one year (365 values). The values fluctuate around $0$ without any evident trends, showing that all important trends have been removed. The fluctuation appears random: no periodicity is apparent.
Here's another plot of the same data, drawn to help us see possible periodic patterns. If you look really hard, you might be able to discern a noisy but repetitive pattern that occurs 11 to 12 times. The longish sequences of above-zero and below-zero values at least suggest some positive autocorrelation, showing this series is not completely random.
Here's the periodogram, shown for periods up to 91 (one-quarter of the total series length). It was constructed with a Welch window and normalized to unit area (for the entire periodogram, not just the part shown here). The power looks like "white noise" (small random fluctuations) plus two prominent spikes. They're hard to miss, aren't they? The larger occurs at a period of 12 and the smaller at a period of 52. This method has thereby detected a monthly cycle and a weekly cycle in these data. That's really all there is to it. To automate detection of cycles ("seasonality"), just scan the periodogram (which is a list of values) for relatively large local maxima.
It's time to reveal how these data were created. The values are generated from a sum of two sine waves, one with frequency 12 (of squared amplitude 3/4) and another with frequency 52 (of squared amplitude 1/4). These are what the spikes in the periodogram detected. Their sum is shown as the thick black curve. IID Normal noise of variance 2 was then added, as shown by the light gray bars extending from the black curve to the red dots. This noise introduced the low-level wiggles at the bottom of the periodogram, which otherwise would just be a flat 0. Fully two-thirds of the total variation in the values is non-periodic and random, which is very noisy: that's why it's so difficult to make out the periodicity just by looking at the dots. Nevertheless (in part because there's so much data) finding the frequencies with the periodogram is easy and the result is clear.
Instructions and good advice for computing periodograms appear on the Numerical Recipes site: look for the section on "power spectrum estimation using the FFT." R has code for periodogram estimation. These illustrations were created in Mathematica 8; the periodogram was computed with its "Fourier" function.
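The figures above were produced in Mathematica; a rough Python re-creation of the same experiment (my own transcription, with the two sinusoids contributing variances 3/4 and 1/4 and noise variance 2, roughly as described) might look like this:

import numpy as np
from scipy.signal import periodogram

n = 365
t = np.arange(n)
rng = np.random.default_rng(0)
signal = (np.sqrt(1.5) * np.sin(2 * np.pi * 12 * t / n)    # variance 3/4, 12 cycles/year
          + np.sqrt(0.5) * np.sin(2 * np.pi * 52 * t / n)) # variance 1/4, 52 cycles/year
series = signal + rng.normal(scale=np.sqrt(2), size=n)     # two-thirds of the variance is noise

freqs, power = periodogram(series, detrend="linear")
top2 = np.argsort(power)[-2:]                # crude: grab the two biggest spikes
print(sorted(np.round(freqs[top2] * n)))     # expect [12.0, 52.0]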
2,759
What method can be used to detect seasonality in data?
Here's an example using monthly data on log unemployment claims from a city in New Jersey (from Stata, only because that's what I analyzed these data in originally). The heights of the lines indicate the correlation between a variable and the sth lag of itself; the gray area gives you a sense of whether this correlation is significant (this range is a guide only and isn't the most reliable way to test the significance). If this correlation is high, there is evidence of serial correlation. Note the humps that occur around periods 12, 24, and 36. Since this is monthly data, this suggests that the correlation gets stronger when you look at periods exactly 1, 2, or 3 years previous. This is evidence of monthly seasonality. You can test these relationships statistically by regressing the variable on dummy variables indicating the seasonality component---here, month dummies. You can test the joint significance of those dummies to test for seasonality. This procedure isn't quite right, as the test requires that the error terms not be serially correlated. So, before testing these seasonality dummies, we need to remove the remaining serial correlation (typically by including lags of the variable). There may be pulses, breaks, and all the other time series problems that you need to correct as well to get the appropriate results from the test. You didn't ask about those, so I won't go into detail (plus, there are a lot of CV questions on those topics). (Just to feed your curiosity, this series requires the month dummies, a single lag of itself, and a shift component to get rid of the serial correlation.)
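For readers not using Stata, the same workflow (look at the ACF, add a lag to soak up serial correlation, then jointly test month dummies) might be sketched in Python with statsmodels as follows; the data here are simulated and the variable names are my own.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
months = pd.date_range("2005-01-01", periods=120, freq="MS")
y = pd.Series(0.2 * np.sin(2 * np.pi * months.month / 12)
              + rng.normal(scale=0.1, size=120), index=months)

print(sm.tsa.acf(y, nlags=36)[[12, 24, 36]])   # humps at the seasonal lags

df = pd.DataFrame({"y": y, "lag1": y.shift(1),
                   "month": y.index.month}).dropna()
X = pd.get_dummies(df["month"], prefix="m", drop_first=True).astype(float)
X["lag1"] = df["lag1"]
X = sm.add_constant(X)
fit = sm.OLS(df["y"], X).fit()

# joint F-test that all 11 month dummies are zero (i.e., no seasonality)
month_cols = [c for c in X.columns if c.startswith("m_")]
print(fit.f_test(", ".join(f"{c} = 0" for c in month_cols)))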
2,760
What method can be used to detect seasonality in data?
Seasonality can and does often change over time, so summary measures can be quite inadequate to detect structure. One needs to test for transience in ARIMA coefficients and often for changes in the “seasonal dummies”. For example, over a 10-year horizon there may not have been a June effect for the first k years, but there is evidence of a June effect in the last 10-k years. A single composite June effect might be non-significant since the effect was not constant over time. In a similar manner a seasonal ARIMA component may have also changed. Care should be taken to include local level shifts and/or local time trends while ensuring that the variance of the errors has remained constant over time. One should not evaluate transformations like GLS/weighted least squares or power transformations like logs/square roots, etc. on the original data but on the errors from a tentative model. The Gaussian assumptions have nothing whatsoever to do with the observed data but everything to do with the errors from the model. This is due to the underpinnings of the statistical tests, which use the ratio of a non-central chi-square variable to a central chi-square variable. If you wanted to post an example series from your world, I would be glad to provide you and the list a thorough analysis leading to the detection of the seasonal structure.
2,761
What method can be used to detect seasonality in data?
The continuous wavelet transform can show the seasonality as well. Because the periodogram assumes that the seasonality is stationary, the wavelet is better suited than the periodogram when the seasonality changes over time. Just as the periodogram decomposes the time series into sine or cosine waves of different frequencies and calculates the power at each frequency, the continuous wavelet transform decomposes the time series into Morlet wavelets of different frequencies and calculates the power of the time series against each frequency.
This is an example of a wavelet spectrum. We can see there is a strong signal at a frequency of 0.02/kyr during 0-400 kyr.
One issue with the wavelet is that, since the data length is not enough to calculate the wavelet at the ends of the time series (for example, the first 100 days cannot carry a 500-day cycle), the wavelet spectrum is not accurate there (this is also called the edge effect). A cone of influence is drawn (the dashed line in the wavelet above), and only the spectrum within the cone of influence is reliable.
Helpful resources:
https://en.wikipedia.org/wiki/Continuous_wavelet_transform
https://www.youtube.com/watch?v=GV34hKXDw_c&t=189s
Picture comes from: http://mres.uni-potsdam.de/index.php/2017/03/02/calculating-the-continuous-1-d-wavelet-transform-with-the-new-function-cwt-update/
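The figure above comes from a MATLAB function; as a rough Python counterpart, PyWavelets offers a continuous wavelet transform with a Morlet wavelet. The series below is made up so that its period changes halfway through, which is exactly the situation where the wavelet is more informative than a single periodogram.

import numpy as np
import pywt

t = np.arange(1024)
series = np.where(t < 512,
                  np.sin(2 * np.pi * t / 32),    # period 32 in the first half
                  np.sin(2 * np.pi * t / 64))    # period 64 in the second half

scales = np.arange(2, 128)
coefs, freqs = pywt.cwt(series, scales, "morl", sampling_period=1.0)
power = np.abs(coefs) ** 2                       # scalogram: scale x time

# the dominant scale at each time point tracks the changing period
dominant = scales[np.argmax(power, axis=0)]
print(dominant[100], dominant[900])              # smaller scale early, larger late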
2,762
What method can be used to detect seasonality in data?
Charlie's answer is good, and it's where I'd start. If you don't want to use ACF graphs, you could create k-1 dummy variables for the k time periods present. Then you can see if the dummy variables are significant in a regression with the dummy variables (and likely a trend term).
If your data is quarterly:
dummy Q2 is 1 if this is the second quarter, else 0
dummy Q3 is 1 if this is the third quarter, else 0
dummy Q4 is 1 if this is the fourth quarter, else 0
Note quarter 1 is the base case (all 3 dummies zero).
You might want to also check out "time series decomposition" in Minitab -- often called "classical decomposition". In the end, you may want to use something more modern, but this is a simple place to start.
2,763
What method can be used to detect seasonality in data?
I'm a bit new to R myself, but my understanding of the ACF function is that if the vertical line goes above the top dashed line or below the bottom dashed line, there is some autoregression (including seasonality). Try creating a vector of sine values and running the ACF on it to see what a seasonal pattern looks like.
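Following that suggestion with numbers of my own: a noisy sine wave with period 12 produces ACF spikes at lags 12, 24, ... that stick out well beyond the approximate confidence band (the dashed lines that R's acf() draws). The sketch below uses statsmodels rather than R, but the idea is the same.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
t = np.arange(240)
x = np.sin(2 * np.pi * t / 12) + rng.normal(scale=0.5, size=t.size)

r = sm.tsa.acf(x, nlags=36)
band = 1.96 / np.sqrt(t.size)                  # the approximate confidence band
print(np.where(np.abs(r[1:]) > band)[0] + 1)   # lags with "significant" autocorrelation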
2,764
Is it meaningful to calculate Pearson or Spearman correlation between two Boolean vectors?
The Pearson and Spearman correlation are defined as long as you have some $0$s and some $1$s for both of two binary variables, say $y$ and $x$. It is easy to get a good qualitative idea of what they mean by thinking of a scatter plot of the two variables. Clearly, there are only four possibilities $(0,0), (0,1), (1, 0), (1,1)$ (so that jittering to shake identical points apart for visualization is a good idea). For example, in any situation where the two vectors are identical, subject to having some 0s and some 1s in each, then by definition $y = x$ and the correlation is necessarily $1$. Similarly, it is possible that $y = 1 -x$ and then the correlation is $-1$. For this set-up, there is no scope for monotonic relations that are not linear. When taking ranks of $0$s and $1$s under the usual midrank convention the ranks are just a linear transformation of the original $0$s and $1$s and the Spearman correlation is necessarily identical to the Pearson correlation. Hence there is no reason to consider Spearman correlation separately here, or indeed at all. Correlations arise naturally for some problems involving $0$s and $1$s, e.g. in the study of binary processes in time or space. On the whole, however, there will be better ways of thinking about such data, depending largely on the main motive for such a study. For example, the fact that correlations make much sense does not mean that linear regression is a good way to model a binary response. If one of the binary variables is a response, then most statistical people would start by considering a logit model.
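A quick numerical check of the point that Spearman adds nothing for 0/1 data, using toy vectors of my own:

import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=200)
y = np.where(rng.random(200) < 0.8, x, 1 - x)   # y agrees with x 80% of the time

r_pearson, _ = pearsonr(x, y)
r_spearman, _ = spearmanr(x, y)
print(r_pearson)    # roughly 0.6
print(r_spearman)   # identical to the Pearson value, since midranks are a linear map of 0/1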
2,765
Is it meaningful to calculate Pearson or Spearman correlation between two Boolean vectors?
There are specialised similarity metrics for binary vectors, such as: Jaccard-Needham, Dice, Yule, Russell-Rao, Sokal-Michener, Rogers-Tanimoto, Kulzinsky, etc. For details, see here.
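Several of these are available in scipy.spatial.distance as dissimilarities (so similarity = 1 - distance for the Jaccard and Dice cases); a small illustration with made-up boolean vectors:

import numpy as np
from scipy.spatial import distance

a = np.array([1, 1, 0, 1, 0, 1], dtype=bool)
b = np.array([1, 0, 0, 1, 0, 1], dtype=bool)

print(1 - distance.jaccard(a, b))    # Jaccard similarity
print(1 - distance.dice(a, b))       # Dice similarity
print(distance.russellrao(a, b))     # Russell-Rao dissimilarity
print(distance.yule(a, b))           # Yule dissimilarity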
2,766
Is it meaningful to calculate Pearson or Spearman correlation between two Boolean vectors?
I would not advise using Pearson's correlation coefficient for binary data; see the following counter-example:

set.seed(10)
a = rbinom(n=100, size=1, prob=0.9)
b = rbinom(n=100, size=1, prob=0.9)

In most cases both give a 1:

> table(a,b)
   b
a    0  1
  0  0  3
  1  9 88

but the correlation does not show this:

> cor(a, b, method="pearson")
[1] -0.05530639

A binary similarity measure such as the Jaccard index shows, however, a much higher association:

install.packages("clusteval")
library('clusteval')
> cluster_similarity(a,b, similarity="jaccard", method="independence")
[1] 0.7854966

Why is this? See here the simple bivariate regression:

plot(jitter(a, factor = .25), jitter(b, factor = .25), xlab="a", ylab="b",
     pch=15, col="blue", ylim=c(-0.05,1.05), xlim=c(-0.05,1.05))
abline(lm(a~b), lwd=2, col="blue")
text(.5,.9,expression(paste(rho, " = -0.055")))

Plot below (small noise added to make the number of points clearer).
2,767
Is it meaningful to calculate Pearson or Spearman correlation between two Boolean vectors?
Arne's response above isn't quite right. Correlation is a measure of dependence between variables. The samples A and B are both independent draws, although they are from the same distribution, so we should expect ~0 correlation. Running a similar simulation and creating a new variable c that is dependent on the value of a:

from scipy import stats

a = stats.bernoulli(p=.9).rvs(10000)
b = stats.bernoulli(p=.9).rvs(10000)

dep = .9
c = []
for i in a:
    if i == 0:
        # note this would be quicker with an np.random.choice()
        c.append(stats.bernoulli(p=1-dep).rvs(1)[0])
    else:
        c.append(stats.bernoulli(p=dep).rvs(1)[0])

We can see that

stats.pearsonr(a,b)   ~= 0
stats.pearsonr(a,c)   ~= 0.6
stats.spearmanr(a,c)  ~= 0.6
stats.kendalltau(a,c) ~= 0.6
2,768
Is it meaningful to calculate Pearson or Spearman correlation between two Boolean vectors?
A possible issue with using the Pearson correlation for two dichotomous variables is that the correlation may be sensitive to the "levels" of the variables, i.e. the rates at which the variables are 1. Specifically, suppose that you think the two dichotomous variables (X,Y) are generated by underlying latent continuous variables (X*,Y*). Then it is possible to construct a sequence of examples where the underlying variables (X*,Y*) have the same Pearson correlation in each case, but the Pearson correlation between (X,Y) changes.
The example below in R shows such a sequence. The example shifts the continuous latent (X*,Y*) distribution to the right along the x-axis (not changing the shape of the latent distribution at all), and finds that the Pearson correlation between (X,Y) decreases as we do so. For this reason, you might consider using the tetrachoric correlation for dichotomous data, if it is feasible to estimate. This question has more details on the polychoric correlation, which is a generalization of the tetrachoric.

# consider two dichotomous variables x and y that are each generated by an
# underlying common standard normal factor and a unique standard normal
# factor, plus x has a shift u that makes it more common than 50:50
set.seed(12345)
library(polycor)

N <- 10000
U <- seq(0, 1.2, 0.1)
dout <- list()
for (u in U) {
  print(u)  # u is the shift
  common <- rnorm(N)  # common factor
  xunderlying <- common*0.7 + rnorm(N)*0.3 + u
  yunderlying <- common*0.7 + rnorm(N)*0.3
  plot(xunderlying, yunderlying)
  abline(v = mean(xunderlying), col='red')
  abline(h = mean(yunderlying), col='red')
  x <- xunderlying > 0  # would be 50:50 chance if u = 0
  y <- yunderlying > 0
  print(table(x, y))
  # obtain tetrachoric correlation using polycor package
  p <- polycor::polychor(x, y, ML=TRUE, std.err = TRUE)
  dout <- rbind(dout,
                data.frame(U = u,
                           pctx = mean(x),   # percent of x that is TRUE, used below
                           pcty = mean(y),
                           cor = cor(x, y),  # pearson correlation, used below
                           polychor_rho = p$rho,  # tetrachoric correlation, used below
                           underlying_cor = cor(xunderlying, yunderlying),  # underlying correlation, used below
                           polychor_xthresh = p$row.cuts,
                           polychor_ythresh = p$col.cuts))
}

# plot underlying cor as a function of pctx.
# does not depend on pctx
plot(dout$pctx, dout$underlying_cor, ylim = c(0,1))

# plot pearson correlation as a function of pctx (which is determined by u).
# decreasing in pctx!
plot(dout$pctx, dout$cor, ylim = c(0,1))

# plot estimated tetrachoric correlation as a function of pctx.
# does not depend on pctx
plot(dout$pctx, dout$polychor_rho, ylim = c(0,1))
2,769
What are good initial weights in a neural network?
I assume you are using logistic neurons, and that you are training by gradient descent/back-propagation. The logistic function is close to flat for large positive or negative inputs. The derivative at an input of $2$ is about $1/10$, but at $10$ the derivative is about $1/22000$. This means that if the input of a logistic neuron is $10$ then, for a given training signal, the neuron will learn about $2200$ times slower than if the input were $2$.
If you want the neuron to learn quickly, you either need to produce a huge training signal (such as with a cross-entropy loss function) or you want the derivative to be large. To make the derivative large, you set the initial weights so that you often get inputs in the range $[-4,4]$.
The initial weights you give might or might not work. It depends on how the inputs are normalized. If the inputs are normalized to have mean $0$ and standard deviation $1$, then a random sum of $d$ terms with weights uniform on $(\frac{-1}{\sqrt{d}},\frac{1}{\sqrt{d}})$ will have mean $0$ and variance $\frac{1}{3}$, independent of $d$. The probability that you get a sum outside of $[-4,4]$ is small. That means that as you increase $d$, you are not causing the neurons to start out saturated so that they don't learn. With inputs which are not normalized, those weights may not be effective at avoiding saturation.
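A quick simulation of that variance claim (dimensions picked arbitrarily): the pre-activation variance stays near 1/3 and essentially never leaves [-4, 4], whatever d is.

import numpy as np

rng = np.random.default_rng(0)
for d in (10, 100, 1000):
    x = rng.normal(size=(100_000, d))                      # normalized inputs
    w = rng.uniform(-1/np.sqrt(d), 1/np.sqrt(d), size=d)   # one unit's weights
    a = x @ w                                              # pre-activation sums
    print(d, a.var(), np.mean(np.abs(a) > 4))              # variance ~1/3, |a|>4 essentially never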
2,770
What are good initial weights in a neural network?
[1] addresses the question. First, weights shouldn't be set to zeros, in order to break the symmetry when backpropagating:

Biases can generally be initialized to zero but weights need to be initialized carefully to break the symmetry between hidden units of the same layer. Because different output units receive different gradient signals, this symmetry breaking issue does not concern the output weights (into the output units), which can therefore also be set to zero.

Some initialization strategies:
[2] and [3] recommend scaling by the inverse of the square root of the fan-in.
Glorot and Bengio (2010) and the Deep Learning Tutorials use a combination of the fan-in and fan-out:
for hyperbolic tangent units: sample a Uniform(-r, r) with $r=\sqrt{\frac{6}{\text{fan-in}+\text{fan-out}}}$ (fan-in is the number of inputs of the unit);
for sigmoid units: sample a Uniform(-r, r) with $r=4 \sqrt{\frac{6}{\text{fan-in}+\text{fan-out}}}$ (fan-in is the number of inputs of the unit).
In the case of RBMs, a zero-mean Gaussian with a small standard deviation around 0.1 or 0.01 works well (Hinton, 2010) to initialize the weights.
Orthogonal random matrix initialization, i.e. W = np.random.randn(ndim, ndim); u, s, v = np.linalg.svd(W), then use u as your initialization matrix.

Also, unsupervised pre-training may help in some situations:

An important choice is whether one should use unsupervised pre-training (and which unsupervised feature learning algorithm to use) in order to initialize parameters. In most settings we have found unsupervised pre-training to help and very rarely to hurt, but of course that implies additional training time and additional hyper-parameters.

Some ANN libraries also have some interesting lists, e.g. Lasagne:
Constant([val]) Initialize weights with constant value.
Normal([std, mean]) Sample initial weights from the Gaussian distribution.
Uniform([range, std, mean]) Sample initial weights from the uniform distribution.
Glorot(initializer[, gain, c01b]) Glorot weight initialization.
GlorotNormal([gain, c01b]) Glorot with weights sampled from the Normal distribution.
GlorotUniform([gain, c01b]) Glorot with weights sampled from the Uniform distribution.
He(initializer[, gain, c01b]) He weight initialization.
HeNormal([gain, c01b]) He initializer with weights sampled from the Normal distribution.
HeUniform([gain, c01b]) He initializer with weights sampled from the Uniform distribution.
Orthogonal([gain]) Intialize weights as Orthogonal matrix.
Sparse([sparsity, std]) Initialize weights as sparse matrix.

References:
[1] Bengio, Yoshua. "Practical recommendations for gradient-based training of deep architectures." Neural Networks: Tricks of the Trade. Springer Berlin Heidelberg, 2012. 437-478.
[2] LeCun, Y., Bottou, L., Orr, G. B., and Muller, K. (1998a). Efficient backprop. In Neural Networks, Tricks of the Trade.
[3] Glorot, Xavier, and Yoshua Bengio. "Understanding the difficulty of training deep feedforward neural networks." International conference on artificial intelligence and statistics. 2010.
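For concreteness, here is my own plain-NumPy transcription of three of the recipes above (Glorot uniform, He normal, and the SVD-based orthogonal initialization); it is a sketch, not the Lasagne implementations.

import numpy as np

rng = np.random.default_rng(0)

def glorot_uniform(fan_in, fan_out, gain=1.0):
    # Uniform(-r, r) with r = gain * sqrt(6 / (fan_in + fan_out)); gain=4 for sigmoid units
    r = gain * np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-r, r, size=(fan_in, fan_out))

def he_normal(fan_in, fan_out):
    # zero-mean Gaussian with std sqrt(2 / fan_in), suited to ReLU units
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

def orthogonal(fan_in, fan_out):
    # orthogonal rows/columns via SVD of a random Gaussian matrix
    w = rng.normal(size=(fan_in, fan_out))
    u, _, vt = np.linalg.svd(w, full_matrices=False)
    return u if u.shape == (fan_in, fan_out) else vt

W = glorot_uniform(256, 128)
print(W.std(), np.sqrt(2.0 / (256 + 128)))   # empirical std vs the theoretical value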
2,771
What are good initial weights in a neural network?
The following explanation is taken from the book Neural Networks for Pattern Recognition by Christopher Bishop. Great book! Assume you have previously whitened the inputs to the input units, i.e. $$\langle x_{i}\rangle = 0 \quad\text{and}\quad \langle x_{i}^{2}\rangle = 1.$$ The question is: how best to choose the weights? The idea is to pick the values of the weights at random from a distribution that helps the optimization process converge to a meaningful solution. For the activation of the units in the first layer you have $$y = g(a), \quad\text{where}\quad a = \sum_{i=0}^{d}w_{i}x_{i}.$$ Now, since you choose the weights independently of the inputs, $$\langle a\rangle = \sum_{i=0}^{d}\langle w_{i}x_{i}\rangle = \sum_{i=0}^{d}\langle w_{i}\rangle\langle x_{i}\rangle = 0$$ and $$ \langle a^{2}\rangle = \left\langle\left(\sum_{i=0}^{d}w_{i}x_{i}\right) \left(\sum_{j=0}^{d}w_{j}x_{j}\right)\right\rangle = \sum_{i=0}^{d}\langle w_{i}^{2}\rangle\langle x_{i}^{2}\rangle = \sigma^{2}d, $$ where $\sigma^{2}$ is the variance of the weight distribution. To derive this result you need to recall that the weights are initialized independently of each other, i.e. $$\langle w_{i}w_{j}\rangle = \sigma^{2}\delta_{ij}.$$ Keeping $\langle a^{2}\rangle$ of order one (so that sigmoidal units are neither saturated nor effectively linear) then suggests choosing $\sigma \propto 1/\sqrt{d}$, i.e. initial weights on the order of $1/\sqrt{d}$.
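As a quick numerical check of the variance result above, here is a minimal NumPy sketch (the dimensions and the value of sigma are arbitrary choices):

import numpy as np

rng = np.random.default_rng(0)
d, n_samples, sigma = 100, 50_000, 0.1

x = rng.standard_normal((n_samples, d))          # whitened inputs: mean 0, variance 1
w = rng.normal(0.0, sigma, size=(n_samples, d))  # fresh weight draws with standard deviation sigma
a = (w * x).sum(axis=1)                          # a = sum_i w_i x_i

print(round(a.mean(), 2), round(a.var(), 2))     # ~0 and ~sigma**2 * d = 1.0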
2,772
What are good initial weights in a neural network?
Well, just as an update: Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification by He et al. introduced an initialization designed specifically for ReLU units: draw the weights from a zero-mean Gaussian with standard deviation sqrt(2.0/n) (in NumPy terms, w = np.random.randn(n) * np.sqrt(2.0/n)), where n is the number of inputs to the layer. I have seen this initialization used in many recent works (also with ReLU). They actually show how this starts to reduce the error rate much faster than the (-1/n, 1/n) initialization that you mentioned. For the thorough explanation see the paper, whose figures show how much faster the error converges with this scheme.
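A minimal NumPy sketch of that scaling rule, assuming fan-in scaling as in the paper (the layer sizes are placeholders and the helper name is mine):

import numpy as np

def he_normal(fan_in, fan_out, rng=np.random.default_rng()):
    # Zero-mean Gaussian with standard deviation sqrt(2 / fan_in), suggested for ReLU layers
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

W = he_normal(784, 256)
print(round(W.std(), 4))   # close to sqrt(2/784) ~ 0.0505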
2,773
What are good initial weights in a neural network?
The idea is that you want to initialize the weights in a way that ensures good forward and backward data flow through the network. That is, you don't want the activations to be consistently shrinking or growing as you progress through the network. This image shows the activations of a 5-layer ReLU multi-layer perceptron under 3 different initialization strategies after one pass of MNIST through the network. In all three cases weights are drawn from a zero-centered normal distribution, which is determined by its standard deviation. You can see that if the initial weights are too small (the standard deviation is small) the activations get choked, and that if they are too large the activations explode. The middle value, which is approximately right, can be found by setting the weights such that the variance of the activations and of the gradient updates stays approximately the same as you pass through the network.

I wrote a blog post about weight initialization that goes into more detail, but the basic idea is as follows. If $x^{(i)}$ denotes the activations of the $i$-th layer, $n_i$ the size of the layer, and $w^{(i)}$ the weights connecting them to the $(i+1)$-st layer, then one can show that for activation functions $f$ with $f'(s) \approx 1$ we have $$ \text{Var}(x^{(i+1)}) = n_i \text{Var}(x^{(i)}) \text{Var}(w^{(i)}). $$ In order to achieve $\text{Var}(x^{(i+1)}) = \text{Var}(x^{(i)})$ we therefore have to impose the condition $$ \text{Var}( w^{(i)}) = \frac{1}{n_i}\,. $$ If we denote $\frac{\partial L}{\partial x_j^{(i)}}$ by $\Delta_j^{(i)}$, on the backward pass we similarly have $$ \text{Var}(\Delta^{(i)} ) = n_{i+1} \text{Var}(\Delta^{(i+1)}) \text{Var}(w^{(i)})\,, $$ so keeping the gradient variance constant requires $\text{Var}(w^{(i)}) = \frac{1}{n_{i+1}}$. Unless $n_i = n_{i+1}$, we have to compromise between these two conditions, and a reasonable choice is the harmonic mean $$ \text{Var}(w^{(i)}) = \frac{2}{n_i+n_{i+1}}\,. $$

If we sample weights from a normal distribution $N(0, \sigma)$ we satisfy this condition with $\sigma = \sqrt{\frac{2}{n_i + n_{i+1}}}$. For a uniform distribution $U(-a, a)$ we should take $a = \sqrt{\frac{6}{n_i+n_{i+1}}}$ since $\text{Var}\left( U(-a,a) \right) = a^2/3$. We have thus arrived at Glorot initialization. This is the default initialization strategy for dense and 2D convolution layers in Keras, for instance.

Glorot initialization works pretty well for linear (identity) and $\tanh$ activations, but doesn't do as well for $\text{ReLU}$. Luckily, since $f(s) = \text{ReLU}(s)$ just zeroes out negative inputs, it roughly removes half the variance, and this is easily amended by multiplying one of our conditions above by two: $$ \text{Var}(w^{(i)}) = \frac{2}{n_i}\,. $$
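Here is a minimal NumPy sketch of the forward-pass part of this argument: push random data through a deep ReLU stack under a too-small, a roughly variance-preserving, and a too-large weight scale (all sizes and scales are arbitrary placeholders):

import numpy as np

rng = np.random.default_rng(0)
n, depth = 512, 10
x = rng.standard_normal((1000, n))

for sigma in (0.01, np.sqrt(2.0 / n), 0.2):   # too small, variance-preserving for ReLU, too large
    h = x
    for _ in range(depth):
        W = rng.normal(0.0, sigma, size=(n, n))
        h = np.maximum(h @ W, 0.0)            # ReLU layer
    print(round(sigma, 4), h.std())           # activations vanish, stay O(1), or blow up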
2,774
What are good initial weights in a neural network?
One other technique that alleviates the problem of weight initialization is Batch Normalization. It acts to standardize the mean and variance of each unit in order to stabilize learning, as described in the original paper. In practice, networks that use Batch Normalization (BN) are significantly more robust to bad initialization. BN works as follows: $$ \mu_B = \frac{1}{m}\sum_{i=1}^{m}x_i \quad\text{and}\quad \sigma_{B}^{2} = \frac{1}{m}\sum_{i=1}^{m}(x_i - \mu_B)^{2}, \\ \hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_{B}^{2} + \epsilon}} \quad\text{and}\quad BN(x_i) = \gamma \hat{x}_i + \beta. $$ We compute the empirical mean and variance for each mini-batch, then standardize the input $x_i$ and form the output $BN(x_i)$ by scaling $\hat{x}_i$ by $\gamma$ and adding $\beta$, both of which are learned during training.

BN introduces two extra parameters ($\gamma$ and $\beta$) per activation that allow $\hat{x}_i$ to have any mean and standard deviation; the reason is that normalizing $x_i$ alone can reduce its expressive power. This new parameterization has better learning dynamics: in the old parameterization the mean of $x_i$ was determined by a complicated interaction between the parameters of all preceding layers, so small changes to the network parameters amplify as the network becomes deeper. In the new parameterization the mean of $\hat{x}_i$ is determined by $\beta$, which we learn along with $\gamma$ during training. Thus, Batch Normalization stabilizes learning.

As a result, Batch Normalization enables faster training by using much higher learning rates and alleviates the problem of bad initialization. BN also makes it possible to use saturating non-linearities by preventing the network from getting stuck in saturation modes. In summary, Batch Normalization is a differentiable transform that introduces normalized activations into the network. In practice, a BN layer can be inserted immediately after a fully connected layer.
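A minimal NumPy sketch of the training-time forward transform above (shapes and values are illustrative, and the learned parameters are simply set to ones and zeros here):

import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    # x: (batch, features); gamma, beta: (features,) learned scale and shift
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(3.0, 5.0, size=(64, 10))                      # badly scaled activations
out = batch_norm_forward(x, np.ones(10), np.zeros(10))
print(out.mean(axis=0).round(2), out.std(axis=0).round(2))   # per-feature mean ~0, std ~1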
2,775
What are good initial weights in a neural network?
There are two distinct ideas in this heuristic:

Initialize the weights to be small - in addition to Douglas Zare's excellent answer about sigmoid activations, the problem is more general. Even when the gradients are of "good" magnitude (e.g., using ReLU activations), training is hampered by big weights. Think about 2 neurons whose true weights should be $(3, -2)$. If you initialize them close to $0$, the maximal "distance" the weights have to traverse is roughly $3.6$ (Euclidean distance; $5$ in Manhattan distance). If instead you initialize them, e.g., from $U(-3,3)$, you run the risk that in the worst case the initial weights are set to $(-3,3)$; in that case the distance the weights have to traverse is roughly $7.8$ (Euclidean; $11$ Manhattan).

Keep the variance of each weight $\propto \frac{1}{d}$ - the input to the next layer will thus have variance $\propto 1$ (as it is a sum of $d$ neurons times their respective weights). Why do we want this? We want to keep the magnitude of the inputs to the layers the same. We don't want inputs from a layer with many hidden units to be much bigger than inputs from a layer with fewer hidden units. If we add a lot of inputs, we want the weights to be relatively smaller in magnitude, and if we're adding fewer inputs we want them to be larger.

The $\frac{1}{3}$ variance constant in the $U(-\frac{1}{\sqrt d}, \frac{1}{\sqrt d})$ heuristic is actually problematic. To keep information flowing we would like $\mathbb V[a_l] = \mathbb V[a_{l-1}]$, i.e. that the variance of the activations stays more or less the same across layers, and similarly for backprop that $\mathbb V[\frac{\partial \mathcal L}{\partial z_l}] = \mathbb V[\frac{\partial \mathcal L}{\partial z_{l+1}}]$, i.e. that the variance of the backprop derivatives stays more or less the same. The $\frac{1}{3}$ factor gets in our way. You can see this in the figures of the original Xavier Glorot init paper: the activations follow $\mathbb V[a_l] \approx \frac{1}{3}\mathbb V[a_{l-1}]$, and the derivatives follow $\mathbb V[\frac{\partial \mathcal L}{\partial z_l}] \approx \frac{1}{3}\mathbb V[\frac{\partial \mathcal L}{\partial z_{l+1}}]$.

In both cases this shrinkage is bad: in the forward pass (activations) it means each neuron is basically computing the same thing, and also that we are not really taking advantage of the activation function's non-linearity ($\tanh$ was used in this network); in the backprop it means we are not really learning in the early layers.

Xavier Glorot init fixed this by changing the distribution to $U(-\frac{\sqrt 3}{\sqrt n}, \frac{\sqrt 3}{\sqrt n})$, which eliminates the $1/3$ factor. Also, since we care about both the previous layer's number of neurons (for the forward prop) and the next layer's number of neurons (for backprop), Xavier Glorot init uses the harmonic mean between them as a compromise. With this initialization the activation and gradient variances of the same network stay roughly constant across layers. If you want to learn more, check out my YouTube videos on the topic: Part 1 and Part 2.
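A rough NumPy illustration of the $1/3$ factor at work in a $\tanh$ network; with equal fan-in and fan-out the Glorot bound reduces to $\sqrt{3/d}$, and all sizes here are placeholders:

import numpy as np

rng = np.random.default_rng(0)
d, depth = 400, 8
x = rng.standard_normal((2000, d))

for bound in (1.0 / np.sqrt(d), np.sqrt(3.0 / d)):   # old heuristic vs. Glorot-style bound
    h = x
    for _ in range(depth):
        W = rng.uniform(-bound, bound, size=(d, d))
        h = np.tanh(h @ W)
    print(round(bound, 4), h.var())   # shrinks by roughly 3x per layer vs. stays on a stable scale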
2,776
Diagnostics for logistic regression?
A few newer techniques I have come across for assessing the fit of logistic regression models come from political science journals:

Greenhill, Brian, Michael D. Ward & Audrey Sacks. 2011. The separation plot: A new visual method for evaluating the fit of binary models. American Journal of Political Science 55(4): 991-1002.
Esarey, Justin & Andrew Pierce. 2012. Assessing fit quality and testing for misspecification in binary-dependent variable models. Political Analysis 20(4): 480-500. Preprint PDF here.

Both of these techniques purport to replace goodness-of-fit tests (like Hosmer & Lemeshow) and to identify potential mis-specification (in particular non-linearity in the variables included in the equation). They are particularly useful because typical R-square measures of fit are frequently criticized.

Both of the papers above use plots of predicted probabilities versus observed outcomes, somewhat avoiding the unclear issue of what a residual is in such models. Examples of residuals could be the contribution to the log-likelihood or Pearson residuals (I believe there are many more, though). Another measure that is often of interest (although not a residual) is the DFBeta (the amount a coefficient estimate changes when an observation is excluded from the model). See the Stata examples on the UCLA page on Logistic Regression Diagnostics, along with other potential diagnostic procedures.

I don't have it handy, but I believe J. Scott Long's Regression Models for Categorical and Limited Dependent Variables goes into sufficient detail on all of these different diagnostic measures in a simple manner.
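The predicted-versus-observed idea behind those plots is straightforward to sketch; here is a minimal Python version on synthetic data, with Pearson residuals computed by hand (the dataset and every number below are purely illustrative, not taken from the papers):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
p = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]

pearson = (y - p) / np.sqrt(p * (1 - p))            # Pearson residuals
print("Pearson residual SD:", round(pearson.std(), 2))

edges = np.quantile(p, np.linspace(0, 1, 11))       # decile bins of predicted probability
bins = np.clip(np.digitize(p, edges[1:-1]), 0, 9)
for b in range(10):                                 # predicted vs. observed, bin by bin
    m = bins == b
    print(f"bin {b}: mean predicted {p[m].mean():.2f}, observed rate {y[m].mean():.2f}")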
2,777
Diagnostics for logistic regression?
The question was not well enough motivated. There has to be a reason to run model diagnostics, such as:

the potential to change the model to make it better;
not knowing which directed tests to use (e.g., tests of non-linearity or interaction);
failing to grasp that changing the model can easily distort statistical inference (standard errors, confidence intervals, $P$-values).

Except for checking things that are orthogonal to the algebraic regression specification (e.g., examining the distribution of residuals in ordinary linear models), model diagnostics can, in my opinion, create as many problems as they solve. This is especially true of the binary logistic model since it has no distributional assumption. So it is usually better to spend time specifying the model, especially to not assume linearity for variables thought to be strong when no prior evidence suggests linearity. On some occasions you can pre-specify a model that must fit, e.g., if the number of predictors is small or you allow all predictors to be nonlinear and (correctly) assume no interactions. Anyone who feels that model diagnostics can be used to change the model should run that process within a bootstrap loop to correctly estimate the induced model uncertainty.
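To make the last point concrete, here is a minimal Python sketch of repeating a data-driven modelling step inside a bootstrap loop so that its effect shows up in the spread of the refitted coefficients; the dataset is synthetic and the selection step is only a placeholder comment:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=8, random_state=1)
rng = np.random.default_rng(1)

coefs = []
for _ in range(200):                          # bootstrap loop
    i = rng.integers(0, len(y), len(y))       # resample rows with replacement
    Xb, yb = X[i], y[i]
    # ...any diagnostics-driven model changes would be repeated here, on (Xb, yb)...
    coefs.append(LogisticRegression(max_iter=1000).fit(Xb, yb).coef_.ravel())

print(np.percentile(coefs, [2.5, 97.5], axis=0))   # rough intervals reflecting the whole process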
2,778
Diagnostics for logistic regression?
This thread is quite old, but I thought it would be useful to add that you can now use the DHARMa R package to transform the residuals of any GL(M)M into a standardized space. Once this is done, you can visually assess / test residual problems such as deviations from the assumed distribution, residual dependency on a predictor, heteroskedasticity or autocorrelation in the usual way. See the package vignette for worked-through examples, as well as other questions on CV here and here.
2,779
Free resources for learning R
Some useful R links (find the link that suits you):

Intro:
for R basics: http://cran.r-project.org/doc/contrib/usingR.pdf
for data manipulation: http://had.co.nz/plyr/plyr-intro-090510.pdf and http://portal.stats.ox.ac.uk/userdata/ruth/APTS2012/APTS.html
Interactive intro to the R programming language: https://www.datacamp.com/courses/introduction-to-r
Application-focused R tutorial: https://www.teamleada.com/tutorials/introduction-to-statistical-programming-in-r
In-browser learning for R: http://tryr.codeschool.com/
with a focus on economics: lecture notes with R code: http://www.econ.uiuc.edu/~econ472/e-Tutorial.html
A brief guide to R and Economics: http://people.su.se/~ma/R_intro/R_intro.pdf

Graphics: plots, maps, etc.:
tutorial with info on plots: http://cran.r-project.org/doc/contrib/Rossiter-RIntro-ITC.pdf
a graph gallery of R plots and charts with supporting code: http://addictedtor.free.fr/graphiques/
A tutorial for Lattice: http://osiris.sunderland.ac.uk/~cs0her/Statistics/UsingLatticeGraphicsInR.htm
ggplot R graphics: http://had.co.nz/ggplot2/
ggplot vs Lattice: http://had.co.nz/ggplot/vs-lattice.html
Multiple tutorials for using ggplot2 and Lattice: http://learnr.wordpress.com/tag/ggplot2/
Google Charts with R: http://www.iq.harvard.edu/blog/sss/archives/2008/04/google_charts_f_1.shtml
Introduction to using RgoogleMaps: http://cran.r-project.org/web/packages/RgoogleMaps/vignettes/RgoogleMaps-intro.pdf
Thematic maps with R: https://stackoverflow.com/questions/1260965/developing-geographic-thematic-maps-with-r
geographic maps in R: http://smartdatacollective.com/Home/22052

GUIs:
Poor Man's GUI for R: http://wiener.math.csi.cuny.edu/pmg/
R Commander is a robust GUI for R: http://socserv.mcmaster.ca/jfox/Misc/Rcmdr/installation-notes.html
JGR is a Java-based GUI for R: http://jgr.markushelbig.org/Screenshots.html

Time series & finance:
a good beginner's tutorial for time series: http://www.stat.pitt.edu/stoffer/tsa2/index.html
Interesting time series packages in R: http://robjhyndman.com/software
advanced time series in R: http://www.wise.xmu.edu.cn/2007summerworkshop/download/Advanced%20Topics%20in%20Time%20Series%20Econometrics%20Using%20R1_ZongwuCAI.pdf
a great analysis and visualization framework for quantitative trading: http://www.quantmod.com/
Guide to Credit Scoring using R: http://cran.r-project.org/doc/contrib/Sharma-CreditScoring.pdf
an open-source framework for financial analysis: http://www.rmetrics.org/

Data / text mining:
A data mining tool in R: http://rattle.togaware.com/
An online e-book for Data Mining with R: http://www.liaad.up.pt/~ltorgo/DataMiningWithR/
Introduction to the Text Mining package in R: http://cran.r-project.org/web/packages/tm/vignettes/tm.pdf

Other statistical techniques:
Quick-R: http://www.statmethods.net/
annotated guides for a variety of models: http://www.ats.ucla.edu/stat/r/dae/default.htm
Social network analysis: http://www.r-project.org/conferences/useR-2008/slides/Bojanowski.pdf

Editors:
Komodo Edit R editor: http://www.sciviews.org/SciViews-K/index.html
Tinn-R makes for a good R editor: http://www.sciviews.org/Tinn-R/
An Eclipse plugin for R: http://www.walware.de/goto/statet
Instructions to install StatET in Eclipse: http://www.splusbook.com/Rintro/R_Eclipse_StatET.pdf
RStudio: http://rstudio.org/
Emacs Speaks Statistics, a statistical language package for Emacs: http://ess.r-project.org/

Interfacing with other languages / software:
to embed R data frames in Excel via multiple approaches: http://learnr.wordpress.com/2009/10/06/export-data-frames-to-multi-worksheet-excel-file/
a tool to make R usable from Excel: http://www.statconn.com/
Connect to MySQL from R: http://erikvold.com/blog/index.cfm/2008/8/20/how-to-connect-to-mysql-with-r-in-wndows-using-rmysql
info about pulling data from SAS, STATA, SPSS, etc.: http://www.statmethods.net/input/importingdata.html
LaTeX (Sweave): http://www.stat.uni-muenchen.de/~leisch/Sweave/
R2HTML: http://www.feferraz.net/en/P/R2HTML

Blogs, newsletters, etc.:
A very informative blog: http://blog.revolutionanalytics.com/
A blog aggregator for posts about R: http://www.r-bloggers.com/
R mailing lists: http://www.r-project.org/mail.html
R newsletter (old): http://cran.r-project.org/doc/Rnews/
R journal (current): http://journal.r-project.org/

Other / uncategorized (as of yet):
Web scraping in R: http://www.programmingr.com/content/webscraping-using-readlines-and-rcurl
a very interesting list of packages that is seriously worth a look: http://www.omegahat.org/
Commercial versions of R: http://www.revolutionanalytics.com/
Red R for R tasks: http://code.google.com/p/r-orange/
KNIME for R (worth a serious look): http://www.knime.org/introduction/screenshots
R tutorial for Titanic: https://statsguys.wordpress.com/
2,780
Free resources for learning R
If I had to choose one thing, make sure that you read "The R Inferno". There are many good resources on the R homepage, but in particular, read "An Introduction to R" and "The R Language Definition".
2,781
Free resources for learning R
Quick-R can be a good place to start. For something a little more data-mining oriented, see R and Data Mining: Examples and Case Studies and the R Reference Card for Data Mining.
2,782
Free resources for learning R
If you like learning through videos, I collated a list of R training videos. I also prepared a general post on learning R with suggestions on books, online manuals, blogs, videos, user interfaces, and more.
2,783
Free resources for learning R
Try IPSUR, Introduction to Probability and Statistics Using R. It's a free book, free in the GNU sense of the word: http://ipsur.r-forge.r-project.org/book/index.php It's definitely open source - on the download page you can download the LaTeX or LyX source used to generate the book.
2,784
Free resources for learning R
The official guides are pretty nice; check out http://cran.r-project.org/manuals.html . There is also a lot of contributed documentation there.
2,785
Free resources for learning R
If you're an economist/econometrician then Grant Farnsworth's paper on using R is indispensable and is available on CRAN at: http://cran.r-project.org/doc/contrib/Farnsworth-EconometricsInR.pdf
2,786
Free resources for learning R
If you have experience in other languages, these "R Rosetta Stone" videos may be useful: Python, MATLAB, and SQL. These are all included in the video list added by Jeromy, so a big +1 for his list.
2,787
Free resources for learning R
One resource is 'Some hints for the R beginner' at http://www.burns-stat.com/pages/Tutor/hints_R_begin.html
2,788
Free resources for learning R
I have written a document that is freely available at my website and on CRAN. See the linked page: icebreakeR The datasets that are used in the document are also linked from that page. Feedback is welcome and appreciated! Andrew
2,789
Free resources for learning R
After you learn the basics, I find the following very useful: R-bloggers, and subscribing to the Stack Overflow R tag.
2,790
Free resources for learning R
A large number of short videos that cover a lot of useful tasks with R (91 videos as of March 2013): http://www.twotorials.com/ Here's a nice new interactive online tutorial on the basics of R: http://tryr.codeschool.com/
2,791
Free resources for learning R
The R project website has lots of manuals to get you started, and I also suggest the Nabble R forum and the R-bloggers site.
2,792
Free resources for learning R
If you already know another programming language, these notes may help point out some of the ways R might surprise you.
2,793
Free resources for learning R
I liked these lectures: Statistical Aspects of Data Mining. The lecturer is solving example problems using R.
2,794
Free resources for learning R
If you are coming from a SAS or SPSS background, check out: http://sites.google.com/site/r4statistics/ This is the companion site to the book, R for SAS and SPSS Users by Robert Muenchen and a free version of the book can be found here.
2,795
Free resources for learning R
One more: R-bloggers has many posts with tutorial materials: http://www.r-bloggers.com/?s=tutorial
2,796
Free resources for learning R
There are some very good learning materials here: http://scc.stat.ucla.edu/mini-courses/materials-from-past-mini-courses/spring-2009-mini-course-materials/
2,797
Free resources for learning R
Look for R Users Groups in your area; they are growing around the world: http://blog.revolutionanalytics.com/local-r-groups.html If you don't have one, then help get one started; I'm sure you will be able to find like-minded folks. As for helpful links, the Dallas R Users Group has a nice list: http://www.meetup.com/Dallas-R-Users-Group/pages/R_Helpful_Links/
2,798
Free resources for learning R
http://www.datamind.org offers interactive R tutorials, currently aimed at real beginners.
2,799
Free resources for learning R
If you'd like a beginner's tutorial to R in the context of econometrics, this may be a good starting point as well: http://www.quandl.com/learn/working-with-quandl-and-r
2,800
Performance metrics to evaluate unsupervised learning
In some sense I think this question is unanswerable. I say this because how well a particular unsupervised method performs will largely depend on why one is doing unsupervised learning in the first place, i.e., does the method perform well in the context of your end goal? Obviously this isn't completely true; people work on these problems and publish results that include some sort of evaluation. I'll outline a few of the approaches I'm familiar with below.

A good resource (with references) for clustering is sklearn's documentation page, Clustering Performance Evaluation. This covers several methods, but all but one of them, the Silhouette Coefficient, assume that ground-truth labels are available. That method is also mentioned in the question Evaluation measure of clustering, linked in the comments for this question.

If your unsupervised learning method is probabilistic, another option is to evaluate some probability measure (log-likelihood, perplexity, etc.) on held-out data. The motivation here is that if your unsupervised learning method assigns high probability to similar data that wasn't used to fit parameters, then it has probably done a good job of capturing the distribution of interest. A domain where this type of evaluation is commonly used is language modeling.

The last option I'll mention is using a supervised learner on a related auxiliary task. If your unsupervised method produces latent variables, you can think of these latent variables as a representation of the input. Thus, it is sensible to use these latent variables as input for a supervised classifier performing some task related to the domain the data is from. The performance of the supervised method can then serve as a surrogate for the performance of the unsupervised learner. This is essentially the setup you see in most work on representation learning.

This description is probably a little nebulous, so I'll give a concrete example. Nearly all of the work on word representation learning uses the following approach for evaluation: learn representations of words using an unsupervised learner; use the learned representations as input for a supervised learner performing some NLP task like part-of-speech tagging or named entity recognition; and assess the performance of the unsupervised learner by its ability to improve the performance of the supervised learner compared to a baseline using a standard representation, like binary word-presence features, as input. For an example of this approach in action see the paper Training Restricted Boltzmann Machines on Word Observations by Dahl et al.
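For the label-free clustering case mentioned above, here is a minimal scikit-learn sketch of the Silhouette Coefficient used to compare cluster counts (the synthetic dataset and the candidate values of k are purely illustrative):

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)   # stand-in for unlabeled data
for k in (2, 3, 4, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, round(silhouette_score(X, labels), 3))           # higher is better; no ground truth needed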