3,501
Why do neural networks need so many training examples to perform?
We don't learn to "see cars" until we learn to see. It takes quite a long time and lots of examples for a child to learn how to see objects as such. After that, a child can learn to identify a particular type of object from just a few examples. If you compare a two-year-old child with a learning system that literally starts from a blank slate, it's an apples-and-oranges comparison; at that age the child has already seen thousands of hours of "video footage". In a similar manner, it takes artificial neural networks a lot of examples to learn "how to see", but after that it's possible to transfer that knowledge to new examples. Transfer learning is a whole domain of machine learning, and things like "one-shot learning" are possible: you can build ANNs that will learn to identify new types of objects they haven't seen before from a single example, or to identify a particular person from a single photo of their face. But doing this initial "learning to see" part well requires quite a lot of data. Furthermore, there's some evidence that not all training data is equal, namely, that data which you "choose" while learning is more effective than data that's simply provided to you. See, e.g., the Held & Hein twin-kitten experiment: https://www.lri.fr/~mbl/ENS/FONDIHM/2013/papers/about-HeldHein63.pdf
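To make the transfer-learning point concrete, here is a minimal sketch, assuming TensorFlow/Keras and a placeholder few-example dataset (the dataset name and class are illustrative, not part of the answer): reuse a network that has already "learned to see" on ImageNet and train only a small new head on a handful of labeled examples.

    # Minimal transfer-learning sketch (assumes TensorFlow/Keras; dataset is a placeholder)
    import tensorflow as tf

    base = tf.keras.applications.MobileNetV2(
        input_shape=(160, 160, 3), include_top=False, weights="imagenet")
    base.trainable = False  # keep the pretrained "learning to see" layers frozen

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # new binary class, e.g. car / not car
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(tiny_labeled_dataset, epochs=5)  # a handful of examples per class often suffices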
3,502
Why do neural networks need so many training examples to perform?
I would argue the performance is not as different as you might expect, but you ask a great question (see the last paragraph). As you mention transfer learning: to compare apples with apples we have to look at how many pictures in total, and how many pictures of the class of interest, a human / neural net "sees". 1. How many pictures does a human look at? A human eye movement takes around 200 ms, which could be seen as a kind of "biological photo". See the talk by computer vision expert Fei-Fei Li: https://www.ted.com/talks/fei_fei_li_how_we_re_teaching_computers_to_understand_pictures#t-362785. She adds: so by age 3 a child would have seen hundreds of millions of pictures. In ImageNet, the leading database for object detection, there are ~14 million labeled pictures. So a neural network being trained on ImageNet would have seen as many pictures as a 14000000/5/60/60/24*2 ≈ 64-day-old baby, so about two months old (assuming the baby is awake half of her life). To be fair, it's hard to tell how many of those pictures are labeled. Moreover, the pictures a baby sees are not as diverse as those in ImageNet. (Probably the baby sees her mother half of the time... ;) However, I think it's fair to say that your son will have seen hundreds of millions of pictures (and then applies transfer learning). So how many pictures do we need to learn a new category, given a solid base of related pictures that can be (transfer) learned from? The first blog post I found was this: https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html. They use 1000 examples per class. I could imagine that 2.5 years later even far less is required. However, 1000 pictures can be seen by a human in 1000/5/60 ≈ 3.3 minutes. You wrote: A human child at age 2 needs around 5 instances of a car to be able to identify it with reasonable accuracy regardless of color, make, etc. That would be equivalent to forty seconds per instance (with various angles of that object to make it comparable). To sum up: as I mentioned, I had to make a few assumptions. But I think one can see that the performance is not as different as one might expect. However, I believe you ask a great question, and here is why: 2. Would neural networks perform better/differently if they worked more like brains? (Geoffrey Hinton says yes.) In an interview in late 2018, https://www.wired.com/story/googles-ai-guru-computers-think-more-like-brains/, he compares the current implementations of neural networks with the brain. He mentions that, in terms of weights, artificial neural networks are smaller than the brain by a factor of 10,000. Therefore, the brain needs far fewer training iterations to learn. In order to enable artificial neural networks to work more like our brains, he points to another trend in hardware, a UK-based startup called Graphcore. It reduces the calculation time by a smart way of storing the weights of a neural network. Therefore, more weights can be used and the training time of the artificial neural networks might get reduced.
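The back-of-the-envelope arithmetic above is easy to reproduce; a small sketch using only the figures quoted in the answer (5 "biological photos" per second, a baby awake half of the time):

    # Reproducing the answer's estimates
    imagenet_pictures = 14_000_000
    seconds_of_looking = imagenet_pictures / 5
    equivalent_baby_age_days = seconds_of_looking / 60 / 60 / 24 * 2
    print(round(equivalent_baby_age_days))   # ~= 65 days, i.e. roughly two months

    pictures_for_new_class = 1000
    print(pictures_for_new_class / 5 / 60)   # ~= 3.3 minutes of looking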
3,503
Why do neural networks need so many training examples to perform?
I am an expert in this: I am human, I was a baby, I have a car, and I do AI. The reason why babies pick up cars from far fewer examples is intuition. The human brain already has structures to deal with 3D rotations. Also, there are two eyes which provide parallax for depth mapping, which really helps. You can intuitively tell a car from a picture of a car, because there is no actual depth to the picture. Hinton (AI researcher) has proposed the idea of Capsule Networks, which would be able to handle things more intuitively. Unfortunately for computers, the training data is (usually) 2D images, arrays of flat pixels. In order not to over-fit, much data is required so that the orientation of the cars in the images is generalized over. The baby brain can do this already and can recognize a car at any orientation.
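Since the training data is flat 2D pixels, the usual (if crude) workaround in practice is to synthesize the missing orientations with data augmentation; a minimal sketch, assuming a recent TensorFlow/Keras and a placeholder image_batch (standard practice, not something prescribed by the answer above):

    # Augment flat 2D training images so the network at least sees many poses of the same object
    import tensorflow as tf

    augment = tf.keras.Sequential([
        tf.keras.layers.RandomFlip("horizontal"),
        tf.keras.layers.RandomRotation(0.15),  # rotate by up to ~15% of a full turn
        tf.keras.layers.RandomZoom(0.1),
    ])
    # augmented_batch = augment(image_batch, training=True)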
3,504
Find expected value using CDF
Edited for the comment from probabilityislogic. Note that $F(1)=0$ in this case, so the distribution has probability $0$ of being less than $1$, so $x \ge 1$, and you will also need $\alpha > 0$ for an increasing CDF. If you have the CDF then you want its derivative (the reverse of integrating), which for a continuous distribution like this is $$f(x) = \frac{dF(x)}{dx}$$ and in reverse $F(x) = \int_{1}^x f(t)\,dt$ for $x \ge 1$. Then to find the expectation you need to find $$E[X] = \int_{1}^{\infty} x f(x)\,dx$$ provided that this exists. I will leave the calculus to you.
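(For checking your work afterwards: under the assumptions above, $x \ge 1$ and $\alpha > 1$, the calculus works out to $$f(x) = \alpha x^{-\alpha-1}, \qquad E[X] = \int_1^\infty \alpha x^{-\alpha}\,dx = \frac{\alpha}{\alpha-1},$$ with the integral divergent for $\alpha \le 1$.)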
3,505
Find expected value using CDF
Usage of the density function is not necessary Integrate 1 minus the CDF When you have a random variable $X$ that has a support that is non-negative (that is, the variable has nonzero density/probability for only positive values), you can use the following property: $$ E(X) = \int_0^\infty \left( 1 - F_X(x) \right) \,\mathrm{d}x $$ A similar property applies in the case of a discrete random variable. Proof Since $1 - F_X(x) = P(X\geq x) = \int_x^\infty f_X(t) \,\mathrm{d}t$, $$ \int_0^\infty \left( 1 - F_X(x) \right) \,\mathrm{d}x = \int_0^\infty P(X\geq x) \,\mathrm{d}x = \int_0^\infty \int_x^\infty f_X(t) \,\mathrm{d}t \mathrm{d}x $$ Then change the order of integration: $$ = \int_0^\infty \int_0^t f_X(t) \,\mathrm{d}x \mathrm{d}t = \int_0^\infty \left[xf_X(t)\right]_0^t \,\mathrm{d}t = \int_0^\infty t f_X(t) \,\mathrm{d}t $$ Recognizing that $t$ is a dummy variable, or taking the simple substitution $t=x$ and $\mathrm{d}t = \mathrm{d}x$, $$ = \int_0^\infty x f_X(x) \,\mathrm{d}x = \mathrm{E}(X) $$ Attribution I used the Formulas for special cases section of the Expected value article on Wikipedia to refresh my memory on the proof. That section also contains proofs for the discrete random variable case and also for the case that no density function exists.
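A quick numerical illustration of the identity (added here; assumes SciPy, and uses the exponential distribution simply as a convenient non-negative example):

    # Check E[X] = integral of (1 - F(x)) over [0, inf) for Exponential(1), whose mean is 1
    import numpy as np
    from scipy import integrate, stats

    dist = stats.expon()
    survival_integral, _ = integrate.quad(dist.sf, 0, np.inf)  # sf(x) = 1 - F(x)
    print(survival_integral, dist.mean())                      # both ~= 1.0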
3,506
Find expected value using CDF
The result extends to the $k$th moment of $X$ as well. Here is a graphical representation: [figure not shown]
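Stated in symbols (one standard form of that extension, added here for completeness): for a non-negative random variable $X$ and $k \ge 1$, $$E[X^k] = \int_0^\infty k\,x^{k-1}\left(1-F_X(x)\right)\,\mathrm{d}x,$$ which reduces to the identity above when $k = 1$.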
3,507
Find expected value using CDF
I think you actually mean $x\geq 1$, otherwise the CDF is vacuous, as $F(1)=1-1^{-\alpha}=1-1=0$. What you "know" about CDFs is that they eventually approach zero as the argument $x$ decreases without bound and eventually approach one as $x \to \infty$. They are also non-decreasing, so this means $0\leq F(y)\leq F(x)\leq 1$ for all $y\leq x$. So if we plug in the CDF we get: $$0\leq 1-x^{-\alpha}\leq 1\implies 1\geq \frac{1}{x^{\alpha}}\geq 0\implies x^{\alpha}\geq 1 > 0\implies x\geq 1 \>.$$ From this we conclude that the support for $x$ is $x\geq 1$. Now we also require $\lim_{x\to\infty} F(x)=1$, which implies that $\alpha>0$. To work out for which values of $\alpha$ the expectation exists, we require: $$\newcommand{\rd}{\mathrm{d}}E(X)=\int_{1}^{\infty}x\frac{\rd F(x)}{\rd x}\rd x=\alpha\int_{1}^{\infty}x^{-\alpha} \rd x$$ And this last expression shows that for $E(X)$ to exist, we must have $-\alpha<-1$, which in turn implies $\alpha>1$. This can easily be extended to determine the values of $\alpha$ for which the $r$'th raw moment $E(X^{r})$ exists.
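(Spelling that last extension out, following the same steps: $E(X^{r})=\alpha\int_{1}^{\infty}x^{r-\alpha-1}\,\mathrm{d}x$, which is finite exactly when $\alpha>r$, in which case it equals $\alpha/(\alpha-r)$.)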
3,508
Find expected value using CDF
The answer requiring a change of the order of integration is unnecessarily ugly. Here's a more elegant two-line proof via integration by parts. $\int u\,dv = uv - \int v\,du$. Now take $du = dx$ (so $u = x$) and $v = 1- F(x)$ (so $dv = -f(x)\,dx$): $\int_{0}^{\infty} [ 1- F(x)] dx = [x(1-F(x)) ]_{0}^{\infty} + \int_{0}^{\infty} x f(x)dx$ $= 0 + \int_{0}^{\infty} x f(x)dx$ $= \mathbb{E}[X] \qquad \blacksquare$ (The boundary term vanishes because $x\,(1-F(x)) \le \int_x^\infty t f(t)\,dt \to 0$ as $x \to \infty$ whenever $\mathbb{E}[X]$ is finite.)
3,509
Find expected value using CDF
In case a conditional expectation using only the CDF is needed, we can formulate two cases: $\mathbb{E}\left(x|x\geq y\right)=y+\frac{\int_{y}^{\infty}\left(1-F(x)\right)dx}{\left(1-F(y)\right)}$ $\mathbb{E}\left(x|x\leq y\right)=y-\frac{\int_{-\infty}^{y}F(x)dx}{F(y)}$ The derivation builds on the previous post: we first write the following integral, $\int_{y}^{\infty} [ 1- F(x)] dx = [x(1-F(x)) ]_{y}^{\infty} + \int_{y}^{\infty} x f(x)dx$ and note that $\int_{y}^{\infty} x f(x)dx=\mathbb{E}\left(x|x\geq y\right)(1-F(y))$. Then, using these two facts and some algebra, we arrive at the first result. The second result can be obtained in the same way.
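A quick numerical check of the first formula (added here; assumes SciPy; the standard exponential with $y=1$ is convenient because memorylessness gives $\mathbb{E}\left(x|x\geq 1\right)=1+\mathbb{E}(x)=2$):

    # Verify E[x | x >= y] = y + integral_y^inf (1 - F(x)) dx / (1 - F(y)) for Exponential(1), y = 1
    import numpy as np
    from scipy import integrate, stats

    dist, y = stats.expon(), 1.0
    tail_integral, _ = integrate.quad(dist.sf, y, np.inf)  # sf(x) = 1 - F(x)
    print(y + tail_integral / dist.sf(y))                  # ~= 2.0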
3,510
Is the R language reliable for the field of economics?
Let me share a contrasting viewpoint. I'm an economist. I was trained in econometrics using SAS. I work in financial services and just tonight I updated R-based models which we will use tomorrow to put millions of dollars at risk. Your professor is just plain wrong. But the mistake he's making is VERY common and is worth discussing. What your professor seems to be doing is commingling the R software itself (the GNU implementation of the S language) with packages (or other code) implemented in R. I can write crap implementations of a linear regression using SAS IML. As a matter of fact, I've done that very thing. Does that mean SAS is crap? Of course not. SAS is crap because their pricing is non-transparent, ridiculously expensive, and their in-house consultants over-promise, under-deliver, and charge a premium for the pleasure. But I digress... The openness of R is a double-edged sword: openness allows any Tom, Dick, or Harry to write a crap implementation of any algorithm they think up while smoking pot in the basement of the economics building. The same openness allows practicing economists to share code openly and improve on each other's code. The licensing rules of R mean that I can write parallelization code for running R on Amazon's cloud and not have to worry about licensing fees for a 30-node cluster. This is a HUGE win for simulation-based analysis, which is a big part of what I do. Your professor's comment that "many packages are built by people who know a lot about programming, but not much about economics" is, no doubt, correct. But there are 3716 packages on CRAN. You can be damn sure many of them were not written by economists, in the same way that you can be sure many of the 105,089 modules in CPAN were not written by economists. Choose your software carefully. Make sure you understand and have tested the tools you're using. Also make sure you understand the true economics behind whichever implementation you choose. Getting locked into a closed software solution is more costly than just the licensing fees.
3,511
Is the R language reliable for the field of economics?
It is not more or less reliable than other software. Base and recommended R is probably less prone to errors than contributed packages might be, but it depends on the authors. But R's biggest advantage is that you can check yourself whether it is! It is free software, unlike Stata or SPSS or similar. Hence even if it were unreliable, that would be detected eventually. That may not be the case for proprietary software. And you can even help make it more reliable. For the rest of your professor's comments, he's clearly wrong and a person spreading FUD. But allow me to say that unreliable software should be the least of an economist's concerns, judging by the models and assumptions used and predictions made in this field. Stick with R if you like it, and maybe you and the professor can even contribute to developing good software for economics. Here are possibly interesting starting points: http://cran.r-project.org/web/views/Econometrics.html and http://cran.r-project.org/web/views/TimeSeries.html
3,512
Is the R language reliable for the field of economics?
Your professor makes some bold claims. I suspect that the problem was unfamiliarity with the R language, not the actual results produced. I work in a company which does a lot of econometric modeling and we do everything in R. I also converted my economist colleague into using R. Concerning the field of economics, in my personal experience the reliability issue might go the other way around. For example, EViews version 5 had some strange bugs when working with panel data. And it reported the usual Durbin-Watson statistic for pooled OLS, which in a panel-data setting is plain wrong. The R package for working with panel data has its issues too, but the money argument here plays strongly in R's favor. Recently I was in a course on non-stationary panel time-series methods. The lecturer used RATS software. When demonstrating some code, he advised clicking on some icon which cleans the workspace several times, just in case. Talk about reliability.
3,513
Is the R language reliable for the field of economics?
I am an economist and I have been working in research for 4 years now, mostly doing applied econometrics. There are plenty of econometrics packages out there, and there is room for all of them. In my view, in economics, Stata is used for almost everything but time series; RATS, EViews and Ox are used for time series; and Matlab and Gauss are used for more low-level programming. The advantage of R is that it is capable of doing almost everything the other programs do, and it's free and open. It requires some more programming and has fewer canned procedures, but it gets things done in the end. I use Stata most of the time, but if I had to choose one software to do everything, I would choose R. R is pretty reliable on most econometrics problems, but I can provide examples of some routines written for R that are not reliable. I have had problems with 3SLS and demand-system estimation routines. Numerical optimization routines are not as robust as in Stata or Gauss. On the other hand, R is much better at problems like quantile regression. Still, with a good working knowledge of R, you can find out what the problem is in R's user-written routines, fix it, and continue working. So I don't think the lack of reliability in some specific routines is a compelling reason not to use R at all. My advice would be to continue using R but to gain experience with another program that is widely used in your field, e.g. Stata for microeconometrics or RATS for time series.
3,514
Is the R language reliable for the field of economics?
When I was teaching graduate-level statistics, I would tell my students: "I don't care what package you use, and you can use anything for your homework, as I expect you to provide substantive explanations, and will take points off if I see tr23y5m variable names in your submissions. I can support your learning very well in Stata, and reasonably well in R. With SAS, you are on your own, as you claim you have taken a course in it. With SPSS or Minitab, God bless you." I imagine that reasonable employers would think the same. What matters is your productivity in terms of the project outcomes. If you can achieve the goal in R with 40 hours of work, fine; if you can achieve it in C++ in 40 hours of work, fine; if you know how to do this in R in 40 hours, but your supervisor wants you to do this in SAS, and you have to spend 60 hours just to learn some basics and where the semicolons go, that can only be wise in the context of the larger picture of the rest of the code being in SAS... and then the manager was not very wise in having hired an R programmer. From this perspective of total cost, "free" R is a hugely overblown myth. Any serious project requires custom code, if just for the data input and formatting the output, and that's a non-zero cost of professional time. If this data input and formatting requires 10 hours of SAS code and 20 hours of R code, R is the more expensive software at the margin, as an economist would say, i.e., in terms of the additional cost to produce a given piece of functionality. If a big project requires 200 hours of an R programmer's time and 100 hours of a Stata programmer's time to provide identical functionality, Stata is cheaper overall, even accounting for the ~$1K license that you need to buy. It would be interesting to see such direct comparisons; I was involved in re-writing a huge mess of 2Mb of SPSS code, said to have been accumulated over about 10 person-years, into ~150K of Stata code that ran about as fast, maybe a tad faster; that was about a one-person-year project. I don't know if this 10:1 efficiency ratio is typical for SPSS:Stata comparisons, but I won't be surprised if it were. For me, working with R is always a large expense because of the search costs: I have to determine which of the five packages with similar names does what I need to do, and gauge whether it does it reliably enough for me to use it in my work. It often means that it is cheaper for me to write my own Stata code in less time than I would spend figuring out how to make R work on a given task. It should be understood that this is my personal idiosyncrasy; most people on this site are better useRs than I am. Funny that your prof would prefer Stata or GAUSS over R because "R was not written by economists". Neither were Stata or GAUSS; they were written by computer scientists using computer scientists' tools. If your prof gets ideas about programming from CodeAcademy.com, that's better than nothing, but professional-grade software development is as different from typing in a CodeAcademy.com text box as driving a freight truck is different from riding a bike. (Stata was started by a labor econometrician turned computer scientist, though he has not been doing the labor econometrics thing for about 25 years by now.) Update: As AndyW commented below, you can write terrible code in any language. The question of cost then becomes which language is easier to debug.
To me this looks like a combination of how accurate and informative the output is, and how easy and transparent the syntax itself is, and I don't have a good answer for that, of course. For instance, Python enforces code indenting, which is a good idea. Stata and R code can be folded over the brackets, and that's not going to work with SAS. Use of subroutines is a two-edged sword: the use of *apply() with ad-hoc functions in R is obviously very efficient, but harder to debug. By a similar token, Stata locals can mask nearly anything, and defaulting to an empty string, while useful, may also lead to difficult-to-catch errors.
3,515
Is the R language reliable for the field of economics?
I'd be very careful of anyone who claims a fact but never backs it up with anything substantial. You can easily turn his arguments around. For example, people getting paid to write code could have LESS incentive to get it right, because there is an expectation that their code will be correct, whereas the typical basement dweller wants to make a commit that will impress the project leaders. Maybe he couldn't care less about how much extra time he spends doing it for free if it means quality work gets done. If the random number generator is 'messy' (which is a vague term, easily standing in for a real fact to back up his argument), then he should be able to prove it or show you someone who can. If he gets incoherent results from a package, he should be able to point out the steps he took to get that result. If it's really a bug and you have good programming skills, you can even try to fix it for him! I realize my answer doesn't answer your question directly (sorry). Simply from the way he words his points, you can see there is no meat behind them. If there is, feel free to edit it into your question for people here to discuss further!
3,516
Is the R language reliable for the field of economics?
In the ReplicationWiki (which I work on) you can see that R was one of the software packages used most often for some 2,000 empirical studies published in well-established journals already in the years 2000-2013. It seems that it was used more often in more recent years. Stata was used by far the most often (>900 times), followed by MATLAB (280), SAS (60), GAUSS (60), Excel (50), R (30), FORTRAN (30), Mathematica (19), EViews (18), z-Tree (16), dynare (15), RATS (12), C (8), C++ (6), Python (5, more recent studies), SPSS (5) and some others. Often more than one package is used.
3,517
Is the R language reliable for the field of economics?
I have been using R for half a decade and also use SAS, SPSS, Calc, WEKA and a couple of other tools. I have never enjoyed any tool as much as R. Basically, R is for those who think independently and learn by trying things on their own. When it comes to statistics, it is all about methods. Users might not know how methods were defined and modeled in commercial software, and those methods might be correct or wrong. R is for those who would like to define methods and use the methods that fit their needs. It is all about freedom. This freedom is not there with commercial software, despite the money spent buying it. Knowledge is the property of the community (society); nobody can claim sole authorship of it. Research is all about finding solutions to problems. As far as R is concerned, one need not worry about methods, because users are free to define and revamp them. For instance, if there is a model-specific problem or an erratically defined method, it can be addressed by fixing the existing code or developing new code. By doing so a researcher not only develops knowledge but also evolves. The advantage of R is that one need not be a computer programmer. Statistical methods are, to start with, all about writing functions with just control statements and loops (the higher-level things come later). R has a very easy programming environment for newbies.
3,518
A generalization of the Law of Iterated Expectations
INFORMAL TREATMENT We should remember that the notation where we condition on random variables is inaccurate, although economical, as notation. In reality we condition on the sigma-algebra that these random variables generate. In other words $E[Y\mid X]$ is meant to mean $E[Y\mid \sigma(X)]$. This remark may seem out of place in an "Informal Treatment", but it reminds us that our conditioning entities are collections of sets (and when we condition on a single value, then this is a singleton set). And what do these sets contain? They contain the information with which the possible values of the random variable $X$ supply us about what may happen with the realization of $Y$. Bringing in the concept of Information, permits us to think about (and use) the Law of Iterated Expectations (sometimes called the "Tower Property") in a very intuitive way: The sigma-algebra generated by two random variables, is at least as large as that generated by one random variable: $\sigma (X) \subseteq \sigma(X,Z)$ in the proper set-theoretic meaning. So the information about $Y$ contained in $\sigma(X,Z)$ is at least as great as the corresponding information in $\sigma (X)$. Now, as notational innuendo, set $\sigma (X) \equiv I_x$ and $\sigma(X,Z) \equiv I_{xz}$. Then the LHS of the equation we are looking at, can be written $$E \left[ E \left(Y|I_{xz} \right) |I_{x} \right]$$ Describing verbally the above expression we have : "what is the expectation of {the expected value of $Y$ given Information $I_{xz}$} given that we have available information $I_x$ only?" Can we somehow "take into account" $I_{xz}$? No - we only know $I_x$. But if we use what we have (as we are obliged by the expression we want to resolve), then we are essentially saying things about $Y$ under the expectations operator, i.e. we say "$E(Y\mid I_x)$", no more -we have just exhausted our information. Hence $$E \left[ E \left(Y|I_{xz} \right) |I_{x} \right] = E\left(Y|I_{x} \right)$$ If somebody else doesn't, I will return for the formal treatment. A (bit more) FORMAL TREATMENT Let's see how two very important books of probability theory, P. Billingsley's Probability and Measure (3d ed.-1995) and D. Williams "Probability with Martingales" (1991), treat the matter of proving the "Law Of Iterated Expectations": Billingsley devotes exactly three lines to the proof. Williams, and I quote, says "(the Tower Property) is virtually immediate from the definition of conditional expectation". That's one line of text. Billingsley's proof is not less opaque. They are of course right: this important and very intuitive property of conditional expectation derives essentially directly (and almost immediately) from its definition -the only problem is, I suspect that this definition is not usually taught, or at least not highlighted, outside probability or measure theoretic circles. But in order to show in (almost) three lines that the Law of Iterated Expectations holds, we need the definition of conditional expectation, or rather, its defining property. Let a probability space $(\Omega, \mathcal F, \mathbf P)$, and an integrable random variable $Y$. Let $\mathcal G$ be a sub-$\sigma$-algebra of $\mathcal F$, $\mathcal G \subseteq \mathcal F$. Then there exists a function $W$ that is $\mathcal G$-measurable, is integrable and (this is the defining property) $$E(W\cdot\mathbb 1_{G}) = E(Y\cdot \mathbb 1_{G})\qquad \forall G \in \mathcal G \qquad [1]$$ where $1_{G}$ is the indicator function of the set $G$. 
We say that $W$ is ("a version of") the conditional expectation of $Y$ given $\mathcal G$, and we write $W = E(Y\mid \mathcal G) \;a.s.$ The critical detail to note here is that the conditional expectation, has the same expected value as $Y$ does, not just over the whole $\mathcal G$, but in every subset $G$ of $\mathcal G$. (I will try now to present how the Tower property derives from the definition of conditional expectation). $W$ is a $\mathcal G$-measurable random variable. Consider then some sub-$\sigma$-algebra, say $\mathcal H \subseteq \mathcal G$. Then $G\in \mathcal H \Rightarrow G\in \mathcal G$. So, in an analogous manner as previously, we have the conditional expectation of $W$ given $\mathcal H$, say $U=E(W\mid \mathcal H) \;a.s.$ that is characterized by $$E(U\cdot\mathbb 1_{G}) = E(W\cdot \mathbb 1_{G})\qquad \forall G \in \mathcal H \qquad [2]$$ Since $\mathcal H \subseteq \mathcal G$, equations $[1]$ and $[2]$ give us $$E(U\cdot\mathbb 1_{G}) = E(Y\cdot \mathbb 1_{G})\qquad \forall G \in \mathcal H \qquad [3]$$ But this is the defining property of the conditional expectation of $Y$ given $\mathcal H$. So we are entitled to write $U=E(Y\mid \mathcal H)\; a.s.$ Since we have also by construction $U = E(W\mid \mathcal H) = E\big(E[Y\mid \mathcal G]\mid \mathcal H\big)$, we just proved the Tower property, or the general form of the Law of Iterated Expectations - in eight lines.
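A quick numerical illustration of the Tower Property (a minimal R sketch with made-up discrete variables, in which the conditional expectations are replaced by group-wise sample means; the identity then holds exactly, up to floating point):
set.seed(1)
n <- 1e4
x <- sample(1:3, n, replace = TRUE)
z <- sample(1:2, n, replace = TRUE)
y <- x + 2 * z + rnorm(n)            # Y depends on both X and Z

e_y_xz <- ave(y, x, z)               # E(Y | X, Z): sample mean within each (x, z) cell
lhs    <- ave(e_y_xz, x)             # E[ E(Y | X, Z) | X ]: average those cell means within x
rhs    <- ave(y, x)                  # E(Y | X) computed directly

max(abs(lhs - rhs))                  # 0 (up to floating point)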
A generalization of the Law of Iterated Expectations
INFORMAL TREATMENT We should remember that the notation where we condition on random variables is inaccurate, although economical, as notation. In reality we condition on the sigma-algebra that these
A generalization of the Law of Iterated Expectations INFORMAL TREATMENT We should remember that the notation where we condition on random variables is inaccurate, although economical, as notation. In reality we condition on the sigma-algebra that these random variables generate. In other words $E[Y\mid X]$ is meant to mean $E[Y\mid \sigma(X)]$. This remark may seem out of place in an "Informal Treatment", but it reminds us that our conditioning entities are collections of sets (and when we condition on a single value, then this is a singleton set). And what do these sets contain? They contain the information with which the possible values of the random variable $X$ supply us about what may happen with the realization of $Y$. Bringing in the concept of Information, permits us to think about (and use) the Law of Iterated Expectations (sometimes called the "Tower Property") in a very intuitive way: The sigma-algebra generated by two random variables, is at least as large as that generated by one random variable: $\sigma (X) \subseteq \sigma(X,Z)$ in the proper set-theoretic meaning. So the information about $Y$ contained in $\sigma(X,Z)$ is at least as great as the corresponding information in $\sigma (X)$. Now, as notational innuendo, set $\sigma (X) \equiv I_x$ and $\sigma(X,Z) \equiv I_{xz}$. Then the LHS of the equation we are looking at, can be written $$E \left[ E \left(Y|I_{xz} \right) |I_{x} \right]$$ Describing verbally the above expression we have : "what is the expectation of {the expected value of $Y$ given Information $I_{xz}$} given that we have available information $I_x$ only?" Can we somehow "take into account" $I_{xz}$? No - we only know $I_x$. But if we use what we have (as we are obliged by the expression we want to resolve), then we are essentially saying things about $Y$ under the expectations operator, i.e. we say "$E(Y\mid I_x)$", no more -we have just exhausted our information. Hence $$E \left[ E \left(Y|I_{xz} \right) |I_{x} \right] = E\left(Y|I_{x} \right)$$ If somebody else doesn't, I will return for the formal treatment. A (bit more) FORMAL TREATMENT Let's see how two very important books of probability theory, P. Billingsley's Probability and Measure (3d ed.-1995) and D. Williams "Probability with Martingales" (1991), treat the matter of proving the "Law Of Iterated Expectations": Billingsley devotes exactly three lines to the proof. Williams, and I quote, says "(the Tower Property) is virtually immediate from the definition of conditional expectation". That's one line of text. Billingsley's proof is not less opaque. They are of course right: this important and very intuitive property of conditional expectation derives essentially directly (and almost immediately) from its definition -the only problem is, I suspect that this definition is not usually taught, or at least not highlighted, outside probability or measure theoretic circles. But in order to show in (almost) three lines that the Law of Iterated Expectations holds, we need the definition of conditional expectation, or rather, its defining property. Let a probability space $(\Omega, \mathcal F, \mathbf P)$, and an integrable random variable $Y$. Let $\mathcal G$ be a sub-$\sigma$-algebra of $\mathcal F$, $\mathcal G \subseteq \mathcal F$. Then there exists a function $W$ that is $\mathcal G$-measurable, is integrable and (this is the defining property) $$E(W\cdot\mathbb 1_{G}) = E(Y\cdot \mathbb 1_{G})\qquad \forall G \in \mathcal G \qquad [1]$$ where $1_{G}$ is the indicator function of the set $G$. 
We say that $W$ is ("a version of") the conditional expectation of $Y$ given $\mathcal G$, and we write $W = E(Y\mid \mathcal G) \;a.s.$ The critical detail to note here is that the conditional expectation, has the same expected value as $Y$ does, not just over the whole $\mathcal G$, but in every subset $G$ of $\mathcal G$. (I will try now to present how the Tower property derives from the definition of conditional expectation). $W$ is a $\mathcal G$-measurable random variable. Consider then some sub-$\sigma$-algebra, say $\mathcal H \subseteq \mathcal G$. Then $G\in \mathcal H \Rightarrow G\in \mathcal G$. So, in an analogous manner as previously, we have the conditional expectation of $W$ given $\mathcal H$, say $U=E(W\mid \mathcal H) \;a.s.$ that is characterized by $$E(U\cdot\mathbb 1_{G}) = E(W\cdot \mathbb 1_{G})\qquad \forall G \in \mathcal H \qquad [2]$$ Since $\mathcal H \subseteq \mathcal G$, equations $[1]$ and $[2]$ give us $$E(U\cdot\mathbb 1_{G}) = E(Y\cdot \mathbb 1_{G})\qquad \forall G \in \mathcal H \qquad [3]$$ But this is the defining property of the conditional expectation of $Y$ given $\mathcal H$. So we are entitled to write $U=E(Y\mid \mathcal H)\; a.s.$ Since we have also by construction $U = E(W\mid \mathcal H) = E\big(E[Y\mid \mathcal G]\mid \mathcal H\big)$, we just proved the Tower property, or the general form of the Law of Iterated Expectations - in eight lines.
A generalization of the Law of Iterated Expectations INFORMAL TREATMENT We should remember that the notation where we condition on random variables is inaccurate, although economical, as notation. In reality we condition on the sigma-algebra that these
3,519
A generalization of the Law of Iterated Expectations
The way I understand conditional expectation and teach my students is the following: the conditional expectation $E[Y|\sigma(X)]$ is a picture taken by a camera with resolution $\sigma(X)$. As mentioned by Alecos Papadopoulos, the notation $E[Y|\sigma(X)]$ is more precise than $E[Y|X]$. Continuing the camera analogy, one can think of $Y$ as the original object, e.g., a landscape or scenery. $E[Y|\sigma(X,Z)]$ is a picture taken by a camera with resolution $\sigma(X,Z)$. Expectation is an averaging operator (a "blurring" operator?). The scenery may contain a lot of stuff, but the picture you take using a camera with low resolution will certainly make some detail go away; e.g., there may be a UFO in the sky that can be seen by your naked eye but that does not appear in the picture taken by your phone (an iPhone 3?). If the resolution is so high that $\sigma(X,Z)=\sigma(Y)$, then this picture is able to capture every detail of the real scenery. In this case, we have $E[Y|\sigma(Y)]=Y$. Now, $E[E[Y|\sigma(X,Z)]|\sigma(X)]$ can be viewed as follows: use another camera with resolution $\sigma(X)$ (e.g., an iPhone 1), which is lower than $\sigma(X,Z)$ (e.g., an iPhone 3), and take a picture of the picture generated by the camera with resolution $\sigma(X,Z)$; then it should be clear that this picture of a picture is the same as if you had originally just used the low-resolution camera $\sigma(X)$ on the scenery. This provides intuition for $E[E[Y|X,Z]|X]=E[Y|X]$. In fact the same intuition tells us that $E[E[Y|X]|X,Z]=E[Y|X]$ still holds. This is because: if your first picture is taken by the iPhone 1 (i.e., low resolution), and you now use a better camera (e.g., the iPhone 3) to take another photo of the first photo, there is no way you can improve the quality of the first photo.

A generalization of the Law of Iterated Expectations
The way I understand conditional expectation and teach my students is the following: conditional expectation $E[Y|\sigma(X)]$ is a picture taken by a camera with resolution $\sigma(X)$ As mentioned b
A generalization of the Law of Iterated Expectations The way I understand conditional expectation and teach my students is the following: conditional expectation $E[Y|\sigma(X)]$ is a picture taken by a camera with resolution $\sigma(X)$ As mentioned by Alecos Papadopoulos, the notation $E[Y|\sigma(X)]$ is more precise than $E[Y|X]$. Along the line of camera, one can think of $Y$ as the original object, e.g., a landscape, scenery. $E[Y|\sigma(X,Z)]$ is a picture taken by a camera with resolution $\sigma(X,Z)$. Expectation is an averaging operator ("blurring" operator?). The scenary may contain a lot of stuff, but the picture you took using a camera with low resolution will certainly make some detail go away, e.g., there may be an UFO in the sky that can be seen by your naked eye but it does not appear in your picture taken by (iphone 3?) If the resolution is so high such that $\sigma(X,Z)=\sigma(Y)$, then this picture is able to capture every detail of the real scenery. In this case, we have $E[Y|\sigma(Y)]=Y$. Now, $E[E[Y|\sigma(X,Z)]|\sigma(X)]$ can be viewed as: using another camera with resolution $\sigma(X)$ (e.g., iphone 1) which is lower than $\sigma(X,Z)$ (e.g., iphone 3) and take a picture on that picture generated by camera with resolution $\sigma(X,Z)$, then it should be clear that this picture on a picture should be the same as if you originally just use a camera with low resolution $\sigma(X)$ on the scenery. This provides intuition on $E[E[Y|X,Z]|X]=E[Y|X]$. In fact this same intuition tells us that $E[E[Y|X]|X,Z]=E[Y|X]$ still. This is because: if your first picture is taken by iphone 1 (i.e., low resolution), and now you want to use a better camera (e.g., iphone 3) to generate another photo on the first photo, then there is no way you can improve the quality of the first photo.
A generalization of the Law of Iterated Expectations The way I understand conditional expectation and teach my students is the following: conditional expectation $E[Y|\sigma(X)]$ is a picture taken by a camera with resolution $\sigma(X)$ As mentioned b
3,520
A generalization of the Law of Iterated Expectations
In the Law of Iterated Expectation (LIE), $E\left[E[Y \mid X]\right] = E[Y]$, that inner expectation is a random variable which happens to be a function of $X$, say $g(X)$, and not a function of $Y$. That the expectation of this function of $X$ happens to equal the expectation of $Y$ is a consequence of a LIE. All that this is, hand-wavingly, just the assertion that the average value of $Y$ can be found by averaging the average values of $Y$ under various conditions. In effect, it is all just a direct consequence of the law of total probability. For example, if $X$ and $Y$ are discrete random variables with joint pmf $p_{X,Y}(x,y)$, then $$\begin{align} E[Y] &= \sum_y y\cdot p_Y(y) &\scriptstyle{\text{definition}}\\ &= \sum_y y \cdot \sum_x p_{X,Y}(x,y) &\scriptstyle{\text{write in terms of joint pmf}}\\ &= \sum_y y \cdot \sum_x p_{Y\mid X}(y \mid X=x)\cdot p_X(x) &\scriptstyle{\text{write in terms of conditional pmf}}\\ &= \sum_x p_X(x)\cdot \sum_y y \cdot p_{Y\mid X}(y \mid X=x) &\scriptstyle{\text{interchange order of summation}}\\ &= \sum_x p_X(x)\cdot E[Y \mid X = x] &\scriptstyle{\text{inner sum is conditional expectation}}\\ &= E\left[E[Y\mid X]\right] &\scriptstyle{\text{RV}~E[Y\mid X]~\text{has value}~E[Y\mid X=x]~\text{when}~X=x} \end{align}$$ Notice how that last expectation is with respect to $X$; $E[Y\mid X]$ is a function of $X$, not of $Y$, but nevertheless its mean is the same as the mean of $Y$. The generalized LIE that you are looking at has on the left $E\left[E[Y \mid X, Z] \mid X\right]$ in which the inner expectation is a function $h(X,Z)$ of two random variables $X$ and $Z$. The argument is similar to that outlined above but now we have to show that the random variable $E[Y\mid X]$ equals another random variable. We do this by looking at the value of $E[Y\mid X]$ when $X$ happens to have value $x$. Skipping the explanations, we have that $$\begin{align} E[Y \mid X = x] &= \sum_y y\cdot p_{Y\mid X}(y\mid X = x)\\ &= \sum_y y \cdot \frac{p_{X,Y}(x,y)}{p_X(x)}\\ &= \sum_y y \cdot \frac{\sum_z p_{X,Y,Z}(x,y,z)}{p_X(x)}\\ &= \sum_y y \cdot \frac{\sum_z p_{Y\mid X,Z}(y \mid X=x, Z=z)\cdot p_{X,Z}(x,z)}{p_X(x)}\\ &= \sum_z \frac{p_{X,Z}(x,z)}{p_X(x)}\sum_y y \cdot p_{Y\mid X,Z}(y \mid X=x, Z=z)\\ &= \sum_z p_{Z\mid X}(z \mid X=x)\cdot \sum_y y \cdot p_{Y\mid X,Z}(y \mid X=x, Z=z)\\ &= \sum_z p_{Z\mid X}(z \mid X=x)\cdot E[Y \mid X=x, Z=z)\\ &= E\left[E[Y\mid X,Z]\mid X = x\right] \end{align}$$ Note that the penultimate right side is the formula for the conditional expected value of the random variable $E[Y \mid X, Z]$ (a function of $X$ and $Z$) conditioned on the value of $X$. We are fixing $X$ to have value $x$, multiplying the values of the random variable $E[Y \mid X, Z]$ by the conditional pmf value of $Z$ given $X$, and summing all such terms. Thus, for each value $x$ of the random variable $X$, the value of the random variable $E[Y\mid X]$ (which we noted earlier is a function of $X$, not of $Y$), is the same as the value of the random variable $E\left[E[Y \mid X,Z]\mid X\right]$, that is, these two random variables are equal. Would I LIE to you?
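A small exact check of the derivation above, using a made-up joint pmf on a 2 x 3 x 2 support (the probabilities and the support of Y are arbitrary, chosen only for illustration):
set.seed(2)
p <- array(runif(2 * 3 * 2), dim = c(2, 3, 2))    # dimensions: x, y, z
p <- p / sum(p)                                   # normalise to a valid joint pmf
yvals <- c(1, 2, 5)                               # support of Y (arbitrary)

for (x in 1:2) {
  p_x <- sum(p[x, , ])                            # p_X(x)
  lhs <- 0
  for (z in 1:2) {
    p_z_given_x  <- sum(p[x, , z]) / p_x                        # p_{Z|X}(z | x)
    e_y_given_xz <- sum(yvals * p[x, , z]) / sum(p[x, , z])     # E[Y | X=x, Z=z]
    lhs <- lhs + p_z_given_x * e_y_given_xz                     # builds E[ E[Y|X,Z] | X=x ]
  }
  rhs <- sum(yvals * p[x, , ]) / p_x              # E[Y | X=x] directly
  cat("x =", x, ": lhs =", lhs, ", rhs =", rhs, "\n")
}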
A generalization of the Law of Iterated Expectations
In the Law of Iterated Expectation (LIE), $E\left[E[Y \mid X]\right] = E[Y]$, that inner expectation is a random variable which happens to be a function of $X$, say $g(X)$, and not a function of $Y$.
A generalization of the Law of Iterated Expectations In the Law of Iterated Expectation (LIE), $E\left[E[Y \mid X]\right] = E[Y]$, that inner expectation is a random variable which happens to be a function of $X$, say $g(X)$, and not a function of $Y$. That the expectation of this function of $X$ happens to equal the expectation of $Y$ is a consequence of a LIE. All that this is, hand-wavingly, just the assertion that the average value of $Y$ can be found by averaging the average values of $Y$ under various conditions. In effect, it is all just a direct consequence of the law of total probability. For example, if $X$ and $Y$ are discrete random variables with joint pmf $p_{X,Y}(x,y)$, then $$\begin{align} E[Y] &= \sum_y y\cdot p_Y(y) &\scriptstyle{\text{definition}}\\ &= \sum_y y \cdot \sum_x p_{X,Y}(x,y) &\scriptstyle{\text{write in terms of joint pmf}}\\ &= \sum_y y \cdot \sum_x p_{Y\mid X}(y \mid X=x)\cdot p_X(x) &\scriptstyle{\text{write in terms of conditional pmf}}\\ &= \sum_x p_X(x)\cdot \sum_y y \cdot p_{Y\mid X}(y \mid X=x) &\scriptstyle{\text{interchange order of summation}}\\ &= \sum_x p_X(x)\cdot E[Y \mid X = x] &\scriptstyle{\text{inner sum is conditional expectation}}\\ &= E\left[E[Y\mid X]\right] &\scriptstyle{\text{RV}~E[Y\mid X]~\text{has value}~E[Y\mid X=x]~\text{when}~X=x} \end{align}$$ Notice how that last expectation is with respect to $X$; $E[Y\mid X]$ is a function of $X$, not of $Y$, but nevertheless its mean is the same as the mean of $Y$. The generalized LIE that you are looking at has on the left $E\left[E[Y \mid X, Z] \mid X\right]$ in which the inner expectation is a function $h(X,Z)$ of two random variables $X$ and $Z$. The argument is similar to that outlined above but now we have to show that the random variable $E[Y\mid X]$ equals another random variable. We do this by looking at the value of $E[Y\mid X]$ when $X$ happens to have value $x$. Skipping the explanations, we have that $$\begin{align} E[Y \mid X = x] &= \sum_y y\cdot p_{Y\mid X}(y\mid X = x)\\ &= \sum_y y \cdot \frac{p_{X,Y}(x,y)}{p_X(x)}\\ &= \sum_y y \cdot \frac{\sum_z p_{X,Y,Z}(x,y,z)}{p_X(x)}\\ &= \sum_y y \cdot \frac{\sum_z p_{Y\mid X,Z}(y \mid X=x, Z=z)\cdot p_{X,Z}(x,z)}{p_X(x)}\\ &= \sum_z \frac{p_{X,Z}(x,z)}{p_X(x)}\sum_y y \cdot p_{Y\mid X,Z}(y \mid X=x, Z=z)\\ &= \sum_z p_{Z\mid X}(z \mid X=x)\cdot \sum_y y \cdot p_{Y\mid X,Z}(y \mid X=x, Z=z)\\ &= \sum_z p_{Z\mid X}(z \mid X=x)\cdot E[Y \mid X=x, Z=z)\\ &= E\left[E[Y\mid X,Z]\mid X = x\right] \end{align}$$ Note that the penultimate right side is the formula for the conditional expected value of the random variable $E[Y \mid X, Z]$ (a function of $X$ and $Z$) conditioned on the value of $X$. We are fixing $X$ to have value $x$, multiplying the values of the random variable $E[Y \mid X, Z]$ by the conditional pmf value of $Z$ given $X$, and summing all such terms. Thus, for each value $x$ of the random variable $X$, the value of the random variable $E[Y\mid X]$ (which we noted earlier is a function of $X$, not of $Y$), is the same as the value of the random variable $E\left[E[Y \mid X,Z]\mid X\right]$, that is, these two random variables are equal. Would I LIE to you?
A generalization of the Law of Iterated Expectations In the Law of Iterated Expectation (LIE), $E\left[E[Y \mid X]\right] = E[Y]$, that inner expectation is a random variable which happens to be a function of $X$, say $g(X)$, and not a function of $Y$.
3,521
Wald test for logistic regression
The estimates of the coefficients and the intercepts in logistic regression (and any GLM) are found via maximum-likelihood estimation (MLE). These estimates are denoted with a hat over the parameters, something like $\hat{\theta}$. Our parameter of interest is denoted $\theta_{0}$ and this is usually 0 as we want to test whether the coefficient differs from 0 or not. From asymptotic theory of MLE, we know that the difference between $\hat{\theta}$ and $\theta_{0}$ will be approximately normally distributed with mean 0 (details can be found in any mathematical statistics book such as Larry Wasserman's All of statistics). Recall that standard errors are nothing else than standard deviations of statistics (Sokal and Rohlf write in their book Biometry: "a statistic is any one of many computed or estimated statistical quantities", e.g. the mean, median, standard deviation, correlation coefficient, regression coefficient, ...). Dividing a normal distribution with mean 0 and standard deviation $\sigma$ by its standard deviation will yield the standard normal distribution with mean 0 and standard deviation 1. The Wald statistic is defined as (e.g. Wasserman (2006): All of Statistics, pages 153, 214-215): $$ W=\frac{(\hat{\beta}-\beta_{0})}{\widehat{\operatorname{se}}(\hat{\beta})}\sim \mathcal{N}(0,1) $$ or $$ W^{2}=\frac{(\hat{\beta}-\beta_{0})^2}{\widehat{\operatorname{Var}}(\hat{\beta})}\sim \chi^{2}_{1} $$ The second form arises from the fact that the square of a standard normal distribution is the $\chi^{2}_{1}$-distribution with 1 degree of freedom (the sum of two squared standard normal distributions would be a $\chi^{2}_{2}$-distribution with 2 degrees of freedom and so on). Because the parameter of interest is usually 0 (i.e. $\beta_{0}=0$), the Wald statistic simplifies to $$ W=\frac{\hat{\beta}}{\widehat{\operatorname{se}}(\hat{\beta})}\sim \mathcal{N}(0,1) $$ Which is what you described: The estimate of the coefficient divided by its standard error. When is a $z$ and when a $t$ value used? The choice between a $z$-value or a $t$-value depends on how the standard error of the coefficients has been calculated. Because the Wald statistic is asymptotically distributed as a standard normal distribution, we can use the $z$-score to calculate the $p$-value. When we, in addition to the coefficients, also have to estimate the residual variance, a $t$-value is used instead of the $z$-value. In ordinary least squares (OLS, normal linear regression), the variance-covariance matrix of the coefficients is $\operatorname{Var}[\hat{\beta}|X]=\sigma^2(X'X)^{-1}$ where $\sigma^2$ is the variance of the residuals (which is unknown and has to be estimated from the data) and $X$ is the design matrix. In OLS, the standard errors of the coefficients are the square roots of the diagonal elements of the variance-covariance matrix. Because we don't know $\sigma^2$, we have to replace it by its estimate $\hat{\sigma}^{2}=s^2$, so: $\widehat{\operatorname{se}}(\hat{\beta_{j}})=\sqrt{s^2(X'X)_{jj}^{-1}}$. Now that's the point: Because we have to estimate the variance of the residuals to calculate the standard error of the coefficients, we need to use a $t$-value and the $t$-distribution. In logistic (and poisson) regression, the variance of the residuals is related to the mean. If $Y\sim Bin(n, p)$, the mean is $E(Y)=np$ and the variance is $\operatorname{Var}(Y)=np(1-p)$ so the variance and the mean are related. 
In logistic and Poisson regression, but not in regression with Gaussian errors, we know the expected variance and don't have to estimate it separately. The dispersion parameter $\phi$ indicates if we have more or less than the expected variance. If $\phi=1$ this means we observe the expected amount of variance, whereas $\phi<1$ means that we have less than the expected variance (called underdispersion) and $\phi>1$ means that we have extra variance beyond the expected (called overdispersion). The dispersion parameter in logistic and Poisson regression is fixed at 1, which means that we can use the $z$-score. In other regression types such as normal linear regression, we have to estimate the residual variance and thus, a $t$-value is used for calculating the $p$-values. In R, look at these two examples: Logistic regression mydata <- read.csv("http://www.ats.ucla.edu/stat/data/binary.csv") mydata$rank <- factor(mydata$rank) my.mod <- glm(admit ~ gre + gpa + rank, data = mydata, family = "binomial") summary(my.mod) Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) -3.989979 1.139951 -3.500 0.000465 *** gre 0.002264 0.001094 2.070 0.038465 * gpa 0.804038 0.331819 2.423 0.015388 * rank2 -0.675443 0.316490 -2.134 0.032829 * rank3 -1.340204 0.345306 -3.881 0.000104 *** rank4 -1.551464 0.417832 -3.713 0.000205 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for binomial family taken to be 1) Note that the dispersion parameter is fixed at 1 and thus, we get $z$-values. Normal linear regression (OLS) summary(lm(Fertility~., data=swiss)) Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 66.91518 10.70604 6.250 1.91e-07 *** Agriculture -0.17211 0.07030 -2.448 0.01873 * Examination -0.25801 0.25388 -1.016 0.31546 Education -0.87094 0.18303 -4.758 2.43e-05 *** Catholic 0.10412 0.03526 2.953 0.00519 ** Infant.Mortality 1.07705 0.38172 2.822 0.00734 ** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 7.165 on 41 degrees of freedom Here, we have to estimate the residual variance (denoted as "Residual standard error") and hence, we use $t$-values instead of $z$-values. Of course, in large samples, the $t$-distribution approximates the normal distribution and the difference doesn't matter. Another related post can be found here.
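For a hands-on check of the Wald formula, one can recompute the z column of summary() from coef() and vcov() directly. The sketch below uses a small simulated logistic dataset, since the UCLA file referenced above may no longer be available at that URL:
set.seed(3)
n  <- 500
x1 <- rnorm(n)
x2 <- rnorm(n)
y  <- rbinom(n, 1, plogis(-0.5 + 0.8 * x1 - 0.3 * x2))

fit  <- glm(y ~ x1 + x2, family = binomial)
beta <- coef(fit)                     # the MLEs (hat-beta)
se   <- sqrt(diag(vcov(fit)))         # their estimated standard errors
z    <- beta / se                     # Wald statistic with beta_0 = 0
p    <- 2 * pnorm(-abs(z))            # two-sided p-value from N(0, 1)

cbind(z, p, summary(fit)$coefficients[, 3:4])   # matches the "z value" and "Pr(>|z|)" columns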
Wald test for logistic regression
The estimates of the coefficients and the intercepts in logistic regression (and any GLM) are found via maximum-likelihood estimation (MLE). These estimates are denoted with a hat over the parameters,
Wald test for logistic regression The estimates of the coefficients and the intercepts in logistic regression (and any GLM) are found via maximum-likelihood estimation (MLE). These estimates are denoted with a hat over the parameters, something like $\hat{\theta}$. Our parameter of interest is denoted $\theta_{0}$ and this is usually 0 as we want to test whether the coefficient differs from 0 or not. From asymptotic theory of MLE, we know that the difference between $\hat{\theta}$ and $\theta_{0}$ will be approximately normally distributed with mean 0 (details can be found in any mathematical statistics book such as Larry Wasserman's All of statistics). Recall that standard errors are nothing else than standard deviations of statistics (Sokal and Rohlf write in their book Biometry: "a statistic is any one of many computed or estimated statistical quantities", e.g. the mean, median, standard deviation, correlation coefficient, regression coefficient, ...). Dividing a normal distribution with mean 0 and standard deviation $\sigma$ by its standard deviation will yield the standard normal distribution with mean 0 and standard deviation 1. The Wald statistic is defined as (e.g. Wasserman (2006): All of Statistics, pages 153, 214-215): $$ W=\frac{(\hat{\beta}-\beta_{0})}{\widehat{\operatorname{se}}(\hat{\beta})}\sim \mathcal{N}(0,1) $$ or $$ W^{2}=\frac{(\hat{\beta}-\beta_{0})^2}{\widehat{\operatorname{Var}}(\hat{\beta})}\sim \chi^{2}_{1} $$ The second form arises from the fact that the square of a standard normal distribution is the $\chi^{2}_{1}$-distribution with 1 degree of freedom (the sum of two squared standard normal distributions would be a $\chi^{2}_{2}$-distribution with 2 degrees of freedom and so on). Because the parameter of interest is usually 0 (i.e. $\beta_{0}=0$), the Wald statistic simplifies to $$ W=\frac{\hat{\beta}}{\widehat{\operatorname{se}}(\hat{\beta})}\sim \mathcal{N}(0,1) $$ Which is what you described: The estimate of the coefficient divided by its standard error. When is a $z$ and when a $t$ value used? The choice between a $z$-value or a $t$-value depends on how the standard error of the coefficients has been calculated. Because the Wald statistic is asymptotically distributed as a standard normal distribution, we can use the $z$-score to calculate the $p$-value. When we, in addition to the coefficients, also have to estimate the residual variance, a $t$-value is used instead of the $z$-value. In ordinary least squares (OLS, normal linear regression), the variance-covariance matrix of the coefficients is $\operatorname{Var}[\hat{\beta}|X]=\sigma^2(X'X)^{-1}$ where $\sigma^2$ is the variance of the residuals (which is unknown and has to be estimated from the data) and $X$ is the design matrix. In OLS, the standard errors of the coefficients are the square roots of the diagonal elements of the variance-covariance matrix. Because we don't know $\sigma^2$, we have to replace it by its estimate $\hat{\sigma}^{2}=s^2$, so: $\widehat{\operatorname{se}}(\hat{\beta_{j}})=\sqrt{s^2(X'X)_{jj}^{-1}}$. Now that's the point: Because we have to estimate the variance of the residuals to calculate the standard error of the coefficients, we need to use a $t$-value and the $t$-distribution. In logistic (and poisson) regression, the variance of the residuals is related to the mean. If $Y\sim Bin(n, p)$, the mean is $E(Y)=np$ and the variance is $\operatorname{Var}(Y)=np(1-p)$ so the variance and the mean are related. 
In logistic and poisson regression but not in regression with gaussian errors, we know the expected variance and don't have to estimate it separately. The dispersion parameter $\phi$ indicates if we have more or less than the expected variance. If $\phi=1$ this means we observe the expected amount of variance, whereas $\phi<1$ means that we have less than the expected variance (called underdispersion) and $\phi>1$ means that we have extra variance beyond the expected (called overdispersion). The dispersion parameter in logistic and poisson regression is fixed at 1 which means that we can use the $z$-score. The dispersion parameter . In other regression types such as normal linear regression, we have to estimate the residual variance and thus, a $t$-value is used for calculating the $p$-values. In R, look at these two examples: Logistic regression mydata <- read.csv("http://www.ats.ucla.edu/stat/data/binary.csv") mydata$rank <- factor(mydata$rank) my.mod <- glm(admit ~ gre + gpa + rank, data = mydata, family = "binomial") summary(my.mod) Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) -3.989979 1.139951 -3.500 0.000465 *** gre 0.002264 0.001094 2.070 0.038465 * gpa 0.804038 0.331819 2.423 0.015388 * rank2 -0.675443 0.316490 -2.134 0.032829 * rank3 -1.340204 0.345306 -3.881 0.000104 *** rank4 -1.551464 0.417832 -3.713 0.000205 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for binomial family taken to be 1) Note that the dispersion parameter is fixed at 1 and thus, we get $z$-values. Normal linear regression (OLS) summary(lm(Fertility~., data=swiss)) Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 66.91518 10.70604 6.250 1.91e-07 *** Agriculture -0.17211 0.07030 -2.448 0.01873 * Examination -0.25801 0.25388 -1.016 0.31546 Education -0.87094 0.18303 -4.758 2.43e-05 *** Catholic 0.10412 0.03526 2.953 0.00519 ** Infant.Mortality 1.07705 0.38172 2.822 0.00734 ** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 7.165 on 41 degrees of freedom Here, we have to estimate the residual variance (denoted as "Residual standard error") and hence, we use $t$-values instead of $z$-values. Of course, in large samples, the $t$-distribution approximates the normal distribution and the difference doesn't matter. Another related post can be found here.
Wald test for logistic regression The estimates of the coefficients and the intercepts in logistic regression (and any GLM) are found via maximum-likelihood estimation (MLE). These estimates are denoted with a hat over the parameters,
3,522
Warning in R - Chi-squared approximation may be incorrect
It gave the warning because many of the expected values will be very small and therefore the approximations of p may not be right. In R you can use chisq.test(a, simulate.p.value = TRUE) to compute a simulated (Monte Carlo) p-value instead. However, with such small cell sizes, all estimates will be poor. It might be good to just test pass vs. fail (deleting "no show") either with chi-square or logistic regression. Indeed, since it is pretty clear that the pass/fail grade is a dependent variable, logistic regression might be better.
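A minimal sketch of these suggestions (the 3 x 3 table below is made up, standing in for the poster's object a):
a <- matrix(c(8, 2, 1,
              5, 4, 0,
              3, 1, 2),
            nrow = 3, byrow = TRUE,
            dimnames = list(c("pass", "fail", "noshow"), c("g1", "g2", "g3")))

chisq.test(a, simulate.p.value = TRUE, B = 10000)             # Monte Carlo p-value

chisq.test(a[c("pass", "fail"), ], simulate.p.value = TRUE)   # pass vs. fail only, "noshow" dropped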
Warning in R - Chi-squared approximation may be incorrect
It gave the warning because many of the expected values will be very small and therefore the approximations of p may not be right. In R you can use chisq.test(a, simulate.p.value = TRUE) to use simula
Warning in R - Chi-squared approximation may be incorrect It gave the warning because many of the expected values will be very small and therefore the approximations of p may not be right. In R you can use chisq.test(a, simulate.p.value = TRUE) to use simulate p values. However, with such small cell sizes, all estimates will be poor. It might be good to just test pass vs. fail (deleting "no show") either with chi-square or logistic regression. Indeed, since it is pretty clear that the pass/fail grade is a dependent variable, logistic regression might be better.
Warning in R - Chi-squared approximation may be incorrect It gave the warning because many of the expected values will be very small and therefore the approximations of p may not be right. In R you can use chisq.test(a, simulate.p.value = TRUE) to use simula
3,523
Warning in R - Chi-squared approximation may be incorrect
The issue is that the chi-square approximation to the distribution of the test statistic relies on the counts being roughly normally distributed. If many of the expected counts are very small, the approximation may be poor. Note that the actual distribution of the chi-square statistic for independence in contingency tables is discrete, not continuous. The noshow category will be a big contributor to the problem; one thing to consider is merging noshow and fail. You'll still get the warning but it won't affect the results nearly so much and the distribution should be quite reasonable (the rule that's being applied before the warning is given is too strict). But in any case, if you're willing to condition on the margins (as you do when running Fisher's exact test) you can deal with the problem very easily in R; set the simulate.p.value argument to TRUE; then you aren't reliant on the chi-square approximation to the distribution of the test statistic.
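A sketch of the merging idea, again with a made-up table standing in for the poster's a:
a <- matrix(c(7, 1, 2,
              4, 3, 0,
              2, 1, 3),
            nrow = 3, byrow = TRUE,
            dimnames = list(c("pass", "fail", "noshow"), c("g1", "g2", "g3")))

a2 <- rbind(pass           = a["pass", ],
            fail_or_noshow = a["fail", ] + a["noshow", ])   # merge the sparse categories

chisq.test(a2, simulate.p.value = TRUE)   # no reliance on the chi-square approximation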
Warning in R - Chi-squared approximation may be incorrect
The issue is that the chi-square approximation to the distribution of the test statistic relies on the counts being roughly normally distributed. If many of the expected counts are very small, the app
Warning in R - Chi-squared approximation may be incorrect The issue is that the chi-square approximation to the distribution of the test statistic relies on the counts being roughly normally distributed. If many of the expected counts are very small, the approximation may be poor. Note that the actual distribution of the chi-square statistic for independence in contingency tables is discrete, not continuous. The noshow category will be a big contributor to the problem; one thing to consider is merging noshow and fail. You'll still get the warning but it won't affect the results nearly so much and the distribution should be quite reasonable (the rule that's being applied before the warning is given is too strict). But in any case, if you're willing to condition on the margins (as you do when running Fisher's exact test) you can deal with the problem very easily in R; set the simulate.p.value argument to TRUE; then you aren't reliant on the chi-square approximation to the distribution of the test statistic.
Warning in R - Chi-squared approximation may be incorrect The issue is that the chi-square approximation to the distribution of the test statistic relies on the counts being roughly normally distributed. If many of the expected counts are very small, the app
3,524
Warning in R - Chi-squared approximation may be incorrect
For such small counts, you could use Fisher's exact test: > fisher.test(a) Fisher's Exact Test for Count Data data: a p-value = 0.02618 alternative hypothesis: two.sided
Warning in R - Chi-squared approximation may be incorrect
For such small counts, you could use Fisher's exact test: > fisher.test(a) Fisher's Exact Test for Count Data data: a p-value = 0.02618 alternative hypothesis: two.sided
Warning in R - Chi-squared approximation may be incorrect For such small counts, you could use Fisher's exact test: > fisher.test(a) Fisher's Exact Test for Count Data data: a p-value = 0.02618 alternative hypothesis: two.sided
Warning in R - Chi-squared approximation may be incorrect For such small counts, you could use Fisher's exact test: > fisher.test(a) Fisher's Exact Test for Count Data data: a p-value = 0.02618 alternative hypothesis: two.sided
3,525
Warning in R - Chi-squared approximation may be incorrect
Please see the "Assumptions" section of Pearson's chi-squared test article. In a nutshell, when counts in any of the cells in your table are fewer than 5 then one of the assumptions is broken. I think that's what the error message is referring to. In the article linked you can also find about the correction that can be applied.
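If you want to see which cells are the problem, you can inspect the expected counts directly (the table below is made up, standing in for the original a):
a <- matrix(c(7, 1, 0,
              4, 3, 1,
              2, 0, 2), nrow = 3)

suppressWarnings(chisq.test(a))$expected   # several expected counts fall below 5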
Warning in R - Chi-squared approximation may be incorrect
Please see the "Assumptions" section of Pearson's chi-squared test article. In a nutshell, when counts in any of the cells in your table are fewer than 5 then one of the assumptions is broken. I think
Warning in R - Chi-squared approximation may be incorrect Please see the "Assumptions" section of Pearson's chi-squared test article. In a nutshell, when counts in any of the cells in your table are fewer than 5 then one of the assumptions is broken. I think that's what the error message is referring to. In the article linked you can also find about the correction that can be applied.
Warning in R - Chi-squared approximation may be incorrect Please see the "Assumptions" section of Pearson's chi-squared test article. In a nutshell, when counts in any of the cells in your table are fewer than 5 then one of the assumptions is broken. I think
3,526
Warning in R - Chi-squared approximation may be incorrect
Your main question talks about the sample size, but I see that more than two groups are compared. With more than two groups, a p-value of 0.05 or less from the overall test is still difficult to interpret, because it does not tell you which cells drive the association. Therefore, I am sharing a brief script that I use in such situations: # Load the required packages: library(MASS) # for chisq library(descr) # for crosstable CrossTable(a$exam_result, a$ethnicity, fisher = T, chisq = T, expected = T, prop.c = F, prop.t = F, prop.chisq = F, sresid = T, format = 'SPSS') This code will generate both Pearson's chi-square test and Fisher's exact test. It produces counts as well as proportions of each of the table entries. The standardised residuals (the z-scores reported via sresid) tell you which cells matter: if a residual lies outside the range |1.96|, i.e. is less than -1.96 or greater than 1.96, then that cell is significant at p < 0.05, and its sign indicates whether the association is positive or negative.
Warning in R - Chi-squared approximation may be incorrect
Your main question talks about the sample size, but I see that more than two groups are compared. If the p-value from the test is 0.05 or less, it would be difficult to interpret the results. Therefor
Warning in R - Chi-squared approximation may be incorrect Your main question talks about the sample size, but I see that more than two groups are compared. If the p-value from the test is 0.05 or less, it would be difficult to interpret the results. Therefore, I am sharing a brief script that I use in such situations: # Load the required packages: library(MASS) # for chisq library(descr) # for crosstable CrossTable(a$exam_result, a$ethnicity fisher = T, chisq = T, expected = T, prop.c = F, prop.t = F, prop.chisq = F, sresid = T, format = 'SPSS') This code will generate both Pearson's Chi-square and Fisher's Chi square. It produces counts as well as proportions of each of the table entries. Based on the standardised residuals or z-values scores i.e., sresid If it is outside the range |1.96| i.e., less than -1.96 or greater than 1.96, then it is significant p < 0.05. The sign would then indicate whether positively related or negatively.
Warning in R - Chi-squared approximation may be incorrect Your main question talks about the sample size, but I see that more than two groups are compared. If the p-value from the test is 0.05 or less, it would be difficult to interpret the results. Therefor
3,527
Warning in R - Chi-squared approximation may be incorrect
Your counts per cell are too low. The general rule of thumb is: if a cell count is below 5, use fisher.test. > fisher.test(a) The Fisher exact test works well for both small and large counts, while chisq.test is generally used for larger counts. You have several values that are 0 and all are below 5, so the Fisher test is what you need!
Warning in R - Chi-squared approximation may be incorrect
Your counts per cell are too low. The general rule of thumb is, if the count is bellow 5, use fisher.test. > fisher.test(a) The Fisher exact test extends well to small and large counts, while the ch
Warning in R - Chi-squared approximation may be incorrect Your counts per cell are too low. The general rule of thumb is, if the count is bellow 5, use fisher.test. > fisher.test(a) The Fisher exact test extends well to small and large counts, while the chisq.test is generally used for larger counts. You have several values that are 0 and all are below 5, so the Fisher test is what you need!
Warning in R - Chi-squared approximation may be incorrect Your counts per cell are too low. The general rule of thumb is, if the count is bellow 5, use fisher.test. > fisher.test(a) The Fisher exact test extends well to small and large counts, while the ch
3,528
Cost function of neural network is non-convex?
The cost function of a neural network is in general neither convex nor concave. This means that the matrix of all second partial derivatives (the Hessian) is neither positive semidefinite, nor negative semidefinite. Since the second derivative is a matrix, it's possible that it's neither one nor the other. To make this analogous to one-variable functions, one could say that the cost function is neither shaped like the graph of $x^2$ nor like the graph of $-x^2$. Another example of a non-convex, non-concave function is $\sin(x)$ on $\mathbb{R}$. One of the most striking differences is that $\pm x^2$ has only one extremum, whereas $\sin$ has infinitely many maxima and minima. How does this relate to our neural network? A cost function $J(W,b)$ also has a number of local maxima and minima, as plots of such loss surfaces illustrate. The fact that $J$ has multiple minima can also be interpreted in a nice way. In each layer, you use multiple nodes which are assigned different parameters to make the cost function small. Except for the values of the parameters, these nodes are the same. So you could exchange the parameters of the first node in one layer with those of the second node in the same layer, while accounting for this change in the subsequent layers. You'd end up with a different set of parameters, but the value of the cost function would be unchanged (basically you just moved a node to another place, but kept all the inputs/outputs the same).
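The node-exchange argument can be made concrete with a tiny one-hidden-layer network in R (all numbers below are made up; the point is only that swapping the two hidden units, together with their outgoing weights, leaves the loss unchanged):
set.seed(4)
x <- matrix(rnorm(20), ncol = 2)      # 10 observations, 2 inputs
y <- rnorm(10)
sigmoid <- function(u) 1 / (1 + exp(-u))

loss <- function(W1, b1, w2, b2) {    # squared-error loss of a 2-hidden-unit network
  h    <- sigmoid(x %*% W1 + matrix(b1, nrow(x), 2, byrow = TRUE))
  yhat <- h %*% w2 + b2
  sum((y - yhat)^2)
}

W1 <- matrix(rnorm(4), 2, 2); b1 <- rnorm(2)   # input  -> hidden weights and biases
w2 <- rnorm(2);               b2 <- rnorm(1)   # hidden -> output weights and bias

loss(W1, b1, w2, b2)
loss(W1[, 2:1], b1[2:1], w2[2:1], b2)          # hidden units swapped: identical loss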
Cost function of neural network is non-convex?
The cost function of a neural network is in general neither convex nor concave. This means that the matrix of all second partial derivatives (the Hessian) is neither positive semidefinite, nor negativ
Cost function of neural network is non-convex? The cost function of a neural network is in general neither convex nor concave. This means that the matrix of all second partial derivatives (the Hessian) is neither positive semidefinite, nor negative semidefinite. Since the second derivative is a matrix, it's possible that it's neither one or the other. To make this analogous to one-variable functions, one could say that the cost function is neither shaped like the graph of $x^2$ nor like the graph of $-x^2$. Another example of a non-convex, non-concave function is $\sin(x)$ on $\mathbb{R}$. One of the most striking differences is that $\pm x^2$ has only one extremum, whereas $\sin$ has infinitely many maxima and minima. How does this relate to our neural network? A cost function $J(W,b)$ has also a number of local maxima and minima, as you can see in this picture, for example. The fact that $J$ has multiple minima can also be interpreted in a nice way. In each layer, you use multiple nodes which are assigned different parameters to make the cost function small. Except for the values of the parameters, these nodes are the same. So you could exchange the parameters of the first node in one layer with those of the second node in the same layer, and accounting for this change in the subsequent layers. You'd end up with a different set of parameters, but the value of the cost function can't be distinguished by (basically you just moved a node, to another place, but kept all the inputs/outputs the same).
Cost function of neural network is non-convex? The cost function of a neural network is in general neither convex nor concave. This means that the matrix of all second partial derivatives (the Hessian) is neither positive semidefinite, nor negativ
3,529
Cost function of neural network is non-convex?
If you permute the neurons in the hidden layer and do the same permutation on the weights of the adjacent layers then the loss doesn't change. Hence if there is a non-zero global minimum as a function of weights, then it can't be unique since the permutation of weights gives another minimum. Hence the function is not convex.
Cost function of neural network is non-convex?
If you permute the neurons in the hidden layer and do the same permutation on the weights of the adjacent layers then the loss doesn't change. Hence if there is a non-zero global minimum as a function
Cost function of neural network is non-convex? If you permute the neurons in the hidden layer and do the same permutation on the weights of the adjacent layers then the loss doesn't change. Hence if there is a non-zero global minimum as a function of weights, then it can't be unique since the permutation of weights gives another minimum. Hence the function is not convex.
Cost function of neural network is non-convex? If you permute the neurons in the hidden layer and do the same permutation on the weights of the adjacent layers then the loss doesn't change. Hence if there is a non-zero global minimum as a function
3,530
Cost function of neural network is non-convex?
Whether the objective function is convex or not depends on the details of the network. In the case where multiple local minima exist, you ask whether they're all equivalent. In general, the answer is no, but the chance of finding a local minimum with good generalization performance appears to increase with network size. This paper is of interest: Choromanska et al. (2015). The Loss Surfaces of Multilayer Networks http://arxiv.org/pdf/1412.0233v3.pdf From the introduction: For large-size networks, most local minima are equivalent and yield similar performance on a test set. The probability of finding a "bad" (high value) local minimum is non-zero for small-size networks and decreases quickly with network size. Struggling to find the global minimum on the training set (as opposed to one of the many good local ones) is not useful in practice and may lead to overfitting. They also cite some papers describing how saddle points are a bigger issue than local minima when training large networks.
Cost function of neural network is non-convex?
Whether the objective function is convex or not depends on the details of the network. In the case where multiple local minima exist, you ask whether they're all equivalent. In general, the answer is
Cost function of neural network is non-convex? Whether the objective function is convex or not depends on the details of the network. In the case where multiple local minima exist, you ask whether they're all equivalent. In general, the answer is no, but the chance of finding a local minimum with good generalization performance appears to increase with network size. This paper is of interest: Choromanska et al. (2015). The Loss Surfaces of Multilayer Networks http://arxiv.org/pdf/1412.0233v3.pdf From the introduction: For large-size networks, most local minima are equivalent and yield similar performance on a test set. The probability of finding a "bad" (high value) local minimum is non-zero for small-size networks and decreases quickly with network size. Struggling to find the global minimum on the training set (as opposed to one of the many good local ones) is not useful in practice and may lead to overfitting. They also cite some papers describing how saddle points are a bigger issue than local minima when training large networks.
Cost function of neural network is non-convex? Whether the objective function is convex or not depends on the details of the network. In the case where multiple local minima exist, you ask whether they're all equivalent. In general, the answer is
3,531
Cost function of neural network is non-convex?
Some answers for your updates: Yes, there are in general multiple local minima. (If there was only one, it would be called the global minimum.) The local minima will not necessarily be of the same value. In general, there may be no local minima sharing the same value. No, it's not convex unless it's a one-layer network. In the general multiple-layer case, the parameters of the later layers (the weights and activation parameters) can be highly recursive functions of the parameters in previous layers. Generally, multiplication of decision variables introduced by some recursive structure tends to destroy convexity. Another great example of this is MA(q) models in times series analysis. Side note: I don't really know what you mean by permuting nodes and weights. If the activation function varies across nodes, for instance, and you permute the nodes, you're essentially optimizing a different neural network. That is, while the minima of this permuted network may be the same minima, this is not the same network so you can't make a statement about the multiplicity of the same minima. For an analogy of this in the least-squares framework, you are for example swapping some rows of $y$ and $X$ and saying that since the minimum of $\|y - X\beta\|$ is the same as before that there are as many minimizers as there are permutations.
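A small check of the least-squares analogy in the side note: permuting the rows of $y$ and $X$ together leaves the fitted coefficients, and hence the minimiser, unchanged (simulated data, purely for illustration):
set.seed(5)
X <- matrix(rnorm(30), ncol = 3)
y <- as.vector(X %*% c(1, -2, 0.5) + rnorm(10))
perm <- sample(1:10)

coef(lm(y ~ X - 1))
coef(lm(y[perm] ~ X[perm, ] - 1))   # same coefficient values: not a genuinely new minimiser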
Cost function of neural network is non-convex?
Some answers for your updates: Yes, there are in general multiple local minima. (If there was only one, it would be called the global minimum.) The local minima will not necessarily be of the same va
Cost function of neural network is non-convex? Some answers for your updates: Yes, there are in general multiple local minima. (If there was only one, it would be called the global minimum.) The local minima will not necessarily be of the same value. In general, there may be no local minima sharing the same value. No, it's not convex unless it's a one-layer network. In the general multiple-layer case, the parameters of the later layers (the weights and activation parameters) can be highly recursive functions of the parameters in previous layers. Generally, multiplication of decision variables introduced by some recursive structure tends to destroy convexity. Another great example of this is MA(q) models in times series analysis. Side note: I don't really know what you mean by permuting nodes and weights. If the activation function varies across nodes, for instance, and you permute the nodes, you're essentially optimizing a different neural network. That is, while the minima of this permuted network may be the same minima, this is not the same network so you can't make a statement about the multiplicity of the same minima. For an analogy of this in the least-squares framework, you are for example swapping some rows of $y$ and $X$ and saying that since the minimum of $\|y - X\beta\|$ is the same as before that there are as many minimizers as there are permutations.
Cost function of neural network is non-convex? Some answers for your updates: Yes, there are in general multiple local minima. (If there was only one, it would be called the global minimum.) The local minima will not necessarily be of the same va
3,532
Cost function of neural network is non-convex?
You will have one global minimum if the problem is convex or quasiconvex. About convex "building blocks" used when building neural networks (Computer Science version), I think there are several that can be mentioned: max(0,x) is convex and increasing; log-sum-exp is convex and increasing in each parameter; y = Ax is affine and so convex in A (maybe increasing, maybe decreasing), and likewise affine and so convex in x (maybe increasing, maybe decreasing) - unfortunately it is not convex in (A, x) jointly, because it looks like an indefinite quadratic form. The usual discrete convolution (by "usual" I mean defined with a repeating signal) Y = h*X looks like an affine function of h, or of the variable X, so it is convex in the variable h or in the variable X; in both variables jointly, I don't think so, because when h and X are scalars the convolution reduces to an indefinite quadratic form. max(f,g): if f and g are convex then max(f,g) is also convex. If you substitute one function into another and create compositions, then to stay in the convex room for y = h(g(x), q(x)), h should be convex and should increase (be non-decreasing) in each argument. Why neural networks are non-convex: I think the convolution Y = h*X is not necessarily increasing in h, so if you do not make any extra assumptions about the kernel, you leave convex optimization immediately after you apply a convolution - so composition alone does not keep everything fine. Also, convolution and matrix multiplication are not convex if you consider both parameters jointly, as mentioned above. So there is even a problem with matrix multiplication: it is a non-convex operation in the parameters (A, x); y = Ax can be quasiconvex in (A, x), but extra assumptions have to be taken into account. Please let me know if you disagree or have any extra considerations. The question is also very interesting to me. p.s. Max-pooling, which is downsampling by selecting the max, looks like a modification of elementwise max operations with an affine precomposition (to pull the needed blocks), and it looks convex to me. About the other questions: No, logistic regression is not convex or concave, but it is log-concave. This means that after applying the logarithm you will have a concave function in the explanatory variables, so the max log-likelihood trick works well here. If there is not only one global minimum, nothing can be said about the relation between the local minima - or at least you cannot use convex optimization and its extensions for it, because this area of math is deeply based on a global underestimator. Maybe you have confusion about this, because really people who create such schemes just do "something" and they receive "something"; unfortunately we don't have a perfect mechanism for tackling non-convex optimization (in general). And there are even simpler things besides neural networks which cannot be solved - like non-linear least squares -- https://youtu.be/l1X4tOoIHYo?t=2992 (EE263, L8, 50:10)
Cost function of neural network is non-convex?
You will have one global minimum if problem is convex or quasiconvex. About convex "building blocks" during building neural networks (Computer Science version) I think there are several of them which
Cost function of neural network is non-convex? You will have one global minimum if problem is convex or quasiconvex. About convex "building blocks" during building neural networks (Computer Science version) I think there are several of them which can be mentioned: max(0,x) - convex and increasing log-sum-exp - convex and increasing in each parameter y = Ax is affine and so convex in (A), maybe increasing maybe decreasing. y = Ax is affine and so convex in (x), maybe increasing maybe decreasing. Unfortunately it is not convex in (A, x) because it looks like indefinite quadratic form. Usual math discrete convolution (by "usual" I mean defined with repeating signal) Y=h*X Looks that it is affine function of h or of variable X. So it's a convex in variable h or in variable X. About both variables - I don't think so because when h and X are scalars convolution will reduce to indefinite quadratic form. max(f,g) - if f and g are convex then max(f,g) is also convex. If you substitute one function into another and create compositions then to still in the convex room for y=h(g(x),q(x)), but h should be convex and should increase (non-decrease) in each argument.... Why neural netwoks in non-convex: I think the convolution Y=h*X is not nessesary increasing in h. So if you not use any extra assumptions about kernel you will go out from convex optimization immediatly after you apply convolution. So there is no all fine with composition. Also convolution and matrix multiplication is not convex if consider couple parameters as mentioned above. So there is evean a problems with matrix multiplication: it is non-convex operation in parameters (A,x) y = Ax can be quasiconvex in (A,x) but also extra assumptions should be taken into account. Please let me know if you disagree or have any extra consideration. The question is also very interesting to me. p.s. max-pooling - which is downsamping with selecting max looks like some modification of elementwise max operations with affine precomposition (to pull need blocks) and it looks convex for me. About other questions No, logistic regression is not convex or concave, but it is log-concave. This means that after apply logarithm you will have concave function in explanatory variables. So here max log-likelihood trick is great. If there are not only one global minimum. Nothing can be said about relation between local minimums. Or at least you can not use convex optimization and it's extensions for it, because this area of math is deeply based on global underestimator. Maybe you have confusion about this. Because really people who create such schemas just do "something" and they receive "something". Unfortunately because we don't have perfect mechanism for tackle with non-convex optimization (in general). But there are even more simple things beside Neural Networks - which can not be solved like non-linear least squares -- https://youtu.be/l1X4tOoIHYo?t=2992 (EE263, L8, 50:10)
Cost function of neural network is non-convex? You will have one global minimum if problem is convex or quasiconvex. About convex "building blocks" during building neural networks (Computer Science version) I think there are several of them which
3,533
Cost function of neural network is non-convex?
The composition of multiple layers is what makes the cross-entropy or least-squares loss function of multi-layer neural networks non-convex with respect to the set of all weights and biases. The composition is via multiplications of functions of the weights/biases and that is the main culprit for non-convexity, not the non-linearity of activation functions nor the inherent over-parameterization (re the arguments around permutations). To understand how multiplying parameters can result in non-convexity, consider the function $f(x,y)=xy$. It is convex in $x$ when $y$ is constant and convex in $y$ when $x$ is fixed, but it is not convex in $x$ and $y$ jointly. Here is a plot of this function showing its non-convexity in $x$ and $y$: Another example is $f(x,y)=x^2y^2$, which is non-convex as it is zero on the axes and positive elsewhere, while $x^2$ and $y^2$ are strictly convex.
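A short numerical check (editorial addition; numpy assumed) of the claim above: the Hessian of $f(x,y)=xy$ is indefinite, and the convexity inequality fails along the segment between $(1,-1)$ and $(-1,1)$:

```python
import numpy as np

# Hessian of f(x, y) = x*y is [[0, 1], [1, 0]]: eigenvalues -1 and +1, so the
# quadratic form is indefinite and f is neither convex nor concave jointly,
# even though it is linear (hence convex) in each variable separately.
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(np.linalg.eigvalsh(H))                  # [-1.  1.]

# Direct counterexample to joint convexity: midpoint of (1, -1) and (-1, 1).
f = lambda x, y: x * y
p, q = np.array([1.0, -1.0]), np.array([-1.0, 1.0])
mid = 0.5 * p + 0.5 * q                       # the point (0, 0)
print(f(*mid), 0.5 * f(*p) + 0.5 * f(*q))     # 0.0 vs -1.0: convexity inequality fails
```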
Cost function of neural network is non-convex?
The composition of multiple layers is what makes the cross-entropy or least-squares loss function of multi-layer neural networks non-convex with respect to the set of all weights and biases. The compo
Cost function of neural network is non-convex? The composition of multiple layers is what makes the cross-entropy or least-squares loss function of multi-layer neural networks non-convex with respect to the set of all weights and biases. The composition is via multiplications of functions of the weights/biases and that is the main culprit for non-convexity, not the non-linearity of activation functions nor the inherent over-parameterization (re the arguments around permutations). To understand how multiplying parameters can result in non-convexity, consider the function $f(x,y)=xy$. It is convex in $x$ when $y$ is constant and convex in $y$ when $x$ is fixed, but it is not convex in $x$ and $y$ jointly. Here is a plot of this function showing its non-convexity in $x$ and $y$: Another example is $f(x,y)=x^2y^2$, which is non-convex as it is zero on the axes and positive elsewhere, while $x^2$ and $y^2$ are strictly convex.
Cost function of neural network is non-convex? The composition of multiple layers is what makes the cross-entropy or least-squares loss function of multi-layer neural networks non-convex with respect to the set of all weights and biases. The compo
3,534
Cost function of neural network is non-convex?
By definition, a function $f(x)$ is convex over a convex set $S$ if for all $x, y \in S$ and $t \in [0, 1]$, $tf(x) + (1-t)f(y) \geq f(tx + (1-t)y)$. Think of this as a straight line connecting two points of $y = x^2$ always being above the curve itself. In the general case, a twice-differentiable $f$ can be shown to be convex if its Hessian is positive semidefinite everywhere. Therefore, most of the cost functions used for training neural networks are convex with respect to the net's final output and expected value. This includes MSE and CCE. There are cherry-picked non-convex loss functions that could be used as well, such as the Rosenbrock function, $f(x, y) = 100(x^2-y)^2 + (x-1)^2$. However, I have not seen non-convex functions be used in literature or in practice unless the author is trying to show the goodness of their new update scheme. That, and some $L \leq 1$ regularisation schemes. As for convexity with respect to the intermediary layer weights, unless the output of these intermediaries is non-convex, convexity is still found. Linear layers, convolutions, and activation functions like ReLU are convex, so the loss is also convex with respect to these layers. Generally you just check the convexity of activation functions. The argument that you can permute the weights and get the same loss, and that this shows the loss isn't convex, isn't true in general, and when it is, it's not useful. Consider again $f(x)=x^2$ as a loss function. This is convex. Say the net currently outputs $x=2$ yielding a loss of $4$. But, if the weights are changed so that the net now outputs $x = -2$, the loss is also $4$. This is a convex function that (assuming suitable net expressivity) has a way to get the same loss. Technically the argument also hinged on the assumption that the loss you currently have is a global minimum - but this is an odd assumption, as there's no way of knowing you've attained a global minimum unless the function is cherry-picked or convex. There is also no suitable permutation of nodes to use, so there's no proof that the argument can be carried out.
Cost function of neural network is non-convex?
By definition, a function $f(x)$ is convex over a convex set $S$ if for all $x, y \in S$ and $t \in [0, 1]$, $tf(x) + (1-t)f(y) \geq f(tx + (1-t)y)$. Think of this as a straight line connecting two po
Cost function of neural network is non-convex? By definition, a function $f(x)$ is convex over a convex set $S$ if for all $x, y \in S$ and $t \in [0, 1]$, $tf(x) + (1-t)f(y) \geq f(tx + (1-t)y)$. Think of this as a straight line connecting two points of $y = x^2$ always being above the curve itself. In the general case, $f$ can be shown to be convex if its Hessian is positive definite. Therefore, most of the cost functions used for training neural networks are convex with respect to the net's final output and expected value. This includes MSE, CCE. There are cherry-picked non-convex loss functions that could be used as well, such as the Rosenbrock function, $f(x, y) = 100(x^2-y)^2 + (x-1)^2$. However, I have not seen non-convex functions be used in literature or in practice unless the author is trying to show the goodness of their new update scheme. That an some $L \leq 1$ regularisation schemes. As for convexity with respect to the intermediary layer weights, unless the output of these intermediaries is non-convex, convexity is still found. Linear layers, convolutions, and activation functions like ReLU are convex, so the loss is also convex with respect to these layers. Generally you just check the convexity of activation functions. The argument about how to permute the weights and get the same loss shows that the loss isn't convex isn't true, and when it is, it's not useful. Consider again $f(x, y)=x^2$ as a loss function. This is convex. Say the net currently outputs $x=2$ yielding a loss of $4$. But, if the weights are changed so that the net now outputs $x = -2$, the loss is also $4$. This is a convex function that (assuming suitable net expressivity) has a way to get the same loss. Technically the argument also hinged on the fact that the loss you currently have is a global one - but this is an odd assumption as there's no way of knowing you've attained a global loss unless the function is cherry-picked or convex. There is also no suitable permutation of nodes to use, so there's no proof that the argument can be carried out.
Cost function of neural network is non-convex? By definition, a function $f(x)$ is convex over a convex set $S$ if for all $x, y \in S$ and $t \in [0, 1]$, $tf(x) + (1-t)f(y) \geq f(tx + (1-t)y)$. Think of this as a straight line connecting two po
3,535
Is chi-squared always a one-sided test?
The chi-squared test is essentially always a one-sided test. Here is a loose way to think about it: the chi-squared test is basically a 'goodness of fit' test. Sometimes it is explicitly referred to as such, but even when it's not, it is still often in essence a goodness-of-fit test. For example, the chi-squared test of independence on a 2 x 2 frequency table is (sort of) a test of goodness of fit of the first row (column) to the distribution specified by the second row (column), and vice versa, simultaneously. Thus, when the realized chi-squared value is way out on the right tail of its distribution, it indicates a poor fit, and if it is far enough, relative to some pre-specified threshold, we might conclude that it is so poor that we don't believe the data are from that reference distribution. If we were to use the chi-squared test as a two-sided test, we would also be worried if the statistic were too far into the left side of the chi-squared distribution. This would mean that we are worried the fit might be too good. This is simply not something we are typically worried about. (As a historical side-note, this is related to the controversy of whether Mendel fudged his data. The idea was that his data were too good to be true. See here for more info if you're curious.)
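A hedged illustration (editorial addition; scipy assumed, the counts are made up) of the "right tail only" point: scipy's goodness-of-fit routine already reports the upper-tail p-value, while a Mendel-style "too good to be true" check would instead look at the left tail:

```python
from scipy import stats

observed = [18, 22, 30, 30]               # hypothetical counts, expected = 25 each
chi2, p = stats.chisquare(observed)       # H0: all four categories equally likely
print(chi2, p)                            # p is P(Chi2_3 >= chi2), i.e. the right tail
print(stats.chi2.sf(chi2, df=3))          # same number, computed explicitly

# A "fit too good to be true" (Mendel-style) check would look at the LEFT tail:
print(stats.chi2.cdf(chi2, df=3))         # P(Chi2_3 <= chi2)
```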
Is chi-squared always a one-sided test?
The chi-squared test is essentially always a one-sided test. Here is a loose way to think about it: the chi-squared test is basically a 'goodness of fit' test. Sometimes it is explicitly referred to
Is chi-squared always a one-sided test? The chi-squared test is essentially always a one-sided test. Here is a loose way to think about it: the chi-squared test is basically a 'goodness of fit' test. Sometimes it is explicitly referred to as such, but even when it's not, it is still often in essence a goodness of fit. For example, the chi-squared test of independence on a 2 x 2 frequency table is (sort of) a test of goodness of fit of the first row (column) to the distribution specified by the second row (column), and vice versa, simultaneously. Thus, when the realized chi-squared value is way out on the right tail of it's distribution, it indicates a poor fit, and if it is far enough, relative to some pre-specified threshold, we might conclude that it is so poor that we don't believe the data are from that reference distribution. If we were to use the chi-squared test as a two-sided test, we would also be worried if the statistic were too far into the left side of the chi-squared distribution. This would mean that we are worried the fit might be too good. This is simply not something we are typically worried about. (As a historical side-note, this is related to the controversy of whether Mendel fudged his data. The idea was that his data were too good to be true. See here for more info if you're curious.)
Is chi-squared always a one-sided test? The chi-squared test is essentially always a one-sided test. Here is a loose way to think about it: the chi-squared test is basically a 'goodness of fit' test. Sometimes it is explicitly referred to
3,536
Is chi-squared always a one-sided test?
Is chi-squared always a one-sided test? That really depends on two things: (1) what hypothesis is being tested. If you're testing the variance of normal data against a specified value, it's quite possible to be dealing with the upper or lower tail of the chi-square (one-tailed), or with both tails of the distribution. We have to remember that $\frac{(O-E)^2}{E}$ type statistics are not the only chi-square tests in town! (2) whether people are talking about the alternative hypothesis being one- or two-sided (because some people use 'two-tailed' to refer to a two-sided alternative, irrespective of what happens with the sampling distribution of the statistic). This can sometimes be confusing. So for example, if we're looking at a two-sample proportions test, someone might in the null write that the two proportions are equal and in the alternative write that $\pi_1 \neq \pi_2$ and then speak of it as 'two-tailed', but test it using a chi-square rather than a z-test, and so only look at the upper tail of the distribution of the test statistic (so it's two tailed in terms of the distribution of the difference in sample proportions, but one tailed in terms of the distribution of the chi-square statistic obtained from that -- in much the same way that if you make your t-test statistic $|T|$, you're only looking at one tail in the distribution of $|T|$). Which is to say, we have to be very careful about what we mean to cover by the use of 'chi-square test' and precise about what we mean when we say 'one-tailed' vs 'two-tailed'. In some circumstances (the two I mentioned; there may be more), it may make perfect sense to call it two-tailed, or it may be reasonable to call it two-tailed if you accept some looseness of the use of terminology. It may be a reasonable statement to say it's only ever one-tailed if you restrict discussion to particular kinds of chi-square tests.
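To make the "two-sided in effect, one-tailed in the chi-square distribution" point concrete, here is an editorial sketch (scipy assumed, counts hypothetical) showing that for a 2x2 table the Pearson chi-square statistic equals the squared two-sample z statistic, and its upper-tail p-value equals the two-sided z p-value:

```python
import numpy as np
from scipy import stats

x1, n1, x2, n2 = 40, 100, 55, 100                 # hypothetical successes / trials
p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)
z = (p1 - p2) / np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
p_two_sided_z = 2 * stats.norm.sf(abs(z))         # two-sided z-test of proportions

table = np.array([[x1, n1 - x1], [x2, n2 - x2]])
chi2, p_chi2, dof, _ = stats.chi2_contingency(table, correction=False)

print(z**2, chi2)                                  # identical statistics
print(p_two_sided_z, p_chi2)                       # identical p-values (upper chi2 tail)
```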
Is chi-squared always a one-sided test?
Is chi-squared always a one-sided test? That really depends on two things: what hypothesis is being tested. If you're testing variance of normal data against a specified value, it's quite possible t
Is chi-squared always a one-sided test? Is chi-squared always a one-sided test? That really depends on two things: what hypothesis is being tested. If you're testing variance of normal data against a specified value, it's quite possible to be dealing with the upper or lower tails of the chi-square (one-tailed), or both tails of the distribution. We have to remember that $\frac{(O-E)^2} E$ type statistics are not the only chi-square tests in town! whether people are talking about the alternative hypothesis being one- or two-sided (because some people use 'two-tailed' to refer to a two-sided alternative, irrespective of what happens with the sampling distribution of the statistic. This can sometimes be confusing. So for example, if we're looking at a two-sample proportions test, someone might in the null write that the two proportions are equal and in the alternative write that $\pi_1 \neq \pi_2$ and then speak of it as 'two-tailed', but test it using a chi-square rather than a z-test, and so only look at the upper tail of the distribution of the test statistic (so it's two tailed in terms of the distribution of the difference in sample proportions, but one tailed in terms of the distribution of the chi-square statistic obtained from that -- in much the same way that if you make your t-test statistc $|T|$, you're only looking at one tail in the distribution of $|T|$). Which is to say, we have to be very careful about what we mean to cover by the use of 'chi-square test' and precise about what we mean when we say 'one-tailed' vs 'two-tailed'. In some circumstances (two I mentioned; there may be more), it may make perfect sense to call it two-tailed, or it may be reasonable to call it two-tailed if you accept some looseness of the use of terminology. It may be a reasonable statement to say it's only ever one-tailed if you restrict discussion to particular kinds of chi-square tests.
Is chi-squared always a one-sided test? Is chi-squared always a one-sided test? That really depends on two things: what hypothesis is being tested. If you're testing variance of normal data against a specified value, it's quite possible t
3,537
Is chi-squared always a one-sided test?
The chi-square test $(n-1)s^2/\sigma^2$ of the hypothesis that the variance is $\sigma^2$ can be either one- or two-tailed in exactly the same sense that the t-test $(m-\mu)\sqrt{n}/s$ of the hypothesis that the mean is $\mu$ can be either one- or two-tailed.
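An editorial sketch (scipy assumed, the data are hypothetical) of the one- and two-tailed versions of this variance test:

```python
import numpy as np
from scipy import stats

x = np.array([5.1, 4.8, 5.6, 5.0, 4.7, 5.3, 5.9, 4.6])   # hypothetical sample
sigma0_sq = 0.10                                          # H0: population variance = 0.10
n = len(x)
stat = (n - 1) * x.var(ddof=1) / sigma0_sq                # chi-square test statistic

p_upper = stats.chi2.sf(stat, df=n - 1)                   # H1: variance > 0.10
p_lower = stats.chi2.cdf(stat, df=n - 1)                  # H1: variance < 0.10
p_two   = 2 * min(p_upper, p_lower)                       # H1: variance != 0.10
print(stat, p_upper, p_lower, p_two)
```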
Is chi-squared always a one-sided test?
The chi-square test $(n-1)s^2/\sigma^2$ of the hypothesis that the variance is $\sigma^2$ can be either one- or two-tailed in exactly the same sense that the t-test $(m-\mu)\sqrt{n}/s$ of the hypothes
Is chi-squared always a one-sided test? The chi-square test $(n-1)s^2/\sigma^2$ of the hypothesis that the variance is $\sigma^2$ can be either one- or two-tailed in exactly the same sense that the t-test $(m-\mu)\sqrt{n}/s$ of the hypothesis that the mean is $\mu$ can be either one- or two-tailed.
Is chi-squared always a one-sided test? The chi-square test $(n-1)s^2/\sigma^2$ of the hypothesis that the variance is $\sigma^2$ can be either one- or two-tailed in exactly the same sense that the t-test $(m-\mu)\sqrt{n}/s$ of the hypothes
3,538
Is chi-squared always a one-sided test?
I have also had some problems coming to grips with this question, but after some experimentation it seemed as if my problem was simply in how the tests are named. In SPSS, as an example, a chi-square test can be added to a 2x2 table. There, there are two columns for p-values, one for the "Pearson Chi-Square", "Continuity Correction" etc., and another pair of columns for Fisher's exact test, where there is one column for a 2-sided test and another for a 1-sided test. I first thought the 1- and 2-sided labels denoted a 1- or 2-sided version of the chi-square test, which seemed odd. It turned out, however, that this denotes the underlying formulation of the alternative hypothesis in the test of a difference between proportions, i.e. the z-test. So the often reasonable 2-sided test of proportions is achieved in SPSS with the chi-square test, where the chi-square measure is compared with a value in the (1-sided) upper tail of the distribution. I guess this is what other responses to the original question have already pointed out, but it took me some time to realize just that. By the way, the same kind of formulation is used in openepi.com and possibly other systems as well.
Is chi-squared always a one-sided test?
I also have had some problems to come to grips with this question as well, but after some experimentation it seemed as if my problem was simply in how the tests are named. In SPSS as an example, a 2x
Is chi-squared always a one-sided test? I also have had some problems to come to grips with this question as well, but after some experimentation it seemed as if my problem was simply in how the tests are named. In SPSS as an example, a 2x2 table can have an addition of a chisquare-test. There there are two columns for p-values, one for the "Pearson Chi-Sqare", "Continuity Correction" etc, and another pair of columns for Fisher's exact test where there are one column for a 2-sided test and another for a 1-sided test. I first thought the 1- and 2-sides denoted a 1- or 2-sided version of the chisquare test, which seemed odd. It turned out however that this denotes the underlying formulation of the alternate hypothesis in the test of a difference between proportions, i e the z-test. So the often reasonable 2-sided test of proportions is achieved in SPSS with the chisquare test, where the chisquare measure is compared with a value in the (1-sided) upper tail of the distribution. Guess this is what other responses to the original question already have pointed out, but it took me some time to realize just that. By the way, the same kind of formulation is used in openepi.com and possibly other systems as well.
Is chi-squared always a one-sided test? I also have had some problems to come to grips with this question as well, but after some experimentation it seemed as if my problem was simply in how the tests are named. In SPSS as an example, a 2x
3,539
Is chi-squared always a one-sided test?
@gung's answer is correct and is the way discussion of $\chi^2$ should be read. However, confusion may arise from another reading: It would be easy to interpret a $\chi^2$ as 'two-sided' in the sense that the test statistic is typically composed of a sum of squared differences from both sides of an original distribution. This reading would be to confuse how the test statistic was generated with which tails of the test statistic are being looked at.
Is chi-squared always a one-sided test?
@gung's answer is correct and is the way discussion of $\chi^2$ should be read. However, confusion may arise from another reading: It would be easy to interpret a $\chi^2$ as 'two-sided' in the sense
Is chi-squared always a one-sided test? @gung's answer is correct and is the way discussion of $\chi^2$ should be read. However, confusion may arise from another reading: It would be easy to interpret a $\chi^2$ as 'two-sided' in the sense that the test statistic is typically composed of a sum of squared differences from both sides of an original distribution. This reading would be to confuse how the test statistic was generated with which tails of the test statistic are being looked at.
Is chi-squared always a one-sided test? @gung's answer is correct and is the way discussion of $\chi^2$ should be read. However, confusion may arise from another reading: It would be easy to interpret a $\chi^2$ as 'two-sided' in the sense
3,540
Is chi-squared always a one-sided test?
The $\chi^2$ test of variance can be one- or two-sided: the test statistic is $(n-1)\frac{s^2}{\sigma^2}$, where $s$ is the sample standard deviation, and the null hypothesis is that the population standard deviation equals $\sigma$ (a reference value). The alternative hypothesis could be (a) greater than $\sigma$, (b) less than $\sigma$, or (c) not equal to $\sigma$. The p-value calculation has to take the asymmetry of the $\chi^2$ distribution into account.
Is chi-squared always a one-sided test?
$\chi^2$ test of variance can be one or two sided: The test statistic is $(n-1)\frac{s^2}{\sigma^2}$, and the null hypothesis is: s (sample deviation)= $\sigma$ (a reference value). The alternative hy
Is chi-squared always a one-sided test? $\chi^2$ test of variance can be one or two sided: The test statistic is $(n-1)\frac{s^2}{\sigma^2}$, and the null hypothesis is: s (sample deviation)= $\sigma$ (a reference value). The alternative hypothesis could be: (a) $ s> \sigma$, (b) $s < \sigma$, (c) $s \neq \sigma$. p-value caculation involves the asymmetry of the distribution.
Is chi-squared always a one-sided test? $\chi^2$ test of variance can be one or two sided: The test statistic is $(n-1)\frac{s^2}{\sigma^2}$, and the null hypothesis is: s (sample deviation)= $\sigma$ (a reference value). The alternative hy
3,541
Is chi-squared always a one-sided test?
The $\chi^2$ and F tests are one-sided tests because we never have negative values of $\chi^2$ and F. For $\chi^2$, the squared differences between observed and expected counts are divided by the expected counts and summed, so chi-square is always non-negative, and it is close to zero when there is no difference. Thus, this test is always a right-sided one-sided test. The explanation for the F test is similar. For the F test, we compare the between-group variance to the within-group variance (the between-group mean square to the mean square error, $\frac{SS_w}{df_w}$). If the between and within mean squares are equal we get an F value of 1. Since it is essentially a ratio of sums of squares, the value never becomes negative. Thus, we don't have a left-sided test, and the F test is always a right-sided one-sided test. Check the figures of the $\chi^2$ and F distributions: they are always positive. For both tests, you are looking at whether the calculated statistic lies to the right of the critical value.
Is chi-squared always a one-sided test?
The $\chi^2$ and F tests are one sided tests because we never have negative values of $\chi^2$ and F. For $\chi^2$, the sum of the difference of observed and expected squared is divided by the expecte
Is chi-squared always a one-sided test? The $\chi^2$ and F tests are one sided tests because we never have negative values of $\chi^2$ and F. For $\chi^2$, the sum of the difference of observed and expected squared is divided by the expected ( a proportion), thus chi-square is always a positive number or it may be close to zero on the right side when there is no difference. Thus, this test is always a right sided one-sided test. The explanation for F test is similar. For the F test, we compare between group variance to sum of within group variances ( mean square error to $\frac{SSw}{dfw}$. If the between and within mean sum of squares are equal we get an F value of 1. Since it is essentially the ratio of sum of squares, the value never becomes a negative number. Thus, we don't have a left sided test and F test is always a right sided one sided test. Check the figures of $\chi^2$ and F distributions, they are always positive.For both tests, you are looking at whether the calculated statistic lies to the right of the critical value.
Is chi-squared always a one-sided test? The $\chi^2$ and F tests are one sided tests because we never have negative values of $\chi^2$ and F. For $\chi^2$, the sum of the difference of observed and expected squared is divided by the expecte
3,542
Are mean normalization and feature scaling needed for k-means clustering?
If your variables are of incomparable units (e.g. height in cm and weight in kg) then you should standardize variables, of course. Even if variables are of the same units but show quite different variances it is still a good idea to standardize before K-means. You see, K-means clustering is "isotropic" in all directions of space and therefore tends to produce more or less round (rather than elongated) clusters. In this situation leaving variances unequal is equivalent to putting more weight on variables with smaller variance, so clusters will tend to be separated along variables with greater variance. A different thing also worth remembering is that K-means clustering results are potentially sensitive to the order of objects in the data set$^1$. A justified practice would be to run the analysis several times, randomizing the order of objects; then average the cluster centres of the corresponding/same clusters across those runs$^2$ and input those centres as the initial ones for one final run of the analysis. Here is some general reasoning about the issue of standardizing features in cluster or other multivariate analysis. $^1$ Specifically, (1) some methods of centre initialization are sensitive to case order; (2) even when the initialization method isn't sensitive, results might sometimes depend on the order in which the initial centres are introduced to the program (in particular, when there are tied, equal distances within the data); (3) the so-called running-means version of the k-means algorithm is naturally sensitive to case order (in this version - which is not often used apart from maybe online clustering - recalculation of centroids takes place after each individual case is re-assigned to another cluster). $^2$ In practice, which clusters from different runs correspond is often immediately seen from their relative closeness. When not easily seen, correspondence can be established by a hierarchical clustering done among the centres or by a matching algorithm such as the Hungarian algorithm. But, to remark, if the correspondence is so vague that it almost vanishes, then the data either had no cluster structure detectable by K-means, or K is very wrong.
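A small simulation (editorial addition; scikit-learn assumed, data synthetic) of the effect described above: with one informative small-variance feature and one noisy large-variance feature, unstandardized K-means follows the large variance, while standardized K-means recovers the groups:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# two groups separated along feature 0 (small scale); feature 1 is pure large-scale noise
a = np.column_stack([rng.normal(0, 1, 100), rng.normal(0, 1000, 100)])
b = np.column_stack([rng.normal(5, 1, 100), rng.normal(0, 1000, 100)])
X = np.vstack([a, b])
labels_true = np.array([0] * 100 + [1] * 100)

for name, data in [("raw", X), ("standardized", StandardScaler().fit_transform(X))]:
    pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
    agreement = max(np.mean(pred == labels_true), np.mean(pred != labels_true))
    print(name, agreement)   # raw: near chance level; standardized: close to 1.0
```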
Are mean normalization and feature scaling needed for k-means clustering?
If your variables are of incomparable units (e.g. height in cm and weight in kg) then you should standardize variables, of course. Even if variables are of the same units but show quite different vari
Are mean normalization and feature scaling needed for k-means clustering? If your variables are of incomparable units (e.g. height in cm and weight in kg) then you should standardize variables, of course. Even if variables are of the same units but show quite different variances it is still a good idea to standardize before K-means. You see, K-means clustering is "isotropic" in all directions of space and therefore tends to produce more or less round (rather than elongated) clusters. In this situation leaving variances unequal is equivalent to putting more weight on variables with smaller variance, so clusters will tend to be separated along variables with greater variance. A different thing also worth to remind is that K-means clustering results are potentially sensitive to the order of objects in the data set$^1$. A justified practice would be to run the analysis several times, randomizing objects order; then average the cluster centres of the correpondent/same clusters between those runs$^2$ and input the centres as initial ones for one final run of the analysis. Here is some general reasoning about the issue of standardizing features in cluster or other multivariate analysis. $^1$ Specifically, (1) some methods of centres initialization are sensitive to case order; (2) even when the initialization method isn't sensitive, results might depend sometimes on the order the initial centres are introduced to the program by (in particular, when there are tied, equal distances within data); (3) so-called running means version of k-means algorithm is naturaly sensitive to case order (in this version - which is not often used apart from maybe online clustering - recalculation of centroids take place after each individual case is re-asssigned to another cluster). $^2$ In practice, which clusters from different runs correspond - is often immediately seen by their relative closeness. When not easily seen, correspondence can be established by a hierarchical clustering done among the centres or by a matching algorithm such as Hungarian. But, to remark, if the correspondence is so vague that it almost vanishes, then the data either had no cluster structure detectable by K-means, or K is very wrong.
Are mean normalization and feature scaling needed for k-means clustering? If your variables are of incomparable units (e.g. height in cm and weight in kg) then you should standardize variables, of course. Even if variables are of the same units but show quite different vari
3,543
Are mean normalization and feature scaling needed for k-means clustering?
Depends on your data, I guess. If you would like trends in your data to cluster together regardless of the magnitude, you should center. E.g., say you have some gene expression profiles and want to see trends in gene expression: without mean centering, your low-expression genes will cluster together and away from high-expression genes, regardless of trends. Centering makes genes (both high- and low-expressed) with like expression patterns cluster together.
Are mean normalization and feature scaling needed for k-means clustering?
Depends on your data I guess. If you would like trends in your data to cluster together regardless of the magnitude, you should center. eg. say you have some gene expression profile, and want to see t
Are mean normalization and feature scaling needed for k-means clustering? Depends on your data I guess. If you would like trends in your data to cluster together regardless of the magnitude, you should center. eg. say you have some gene expression profile, and want to see trends in gene expression, then without mean centering, your low expression genes will cluster together and away from high expression genes, regardless of trends. Centering makes genes (both high and low expressed) with like expression patterns cluster together.
Are mean normalization and feature scaling needed for k-means clustering? Depends on your data I guess. If you would like trends in your data to cluster together regardless of the magnitude, you should center. eg. say you have some gene expression profile, and want to see t
3,544
Why only three partitions? (training, validation, test)
First, I think you're mistaken about what the three partitions do. You don't make any choices based on the test data. Your algorithms adjust their parameters based on the training data. You then run them on the validation data to compare your algorithms (and their trained parameters) and decide on a winner. You then run the winner on your test data to give you a forecast of how well it will do in the real world. You don't validate on the training data because that would overfit your models. You don't stop at the validation step's winner's score because you've iteratively been adjusting things to get a winner in the validation step, and so you need an independent test (that you haven't specifically been adjusting towards) to give you an idea of how well you'll do outside of the current arena. Second, I would think that one limiting factor here is how much data you have. Most of the time, we don't even want to split the data into fixed partitions at all, hence CV.
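An editorial sketch (scikit-learn assumed; the models and split sizes are arbitrary) of the train / validation / test workflow just described: fit on the training set, pick the winner on the validation set, and touch the test set exactly once for the final forecast:

```python
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

candidates = {"logreg": LogisticRegression(max_iter=1000),
              "rf": RandomForestClassifier(random_state=0)}
val_scores = {name: m.fit(X_train, y_train).score(X_val, y_val)   # selection on validation
              for name, m in candidates.items()}
winner = max(val_scores, key=val_scores.get)
print(val_scores)
print(winner, candidates[winner].score(X_test, y_test))           # reported once, at the end
```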
Why only three partitions? (training, validation, test)
First, I think you're mistaken about what the three partitions do. You don't make any choices based on the test data. Your algorithms adjust their parameters based on the training data. You then run t
Why only three partitions? (training, validation, test) First, I think you're mistaken about what the three partitions do. You don't make any choices based on the test data. Your algorithms adjust their parameters based on the training data. You then run them on the validation data to compare your algorithms (and their trained parameters) and decide on a winner. You then run the winner on your test data to give you a forecast of how well it will do in the real world. You don't validate on the training data because that would overfit your models. You don't stop at the validation step's winner's score because you've iteratively been adjusting things to get a winner in the validation step, and so you need an independent test (that you haven't specifically been adjusting towards) to give you an idea of how well you'll do outside of the current arena. Second, I would think that one limiting factor here is how much data you have. Most of the time, we don't even want to split the data into fixed partitions at all, hence CV.
Why only three partitions? (training, validation, test) First, I think you're mistaken about what the three partitions do. You don't make any choices based on the test data. Your algorithms adjust their parameters based on the training data. You then run t
3,545
Why only three partitions? (training, validation, test)
This is an interesting question, and I found the answer from @Wayne helpful. From my understanding, dividing the dataset into different partitions depends on the purpose of the author and the requirement of the model in the real-world application. Normally we have two datasets: training and testing. The training one is used to find the parameters of the models, or to fit the models. The testing one is used to evaluate the performance of the model on unseen data (or real-world data). If we just do one step in training, it is obvious that there is a training and a testing (or validation) process. However, doing it this way may raise the overfitting problem, since the model is trained on a single dataset a single time. This may lead to instability of the model in real-world problems. One way to solve this issue is to cross-validate (CV) the model on the training dataset. That means we divide the training dataset into different folds, keeping one fold for testing the model that is trained on the other folds. The winner is now the one which gives the minimum loss (based on our own objective function) over the whole CV process. This way, we can minimize the chance of overfitting in the training process and select the right winner. The test set is again used to evaluate the winner on the unseen data.
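An editorial sketch (scikit-learn assumed; the hyper-parameter grid is arbitrary) of this idea: cross-validation inside the training set for model selection, followed by a single test-set evaluation of the chosen model:

```python
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=600, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1)

# candidate hyper-parameter values, compared by 5-fold CV on the training set only
cv_means = {C: np.mean(cross_val_score(SVC(C=C), X_train, y_train, cv=5))
            for C in [0.1, 1.0, 10.0]}
best_C = max(cv_means, key=cv_means.get)
print(cv_means, best_C)

final = SVC(C=best_C).fit(X_train, y_train)
print(final.score(X_test, y_test))       # test set is touched once, at the very end
```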
Why only three partitions? (training, validation, test)
This is interesting question, and I found it is helpful with the answer from @Wayne. From my understanding, dividing the dataset into different partition depends on the purpose of the author, and the
Why only three partitions? (training, validation, test) This is interesting question, and I found it is helpful with the answer from @Wayne. From my understanding, dividing the dataset into different partition depends on the purpose of the author, and the requirement of the model in real world application. Normally we have two datsets: training and testing. The training one is used to find the parameters of the models, or to fit the models. The testing one is used to evaluate the performance of the model in an unseen data (or real world data). If we just do one step in training, it is obvious that there are a training and a testing (or validating) process. However, doing this way, it may raise the over-fitting problem when the model is trained with one dataset, onetime. This may lead to instability of the model in the real world problem. One way to solve this issue is to cross-validate (CV) the model in the training dataset. That means, we divide the training datset into different folds, keep one fold for testing the model which is trained with other folds. The winner is now the one which give minimum loss (based on our own objective function) in whole CV process. By doing this way, we can make sure that we minimize the chance of over fitting in training process, and select the right winner. The test set is again used to evaluate the winner in the unseen data.
Why only three partitions? (training, validation, test) This is interesting question, and I found it is helpful with the answer from @Wayne. From my understanding, dividing the dataset into different partition depends on the purpose of the author, and the
3,546
Won't highly-correlated variables in random forest distort accuracy and feature-selection?
That is correct, but therefore in most of those sub-samplings where variable Y was available it would produce the best possible split. You may try to increase mtry, to make sure this happens more often. You may try recursive correlation pruning, that is, in turn removing one of the two variables which together have the highest correlation. A sensible threshold to stop this pruning could be that every pairwise (Pearson) correlation satisfies $R^2<.7$. You may try recursive variable importance pruning, that is, in turn removing, e.g., the 20% of variables with the lowest variable importance. Try e.g. rfcv from the randomForest package. You may try some decomposition/aggregation of your redundant variables.
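A rough sketch (editorial addition; pandas/numpy assumed, and the $R^2<.7$ stopping rule and the tie-breaking choice of which variable to drop are my own) of the recursive correlation pruning suggested above:

```python
import numpy as np
import pandas as pd

def prune_correlated(df: pd.DataFrame, r2_threshold: float = 0.7) -> pd.DataFrame:
    """Repeatedly drop one member of the most correlated pair until all pairwise R^2 < threshold."""
    df = df.copy()
    while df.shape[1] > 1:
        r2 = (df.corr() ** 2).mask(np.eye(df.shape[1], dtype=bool), 0.0)  # zero the diagonal
        worst = r2.stack().idxmax()                 # most correlated pair of columns
        if r2.loc[worst] < r2_threshold:
            break
        # drop whichever member of the pair is, on average, more redundant overall
        drop = max(worst, key=lambda c: r2[c].mean())
        df = df.drop(columns=drop)
    return df

rng = np.random.default_rng(0)
x = rng.normal(size=500)
data = pd.DataFrame({"x": x, "x_copy": x + rng.normal(0, 0.1, 500),
                     "z": rng.normal(size=500)})
print(prune_correlated(data).columns.tolist())      # one of x / x_copy removed, z kept
```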
Won't highly-correlated variables in random forest distort accuracy and feature-selection?
That is correct, but therefore in most of those sub-samplings where variable Y was available it would produce the best possible split. You may try to increase mtry, to make sure this happens more ofte
Won't highly-correlated variables in random forest distort accuracy and feature-selection? That is correct, but therefore in most of those sub-samplings where variable Y was available it would produce the best possible split. You may try to increase mtry, to make sure this happens more often. You may try either recursive correlation pruning, that is in turns to remove one of two variables whom together have the highest correlation. A sensible threshold to stop this pruning could be that any pair of correlations(pearson) is lower than $R^2<.7$ You may try recursive variable importance pruning, that is in turns to remove, e.g. 20% with lowest variable importance. Try e.g. rfcv from randomForest package. You may try some decomposition/aggregation of your redundant variables.
Won't highly-correlated variables in random forest distort accuracy and feature-selection? That is correct, but therefore in most of those sub-samplings where variable Y was available it would produce the best possible split. You may try to increase mtry, to make sure this happens more ofte
3,547
Won't highly-correlated variables in random forest distort accuracy and feature-selection?
Old thread, but I don't agree with a blanket statement that collinearity is not an issue with random forest models. When the dataset has two (or more) correlated features, then from the point of view of the model, any of these correlated features can be used as the predictor, with no concrete preference of one over the others. However once one of them is used, the importance of the others is significantly reduced, since effectively the impurity they can remove has already been removed by the first feature. As a consequence, they will have a lower reported importance. This is not an issue when we want to use feature selection to reduce overfitting, since it makes sense to remove features that are mostly duplicated by other features. But when interpreting the data, it can lead to the incorrect conclusion that one of the variables is a strong predictor while the others in the same group are unimportant, while actually they are very close in terms of their relationship with the response variable. The effect of this phenomenon is somewhat reduced thanks to random selection of features at each node creation, but in general the effect is not removed completely. The above is mostly cribbed from here: Selecting good features
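A quick demonstration (editorial addition; scikit-learn assumed, data synthetic) of the importance-splitting effect described above, obtained by duplicating a strong predictor:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 2000
strong = rng.normal(size=n)
noise = rng.normal(size=n)
y = (strong + 0.3 * rng.normal(size=n) > 0).astype(int)

X_single = np.column_stack([strong, noise])
X_dupli  = np.column_stack([strong, strong + 0.01 * rng.normal(size=n), noise])

for X, names in [(X_single, ["strong", "noise"]),
                 (X_dupli, ["strong", "strong_copy", "noise"])]:
    rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
    print(dict(zip(names, rf.feature_importances_.round(2))))
# expected pattern: "strong" takes nearly all the importance when alone, but shares it
# roughly evenly with "strong_copy" once the near-duplicate is added.
```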
Won't highly-correlated variables in random forest distort accuracy and feature-selection?
Old thread, but I don't agree with a blanket statement that collinearity is not an issue with random forest models. When the dataset has two (or more) correlated features, then from the point of view
Won't highly-correlated variables in random forest distort accuracy and feature-selection? Old thread, but I don't agree with a blanket statement that collinearity is not an issue with random forest models. When the dataset has two (or more) correlated features, then from the point of view of the model, any of these correlated features can be used as the predictor, with no concrete preference of one over the others. However once one of them is used, the importance of others is significantly reduced since effectively the impurity they can remove is already removed by the first feature. As a consequence, they will have a lower reported importance. This is not an issue when we want to use feature selection to reduce overfitting, since it makes sense to remove features that are mostly duplicated by other features, But when interpreting the data, it can lead to the incorrect conclusion that one of the variables is a strong predictor while the others in the same group are unimportant, while actually they are very close in terms of their relationship with the response variable. The effect of this phenomenon is somewhat reduced thanks to random selection of features at each node creation, but in general the effect is not removed completely. The above mostly cribbed from here: Selecting good features
Won't highly-correlated variables in random forest distort accuracy and feature-selection? Old thread, but I don't agree with a blanket statement that collinearity is not an issue with random forest models. When the dataset has two (or more) correlated features, then from the point of view
3,548
Won't highly-correlated variables in random forest distort accuracy and feature-selection?
One thing to add to the above explanations, based on the experiments in Genuer et al., 2010 (Robin Genuer, Jean-Michel Poggi, Christine Tuleau-Malot. Variable selection using Random Forests. Pattern Recognition Letters, Elsevier, 2010, 31 (14), pp. 2225-2236): When the number of variables was larger than the number of observations (p >> n), they added variables highly correlated with the already-known important variables, one by one, to each RF model, and noticed that the magnitude of the importance values changed (lower relative values on the y axis for the already-known important variables), BUT the order of importance of the variables remained the same, and even the order of the relative values remained pretty similar, and the important variables were still clearly distinguishable from noisy (less relevant) variables. Also check the table on page 2231: when the number of replications (adding variables highly correlated with two of the previously-known most important variables) increases, the prediction set for each RF model still shows that the most important variable is the already-known most important variable. For variable selection for interpretation purposes, they construct many (e.g., 50) RF models, introducing important variables one by one, and the model with the lowest OOB error rate is selected for interpretation and variable selection. For the variable selection procedure for prediction purposes, "in each model We perform a sequential variable introduction with testing: a variable is added only if the error gain exceeds a threshold. The idea is that the error decrease must be significantly greater than the average variation obtained by adding noisy variables."
Won't highly-correlated variables in random forest distort accuracy and feature-selection?
One thing to add to above explanations: based on the experiments in Genuer et al, 2010: Robin Genuer, Jean-Michel Poggi, Christine Tuleau-Malot. Variable selection using Random Forests. Pattern Recogn
Won't highly-correlated variables in random forest distort accuracy and feature-selection? One thing to add to above explanations: based on the experiments in Genuer et al, 2010: Robin Genuer, Jean-Michel Poggi, Christine Tuleau-Malot. Variable selection using Random Forests. Pattern Recognition Letters, Elsevier, 2010, 31 (14), pp.2225-2236. When the number of variables were more than the number of observations p>>n, they added highly-correlated variables with the already-known important variables, one by one in each RF model, and noticed that the magnitude of the importance values of the variables changes (less relative value from the y axis for the already-known important variables) BUT the order of importance of variables remained the same and even the order of the relative values remains pretty similar, and they are still significantly recognisable from noisy variables (less-relevant variables). Also check the table in page 2231 when the number of replications (adding highly-correlated variables with two of the previously-known most important variables) increases, the prediction set for each RF model still shows the most important variable is the already-known most important variable. for variable selection for interpretation purposes, they construct many (e.g., 50) RF models, they introduce important variables one by one, and the model with lowest OOB error rate is selected for interpretation and variable selection. for variable selection procedure for prediction purposes, "in each model We perform a sequential variable introduction with testing: a variable is added only if the error gain exceeds a threshold. The idea is that the error decrease must be significantly greater than the average variation obtained by adding noisy variables. "
Won't highly-correlated variables in random forest distort accuracy and feature-selection? One thing to add to above explanations: based on the experiments in Genuer et al, 2010: Robin Genuer, Jean-Michel Poggi, Christine Tuleau-Malot. Variable selection using Random Forests. Pattern Recogn
3,549
What is the definition of a "feature map" (aka "activation map") in a convolutional neural network?
A feature map, or activation map, is the output activations for a given filter (a1 in your case) and the definition is the same regardless of what layer you are on. Feature map and activation map mean exactly the same thing. It is called an activation map because it is a mapping that corresponds to the activation of different parts of the image, and also a feature map because it is also a mapping of where a certain kind of feature is found in the image. A high activation means a certain feature was found. A "rectified feature map" is just a feature map that was created using Relu. You could possibly see the term "feature map" used for the result of the dot products (z1) because this is also really a map of where certain features are in the image, but that is not common to see.
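A tiny editorial sketch (numpy/scipy assumed; the image and filter are arbitrary) of a single filter producing one feature map, using the same z1/a1 naming as the question:

```python
import numpy as np
from scipy.signal import correlate2d

image = np.random.default_rng(0).normal(size=(8, 8))     # toy single-channel "image"
kernel = np.array([[ 1,  0, -1],                          # one 3x3 filter (vertical-edge-like)
                   [ 1,  0, -1],
                   [ 1,  0, -1]], dtype=float)

z1 = correlate2d(image, kernel, mode="valid")   # the dot products as the filter slides
a1 = np.maximum(z1, 0.0)                        # rectified feature map / activation map
print(z1.shape, a1.shape)                       # (6, 6) (6, 6): one map for this filter
```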
What is the definition of a "feature map" (aka "activation map") in a convolutional neural network?
A feature map, or activation map, is the output activations for a given filter (a1 in your case) and the definition is the same regardless of what layer you are on. Feature map and activation map mea
What is the definition of a "feature map" (aka "activation map") in a convolutional neural network? A feature map, or activation map, is the output activations for a given filter (a1 in your case) and the definition is the same regardless of what layer you are on. Feature map and activation map mean exactly the same thing. It is called an activation map because it is a mapping that corresponds to the activation of different parts of the image, and also a feature map because it is also a mapping of where a certain kind of feature is found in the image. A high activation means a certain feature was found. A "rectified feature map" is just a feature map that was created using Relu. You could possibly see the term "feature map" used for the result of the dot products (z1) because this is also really a map of where certain features are in the image, but that is not common to see.
What is the definition of a "feature map" (aka "activation map") in a convolutional neural network? A feature map, or activation map, is the output activations for a given filter (a1 in your case) and the definition is the same regardless of what layer you are on. Feature map and activation map mea
3,550
What is the definition of a "feature map" (aka "activation map") in a convolutional neural network?
In CNN terminology, the 3×3 matrix is called a ‘filter‘ or ‘kernel’ or ‘feature detector’ and the matrix formed by sliding the filter over the image and computing the dot product is called the ‘Convolved Feature’ or ‘Activation Map’ or the ‘Feature Map‘. It is important to note that filters acts as feature detectors from the original input image. source : https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/
What is the definition of a "feature map" (aka "activation map") in a convolutional neural network?
In CNN terminology, the 3×3 matrix is called a ‘filter‘ or ‘kernel’ or ‘feature detector’ and the matrix formed by sliding the filter over the image and computing the dot product is called the ‘Convol
What is the definition of a "feature map" (aka "activation map") in a convolutional neural network? In CNN terminology, the 3×3 matrix is called a ‘filter‘ or ‘kernel’ or ‘feature detector’ and the matrix formed by sliding the filter over the image and computing the dot product is called the ‘Convolved Feature’ or ‘Activation Map’ or the ‘Feature Map‘. It is important to note that filters acts as feature detectors from the original input image. source : https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/
What is the definition of a "feature map" (aka "activation map") in a convolutional neural network? In CNN terminology, the 3×3 matrix is called a ‘filter‘ or ‘kernel’ or ‘feature detector’ and the matrix formed by sliding the filter over the image and computing the dot product is called the ‘Convol
3,551
What is the definition of a "feature map" (aka "activation map") in a convolutional neural network?
Before talking about what a feature map means, let us first define the term feature vector. A feature vector is a vectorial representation of objects. For example, a car can be represented by [number of wheels, doors, windows, age, etc.]. A feature map is a function that takes feature vectors in one space and transforms them into feature vectors in another. For example, given a feature vector [volume, weight, height, width] it can return [1, volume/weight, height * width] or [height * width] or even just [volume].
What is the definition of a "feature map" (aka "activation map") in a convolutional neural network?
before talk about what feature map means, let just define the term of feature vector. feature vector is vectorial representation of objects. For example, a car can be represented by [number of wheels,
What is the definition of a "feature map" (aka "activation map") in a convolutional neural network? before talk about what feature map means, let just define the term of feature vector. feature vector is vectorial representation of objects. For example, a car can be represented by [number of wheels, door. windows, age ..etc]. feature map is a function that takes feature vectors in one space and transforms them into feature vectors in another. For example given a feature vector [volume ,weight, height, width] it can return [1, volume/weight, height * width] or [height * width] or even just [volume]
What is the definition of a "feature map" (aka "activation map") in a convolutional neural network? before talk about what feature map means, let just define the term of feature vector. feature vector is vectorial representation of objects. For example, a car can be represented by [number of wheels,
3,552
What is the definition of a "feature map" (aka "activation map") in a convolutional neural network?
To give a complete answer, we need some definitions: Background Definitions: For us, an "input space" $\mathcal{X}$ is just a metric space. A model class $\mathcal{F}$ (of continuous functions) is universal from $\mathcal{X}$ to $\mathbb{R}^D$ if $\mathcal{F}$ is dense in $C(\mathcal{X},\mathbb{R}^D)$ for the uniform convergence on compacts topology. Definition of a Feature Map: A feature map implicitly depends on the learning model class used and on the "input space" $\mathcal{X}$ where the data lies. More formally, if $\mathcal{F}$ is a class of models from $\mathbb{R}^d$ to $\mathbb{R}^D$ then a feature map for $\mathcal{F}$ on an input space $\mathcal{X}$ is just a function $$ \phi:\mathcal{X}\rightarrow \mathbb{R}^d . $$ What's the point of a feature map?: The (first) point here is that $\phi$ makes the data in $\mathcal{X}$ compatible with the learning model in $\mathcal{F}$; i.e.: $\mathcal{F}_{\phi}\triangleq \{\hat{f}\circ \phi:\, f\in \mathcal{F}\}$ is now a set of models from $\mathcal{X}$ to $\mathbb{R}^D$. What is a "good" feature map? The (second) point is that a "good" choice of a feature map (even in the case where $\mathcal{X}=\mathbb{R}^d$) can strictly improve the expressibility of the model class $\mathcal{F}$. This means that: a. (Upgrading Property) if $\mathcal{F}$ is "universal" then so is $\mathcal{F}_{\phi}$ b. (UAP-Invariance Property) if $\mathcal{F}$ is "universal" then so is $\mathcal{F}_{\phi}$. Literature Review on Constructing "Good" Feature Maps: It is proven in Theorem 3.4, page 5 of this NeurIPS paper that property $b$ holds if and only if $\phi$ is continuous and injective. A "generic" class of feature maps with both properties $a$ and $b$ are constructed in Definitions 2.1 and 2.2 of this recent JMLR paper.
What is the definition of a "feature map" (aka "activation map") in a convolutional neural network?
To give a complete answer, we need some definitions: Background Definitions: For us, an "input space" $\mathcal{X}$ is just a metric space. A model class $\mathcal{F}$ (of continuous functions) is un
What is the definition of a "feature map" (aka "activation map") in a convolutional neural network? To give a complete answer, we need some definitions: Background Definitions: For us, an "input space" $\mathcal{X}$ is just a metric space. A model class $\mathcal{F}$ (of continuous functions) is universal from $\mathcal{X}$ to $\mathcal{R}^D$ if $\mathcal{F}$ is dense in $C(\mathcal{X},\mathbb{R}^D)$ for the uniform convergence on compacts topology. Definition of a Feature Map: A feature map implicitly depends on the learning model class used and on the "input space" $\mathcal{X}$ where the data lies. More formally, if $\mathcal{F}$ is a class of models from $\mathbb{R}^d$ to $\mathbb{R}^D$ then a feature map for $\mathcal{F}$ on an input space $\mathcal{X}$ is a (just) function $$ \phi:\mathcal{X}\rightarrow \mathbb{R}^d . $$ What's the point of a feature map?: The (first) point here is that $\phi$ makes the data in $\mathcal{X}$ compatable with the learning model in $F$; i.e.: $\mathcal{F}_{\phi}\triangleq \{\hat{f}\circ \phi:\, f\in \mathcal{F}\}$ is not a set of models from $\mathcal{X}$ to $\mathbb{R}^D$. What is a "good" feature map? The (second) point is that a "good" choice of a feature map (even in the case where $\mathcal{X}=\mathbb{R}^d$) can strictly improve the expressibility of the model class $\mathcal{F}$. This means that: a. (Upgrading Property) $\mathcal{F}$ is "universal" then so is $\mathcal{F}_{\phi}$ b. (UAP-Invariance Property) if $\mathcal{F}$ is "universal" then so is $\mathcal{F}_{\phi}$. Literature Review on Constructing "Good Feature Maps: It is proven in Theorem 3.4; page 5 of this NeurIPS paper that property $b$ holds if and only if $\phi$ is continuous and injective. A "generic" class of feature maps with both properties $a$ an $b$ are constructed in Definition 2.1 and 2.2 of this recent JMLR paper.
What is the definition of a "feature map" (aka "activation map") in a convolutional neural network? To give a complete answer, we need some definitions: Background Definitions: For us, an "input space" $\mathcal{X}$ is just a metric space. A model class $\mathcal{F}$ (of continuous functions) is un
3,553
40,000 neuroscience papers might be wrong
On the 40000 figure The news are really sensationalist, but the paper is really well founded. Discussions raged for days in my laboratory, all in all a really necessary critique that makes researchers introspect their work. I recommend the reading of the following commentary by Thomas Nichols, one of the authors of the "Cluster Failure: Why fMRI inferences for spatial extent have inflated false-positive rates" paper (sorry for the long quote). However, there is one number I regret: 40,000. In trying to refer to the importance of the fMRI discipline, we used an estimate of the entire fMRI literature as number of studies impinged by our findings. In our defense, we found problems with cluster size inference in general (severe for P=0.01 CDT, biased for P=0.001), the dominant inference method, suggesting the majority of the literature was affected. The number in the impact statement, however, has been picked up by popular press and fed a small twitterstorm. Hence, I feel it’s my duty to make at least a rough estimate of “How many articles does our work affect?”. I’m not a bibliometrician, and this really a rough-and-ready exercise, but it hopefully gives a sense of the order of magnitude of the problem. The analysis code (in Matlab) is laid out below, but here is the skinny: Based on some reasonable probabilistic computations, but perhaps fragile samples of the literature, I estimate about 15,000 papers use cluster size inference with correction for multiple testing; of these, around 3,500 use a CDT of P=0.01. 3,500 is about 9% of the entire literature, or perhaps more usefully, 11% of papers containing original data. (Of course some of these 15,000 or 3,500 might use nonparametric inference, but it’s unfortunately rare for fMRI—in contrast, it’s the default inference tool for structural VBM/DTI analyses in FSL). I frankly thought this number would be higher, but didn’t realise the large proportion of studies that never used any sort of multiple testing correction. (Can’t have inflated corrected significances if you don’t correct!). These calculations suggest 13,000 papers used no multiple testing correction. Of course some of these may be using regions of interest or sub-volume analyses, but it’s a scant few (i.e. clinical trial style outcome) that have absolutely no multiplicity at all. Our paper isn’t directly about this group, but for publications that used the folk multiple testing correction, P<0.001 & k>10, our paper shows this approach has familywise error rates well in excess of 50%. So, are we saying 3,500 papers are “wrong”? It depends. Our results suggest CDT P=0.01 results have inflated P-values, but each study must be examined… if the effects are really strong, it likely doesn’t matter if the P-values are biased, and the scientific inference will remain unchanged. But if the effects are really weak, then the results might indeed be consistent with noise. And, what about those 13,000 papers with no correction, especially common in the earlier literature? No, they shouldn’t be discarded out of hand either, but a particularly jaded eye is needed for those works, especially when comparing them to new references with improved methodological standards. He also includes this table at the end: AFNI BV FSL SPM OTHERS ____ __ ___ ___ ______ >.01 9 5 9 8 4 .01 9 4 44 20 3 .005 24 6 1 48 3 .001 13 20 11 206 5 <.001 2 5 3 16 2 Basically, SPM (Statistical Parametric Mapping, a toolbox for Matlab) is the most widely used tool for fMRI neuroscience studies. 
If you check the paper you'll see that using a CDT of P = 0.001 (the standard) for clusters in SPM gives nearly the expected family-wise error rate. The authors even filed an erratum due to the wording of the paper:

Given the widespread misinterpretation of our paper, Eklund et al., Cluster Failure: Why fMRI inferences for spatial extent have inflated false-positive rates, we filed an errata with the PNAS Editorial office: Errata for Eklund et al., Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates. Eklund, Anders; Nichols, Thomas E; Knutsson, Hans. Two sentences were poorly worded and could easily be misunderstood as overstating our results. The last sentence of the Significance statement should read: "These results question the validity of a number of fMRI studies and may have a large impact on the interpretation of weakly significant neuroimaging results." The first sentence after the heading "The future of fMRI" should have read: "Due to lamentable archiving and data-sharing practices it is unlikely that problematic analyses can be redone." These replace the two sentences that mistakenly implied that our work affected all 40,000 publications (see Bibliometrics of Cluster Inference for a guesstimate of how much of the literature is potentially affected). After initially declining the errata, on the grounds that it was correcting interpretation and not fact, PNAS have agreed to publish it as we submitted it above.

On the so-called bug

Some news outlets also mentioned a bug as the cause of the invalidity of the studies. Indeed, one of AFNI's tools was undercorrecting inferences, and this was fixed after the preprint was posted on arXiv.

Statistical inference used in functional neuroimaging

Functional neuroimaging includes many techniques that aim to measure neuronal activity in the brain (e.g. fMRI, EEG, MEG, NIRS, PET and SPECT), each based on a different contrast mechanism. fMRI is based on the blood-oxygen-level-dependent (BOLD) contrast. In task-based fMRI, given a stimulus, the neurons responsible for processing that stimulation start consuming energy, which triggers the haemodynamic response and changes the magnetic resonance signal ($\approx 5\%$) in the vicinity of the recruited micro-vascularization. Using a generalized linear model (GLM) you identify which voxel signal time-series are correlated with the design of the paradigm of your experiment (usually a boolean time series convolved with a canonical haemodynamic response function, but variations exist). So this GLM tells you how much each voxel time-series resembles the task. Now, say you have two groups of individuals, usually patients and controls. Comparing the GLM scores between the groups can be used to show how the condition of the groups modulates their brain "activation" pattern. Voxel-wise comparison between the groups is doable, but due to the point-spread function inherent to the equipment, plus a smoothing preprocessing step, it isn't reasonable to expect individual voxels to carry all the information; the difference between groups should, in fact, be spread over neighboring voxels. So a cluster-wise comparison is performed, i.e. only differences between groups that form clusters are considered. This cluster-extent thresholding is the most popular multiple-comparison correction technique in fMRI studies. The problem lies here: SPM and FSL depend on Gaussian random-field theory (RFT) for FWE-corrected voxelwise and clusterwise inference.
However, RFT clusterwise inference depends on two additional assumptions:

The first assumption is that the spatial smoothness of the fMRI signal is constant over the brain, and the second assumption is that the spatial autocorrelation function has a specific shape (a squared exponential) (30).

In SPM at least, you have to set a nominal FWE rate and also a cluster-defining threshold (CDT). Basically, SPM finds voxels highly correlated to the task and, after thresholding with the CDT, aggregates neighboring ones into clusters. These cluster sizes are compared to the expected cluster extent from Random Field Theory (RFT) given the FWER that was set [1]. Random field theory requires the activity map to be smooth, i.e. to be a good lattice approximation to a random field. This is related to the amount of smoothing applied to the volumes. The smoothing also affects the assumption that the residuals are normally distributed, since smoothing, by the central limit theorem, will make the data more Gaussian.

The authors had shown in [1] that the expected cluster sizes from RFT are really small when compared with the cluster-extent thresholds obtained from random permutation testing (RPT). In their most recent paper, resting-state data (another modality of fMRI, where participants are instructed not to think about anything in particular) was used as if people had performed a task during image acquisition, and the group comparison was performed voxel- and cluster-wise. The observed false-positive rate (i.e. how often you observe differences between groups in the signal response to a virtual task) should not noticeably exceed the nominal FWE rate set at $\alpha = 0.05$. Redoing this analysis millions of times on randomly sampled groups with different paradigms showed, though, that most observed FWE rates were higher than acceptable.

@amoeba raised these two highly pertinent questions in the comments:

(1) The Eklund et al. PNAS paper talks about "nominal 5% level" of all the tests (see e.g. horizontal black line on Fig 1). However, CDT in the same figure is varying and can be e.g. 0.01 and 0.001. How does CDT threshold relate to the nominal type I error rate? I am confused by that. (2) Have you seen Karl Friston's reply http://arxiv.org/abs/1606.08199 ? I read it, but I am not quite sure what they are saying: do I see correctly that they agree with Eklund et al. but say that this is a "well known" issue?

(1) Good question. I actually reviewed my references; let's see if I can make it clearer now. Cluster-wise inference is based on the extent of the clusters that form after a primary threshold (the CDT, which is arbitrary) is applied. In the secondary analysis, a threshold on the number of voxels per cluster is applied. This threshold is based on the expected distribution of null cluster extents, which can be estimated from theory (e.g. RFT), and it is this step that sets the nominal FWER. A good reference is [2].

(2) Thanks for this reference, I hadn't seen it before. Flandin & Friston argue that Eklund et al. corroborated RFT inference because they basically showed that, when its assumptions are respected (regarding CDT and smoothing), the results are unbiased. Under this light, the new results show that common practices in the literature tend to bias the inference because they break down the assumptions of RFT.

On the multiple comparisons

It's also well known that many studies in neuroscience don't correct for multiple comparisons at all, with estimates ranging from 10% to 40% of the literature. But those are not covered by this claim; everyone knows such papers have fragile validity and possibly huge false-positive rates.

On the FWER in excess of 70%

The authors also reported a procedure that produces an FWER in excess of 70%. This "folk" procedure consists of applying the CDT to keep only highly significant clusters and then applying another arbitrarily chosen cluster-extent threshold (in number of voxels). This, sometimes called "set-inference", has a weak statistical basis and possibly generates the least trustworthy results.

Previous reports

The same authors had already reported problems with the validity of SPM [1] for individual analyses, and there are other cited works in this area. Curiously, several reports on group- and individual-level analyses based on simulated data concluded that the RFT thresholds were, in fact, conservative. With recent advances in processing power, though, RPT can be performed much more easily on real data, revealing large discrepancies with RFT.

UPDATE: October 18th, 2017

A commentary on "Cluster Failure" surfaced last June [3]. There, Mueller et al. argue that the results presented in Eklund et al. might be due to a specific preprocessing step used in their study: they resampled the functional images to a higher resolution before smoothing (while probably not done by every researcher, this is a routine procedure in most fMRI analysis software); they also note that Flandin & Friston didn't. I actually got to see Eklund talk that same month at the Organization for Human Brain Mapping (OHBM) Annual Meeting in Vancouver, but I don't remember any comments on this issue, yet it seems crucial to the question.

[1] Eklund, A., Andersson, M., Josephson, C., Johannesson, M., & Knutsson, H. (2012). Does parametric fMRI analysis with SPM yield valid results?—An empirical study of 1484 rest datasets. NeuroImage, 61(3), 565-578.
[2] Woo, C. W., Krishnan, A., & Wager, T. D. (2014). Cluster-extent based thresholding in fMRI analyses: pitfalls and recommendations. NeuroImage, 91, 412-419.
[3] Mueller, K., Lepsien, J., Möller, H. E., & Lohmann, G. (2017). Commentary: Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates. Frontiers in Human Neuroscience, 11.
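To make the empirical-FWER idea above concrete, here is a toy R sketch of my own (not the authors' pipeline, which used real resting-state data, 3-D volumes and RFT- or permutation-derived thresholds). It repeatedly compares two null groups of smoothed 1-D "voxel" data, applies an arbitrary CDT and an arbitrary cluster-extent threshold, and estimates how often at least one "significant" cluster appears; every specific number in it is a placeholder.

# Toy empirical-FWER check for cluster-extent inference on null data
# (1-D voxels and all thresholds are illustrative choices only).
set.seed(1)
n_vox <- 200    # a 1-D "brain" of 200 voxels
n_sub <- 20     # subjects per group
cdt   <- 0.01   # cluster-defining threshold on the voxelwise p-values
k_thr <- 5      # hypothetical cluster-extent threshold, in voxels
n_sim <- 200    # null group comparisons (increase for a more stable estimate)

smooth1d <- function(x, w = 5) {
  # moving-average smoothing to induce spatial correlation
  as.numeric(stats::filter(x, rep(1 / w, w), sides = 2, circular = TRUE))
}

max_cluster <- function(p, alpha) {
  # size of the largest contiguous run of supra-threshold voxels
  supra <- p < alpha
  if (!any(supra)) return(0)
  r <- rle(supra)
  max(r$lengths[r$values])
}

fwe_hits <- replicate(n_sim, {
  g1 <- replicate(n_sub, smooth1d(rnorm(n_vox)))   # null data, group 1
  g2 <- replicate(n_sub, smooth1d(rnorm(n_vox)))   # null data, group 2
  p  <- sapply(seq_len(n_vox), function(v) t.test(g1[v, ], g2[v, ])$p.value)
  max_cluster(p, cdt) >= k_thr                     # any cluster declared significant?
})
mean(fwe_hits)   # empirical FWER, to be compared with the nominal 5%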
3,554
How much to pay? A practical problem
I would be interested in feedback on the paragraph beginning "Upon reflection...", since that particular part of the model has been keeping me up at night.

The Bayesian model

The revised question makes me think that we can develop the model explicitly, without using simulation. Simulation introduced additional variability due to the inherent randomness of sampling. Sophologist's answer is great, though.

Assumptions: the smallest number of labels per envelope is 90, and the largest is 100. Therefore, the smallest possible number of labels is 9000+7+8+6+10+5+7=9043 (as given by the OP's data): 9000 due to our lower bound, plus the additional labels coming from the observed data.

Denote by $Y_i$ the number of labels in envelope $i$, and by $X_i$ the number of labels over 90, i.e. $X=Y-90$, so $X\in\{0,1,2,...,10\}$. The binomial distribution models the total number of successes (here a success is the presence of a label in an envelope) in $n$ trials when the trials are independent with constant success probability $p$, so $X$ takes values $0, 1, 2, 3, ..., n.$ We take $n=10$, which gives 11 different possible outcomes. I assume that, because the sheet sizes are irregular, some sheets only have room for $X$ additional labels in excess of 90, and that this "additional space" for each label in excess of 90 occurs independently with probability $p$. So $X_i\sim\text{Binomial}(10,p).$

(Upon reflection, the independence assumption/binomial model is probably a strange assumption to make, since it effectively fixes the composition of the printer's sheets to be unimodal: the data can only change the location of the mode, but the model will never admit a multimodal distribution. For example, under an alternative model, it's possible that the printer only has sheets of sizes 97, 98, 96, 100 and 95; this satisfies all the stated constraints, and the data don't exclude this possibility. It might be more appropriate to regard each sheet size as its own category and then fit a Dirichlet-multinomial model to the data. I do not do this here because the data are so scarce that the posterior probabilities on each of the 11 categories would be very strongly influenced by the prior. On the other hand, by fitting the simpler model we are likewise constraining the kinds of inferences that we can make.)

Each envelope $i$ is an iid realization of $X$. The sum of binomial trials with the same success probability $p$ is also binomial, so $\sum_i X_i\sim\text{Binomial}(60,p).$ (This is a theorem -- to verify, use the MGF uniqueness theorem.)

I prefer to think about these problems in a Bayesian mode, because you can make direct probability statements about posterior quantities of interest. A typical prior for binomial trials with unknown $p$ is the beta distribution, which is very flexible (it varies between 0 and 1, can be symmetric or asymmetric in either direction, uniform or one of two Dirac masses, have an antimode or a mode... It's an amazing tool!). In the absence of data, it seems reasonable to assume uniform probability over $p$. That is, one might expect to see a sheet accommodate 90 labels as often as 91, as often as 92, ..., as often as 100. So our prior is $p\sim\text{Beta}(1,1).$ If you don't think this beta prior is reasonable, the uniform prior can be replaced with another beta prior, and the math won't even increase in difficulty! The posterior distribution on $p$ is $p\sim\text{Beta}(1+43,1+17)$ by the conjugacy properties of this model (the observed envelopes contribute 43 successes and 17 failures out of 60 trials).
This is only an intermediate step, though, because we don't care about $p$ as much as we care about the total number of labels. Fortunately, the properties of conjugacy also mean that the posterior predictive distribution of sheets is beta-binomial, with the parameters of the beta posterior. There are $940$ remaining "trials", i.e. labels whose presence in the delivery is uncertain, so our posterior model on the remaining labels $Z$ is $Z\sim\text{BB}(44,18,940).$

Since we have a distribution on $Z$ and a value model per label (the vendor agreed to one dollar per label), we can also infer a probability distribution over the value of the lot. Denote by $D$ the total dollar value of the lot. We know that $D=9043+Z$, because $Z$ only models the labels that we are uncertain about. So the distribution over value is given by $D$.

What's the appropriate way to consider pricing the lot? We can find that the 0.025 and 0.975 quantiles of $Z$ (a 95% interval) are 553 and 769, respectively. So the 95% interval on $D$ is $[9596, 9812]$. Your payment falls in that interval. (The distribution of $D$ is not exactly symmetric, so this is not the central 95% interval; however, the asymmetry is negligible. Anyway, as I elaborate below, I'm not sure that a central 95% interval is even the correct one to consider!) I'm not aware of a quantile function for the beta-binomial distribution in R, so I wrote my own using R's root-finding:

# Quantile function for the beta-binomial, built on pbetabinom.ab from the VGAM
# package; uniroot finds where the CDF crosses the requested probability p.
library(VGAM)
qbetabinom.ab <- function(p, size, shape1, shape2){
  tmpFn <- function(x) pbetabinom.ab(x, size = size, shape1 = shape1, shape2 = shape2) - p
  q <- uniroot(f = tmpFn, interval = c(0, size))
  return(q$root)
}

Another way to think about it is just to think about the expectation. If you repeated this process many times, what's the average cost you would pay? We can compute the expectation of $D$ directly: $\mathbb{E}(D)=\mathbb{E}(9043+Z)=\mathbb{E}(Z)+9043.$ The beta-binomial model has expectation $\mathbb{E}(Z)=\frac{n\alpha}{\alpha+\beta}=667.0968$, so $\mathbb{E}(D)=9710.097,$ almost exactly what you paid. Your expected loss on the deal was only 6 dollars! All told, well done!

But I'm not sure either of these figures is the most relevant. After all, this vendor is trying to cheat you! If I were doing this deal, I'd stop worrying about breaking even or the fair-value price of the lot and start working out the probability that I'm overpaying! The vendor is clearly trying to defraud me, so I'm perfectly within my rights to minimize my losses and not concern myself with the break-even point. In this setting, the highest price I would offer is 9615 dollars, because this is the 5% quantile of the posterior on $D$, i.e. there's 95% probability that I'm underpaying. The vendor can't prove to me that all the labels are there, so I'm going to hedge my bets. (Of course, the fact that the vendor accepted the deal tells us that he has nonnegative real loss... I haven't figured out a way to use that information to help us determine more precisely how much you were cheated, except to note that because he accepted the offer, you were at best breaking even.)

Comparison to the bootstrap

We only have 6 observations to work with. The justification for the bootstrap is asymptotic, so let's consider what the results look like on our small sample. This plot shows the density of the bootstrap simulation. The "bumpy" pattern is an artifact of the small sample size: including or excluding any one point has a dramatic effect on the mean, creating this "bunchy" appearance.
The Bayesian approach smooths out these clumps and, in my opinion, is a more believable portrait of what's going on. Vertical lines are the 5% quantiles.
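For completeness, here is a short R sketch of the computations above (my own transcription, not part of the original answer; it assumes the VGAM package for the beta-binomial and the quantities defined in the text):

library(VGAM)   # provides pbetabinom.ab

observed <- c(97, 98, 96, 100, 95, 97)
x <- sum(observed - 90)       # 43 successes out of 6 * 10 = 60 trials
a <- 1 + x                    # posterior is Beta(44, 18)
b <- 1 + (60 - x)
n_rem <- (100 - 6) * 10       # 940 labels whose presence is uncertain

EZ <- n_rem * a / (a + b)     # expected extra labels, about 667.1
ED <- 9043 + EZ               # expected value of the lot, about 9710.1

# 5% quantile of D via a grid search over the beta-binomial CDF
# (an alternative to the uniroot-based qbetabinom.ab above)
z <- 0:n_rem
q05 <- min(z[pbetabinom.ab(z, size = n_rem, shape1 = a, shape2 = b) >= 0.05])
c(expected_value = ED, price_at_5pct = 9043 + q05)   # the latter should sit near the 9615 quoted above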
3,555
How much to pay? A practical problem
EDIT: Tragedy! My initial assumptions were incorrect! (Or in doubt, at least -- do you trust what the seller is telling you? Still, hat tip to Morten, as well.) Which I guess is another good introduction to statistics, but The Partial Sheet Approach is now added below (since people seemed to like the Whole Sheet one, and maybe somebody will still find it useful).

First of all, great problem. But I'd like to make it a little more complicated. Because of that, before I do, let me make it a little simpler, and say -- the method you're using right now is perfectly reasonable. It's cheap, it's easy, and it makes sense. So if you have to stick with it, you shouldn't feel bad. Just make sure you choose your bundles randomly. AND, if you can just weigh everything reliably (hat tip to whuber and user777), then you should do that.

The reason I want to make it a little more complicated, though, is that you already have -- you just haven't told us about the whole complication, which is that counting takes time, and time is money too. But how much? Maybe it actually is cheaper to count everything! So what you're really doing is balancing the time it takes to count with the amount of money you're saving. (IF, of course, you only play this game once. NEXT time you have this happen with the seller, they may have caught on and tried a new trick. In game theory, this is the difference between Single Shot Games and Iterated Games. But for now, let's pretend the seller will always do the same thing.)

One more thing before I get to the estimation though. (And, sorry to have written so much and still not gotten to the answer, but then, that's a pretty good answer to What would a statistician do? They would spend a huge amount of time making sure they understood every tiny part of the problem before they were comfortable saying anything about it.) And that thing is an insight based on the following: (EDIT: IF THEY'RE ACTUALLY CHEATING ...) Your seller doesn't save money by removing labels -- they save money by not printing sheets. They can't sell your labels to somebody else (I assume). And maybe, I don't know and I don't know if you do, they can't print half a sheet of your stuff and half a sheet of somebody else's. In other words, before you've even started counting, you can assume that the total number of labels is either 9000, 9100, ..., 9900, or 10,000. That's how I'll approach it, for now.

The Whole Sheet Method

When a problem is a little tricky like this one (discrete, and bounded), a lot of statisticians will simulate what might happen. Here's what I simulated:

# The number of sheets they used
sheets <- sample(90:100, 1)
# The base counts for the stacks
stacks <- rep(90, 100)
# The remaining labels are distributed randomly over the stacks.
# seq_len() avoids the 1:0 trap when sheets happens to be 90, and indexing
# into `open` avoids sample()'s surprising behaviour on a length-one vector.
for (i in seq_len((sheets - 90) * 100)) {
  open <- which(stacks != 100)
  bucket <- open[sample.int(length(open), 1)]
  stacks[bucket] <- stacks[bucket] + 1
}

This gives you, assuming they're using whole sheets and your assumptions are correct, a possible distribution of your labels (in the programming language R). Then I did this:

# Resampling-based ("bootstrap") intervals for the mean stack count,
# for sample sizes of 4 to 20 stacks
alpha <- 0.05 / 2
for (i in 4:20) {
  s <- replicate(1000, mean(sample(stacks, i)))
  print(round(quantile(s, probs = c(alpha, 1 - alpha)), 3))
}

This finds, using a "bootstrap" method, confidence intervals using 4, 5, ..., 20 samples. In other words: on average, if you were to use N samples, how big would your confidence interval be? I use this to find an interval that's small enough to decide on the number of sheets, and that's my answer.
By "small enough," I mean my 95% confidence interval has only one whole number in it -- e.g. if my confidence interval was from [93.1, 94.7], then I would choose 94 as the correct number of sheets, since we know it's a whole number. ANOTHER difficulty though -- your confidence depends on the truth. If you have 90 sheets, and every pile has 90 labels, then you converge really fast. Same with 100 sheets. So I looked at 95 sheets, where there is the greatest uncertainty, and found that to have 95% certainty, you need about 15 samples, on average. So let's say overall, you want to take 15 samples, because you never know what's really there. (A sketch of this check is given at the end of the answer.)

AFTER you know how many samples you need, you know that your expected savings are $100 N_{\text{missing}} - 15c$, where $c$ is the cost of counting one stack. If you assume that there's an equal chance of every number between 0 and 10 sheets being missing, then your expected savings are $500 - 15c$. But, and here's the point of making the equation -- you could also optimize it, to trade off your confidence against the number of samples you need. If you're okay with the confidence that 5 samples gives you, then you can also calculate how much you'll make there. (And you can play with this code, to figure that out.) But you should also charge the guy for making you do all this work!

(EDIT: ADDED!) The Partial Sheet Approach

Okay, so let's assume what the manufacturer is saying is true, and it's not intentional -- a few labels are just lost in every sheet. You still want to know, About how many labels, overall? This problem is different because you no longer have a nice clean decision that you can make -- that was an advantage of the Whole Sheet assumption. Before, there were only 11 possible answers -- now, there are about 1,100, and getting a 95% confidence interval on exactly how many labels there are is probably going to take a lot more samples than you want. So, let's see if we can think about this differently. Because this is really about you making a decision, we're still missing a few parameters -- how much money you are willing to lose in a single deal, and how much it costs to count one stack. But let me set up what you could do with those numbers. Simulating again (although props to user777 if you can do it without!), it's informative to look at the size of the intervals when using different numbers of samples. That can be done like this:

# Each stack gets a uniformly random number of labels between 90 and 100
# (sample() gives equal weight to the endpoints, matching the stated assumption)
stacks <- sample(90:100, 100, replace = TRUE)
q <- array(dim = c(17, 2))
for (i in 4:20) {
  s <- replicate(1000, mean(sample(stacks, i)))
  q[i - 3, ] <- quantile(s, probs = c(.025, .975))
}
plot(q[, 1], ylim = c(90, 100))
points(q[, 2])

This assumes (this time) that each stack has a uniformly random number of labels between 90 and 100, and gives you a plot of the interval endpoints against the number of stacks sampled. Of course, if things were really like they've been simulated, the true mean would be around 95 labels per stack, which is lower than what the truth appears to be -- this is in fact one argument for the Bayesian approach. But it gives you a useful sense of how much more certain you're becoming about your answer as you continue to sample -- and you can now explicitly trade off the cost of sampling with whatever deal you come to about pricing. Which I know by now, we're all really curious to hear about.
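As a rough check of the "about 15 samples" claim above, here is a sketch of my own (not the answer's original code; the replication counts are arbitrary and kept small for speed). For the hardest case of 95 sheets, it estimates how often the resampled 95% interval pins down a single whole number of sheets, as a function of how many stacks you count:

set.seed(42)
sheets <- 95                                 # the hardest case discussed above
stacks <- rep(90, 100)
for (i in seq_len((sheets - 90) * 100)) {    # distribute the extra labels at random
  open <- which(stacks != 100)
  pick <- open[sample.int(length(open), 1)]
  stacks[pick] <- stacks[pick] + 1
}

# For m = 4..20 counted stacks: how often does the 95% resampling interval
# for the mean contain exactly one whole number (i.e. give a clear decision)?
decisive <- sapply(4:20, function(m) {
  mean(replicate(200, {
    s  <- replicate(500, mean(sample(stacks, m)))
    ci <- quantile(s, probs = c(0.025, 0.975))
    floor(ci[2]) == ceiling(ci[1])
  }))
})
names(decisive) <- 4:20
decisive   # look for the smallest m with a comfortably high proportion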
3,556
How much to pay? A practical problem
This is a fairly limited sample. (Code snippets are in R.)

> sample <- c(97, 98, 96, 100, 95, 97)

For an initial guess at the expected number in the total population and a 95% confidence value for the price, we can start with the mean and the 5% quantile:

> 100 * mean(sample)
[1] 9716.667
> 100 * quantile(sample, 0.05)
  5%
9525

To go further, we are going to have to create a theoretical model and make additional assumptions. There are several sources of uncertainty at play: (1) uncertainty about the functional form of the packet-filling model, (2) uncertainty in estimating the parameters of the model, and (3) sampling error.

For the model, let's assume that there is a process for dropping each label independently into a packet, and that it is prone to failure at some unknown rate $p$. We'll not assume the manufacturer is engaging in fraud, just that some portion end up mangled or otherwise on the floor. The success of each drop is then a Bernoulli random variable. For each packet, the process is repeated $n=100$ times, meaning the number of labels in each packet will follow a binomial distribution. We can estimate $p$ from the sample as follows:

> n <- 100
> (p <- 1 - mean(sample)/100)
[1] 0.02833333

Since $n \ge 100$ and $np \le 10$, we can approximate the binomial distribution well with the simpler Poisson distribution:

> (lambda <- n*p)
[1] 2.833333

We can find some small assurance in the fact that the Poisson distribution has a variance equal to its mean $\lambda$ (the value stored in lambda), and that the sample variance is fairly close to the sample mean:

> var(sample)
[1] 2.966667

If we assume that each packet is filled independently, then the number of failures for the entire run of 100 packets is also approximately Poisson, with parameter $\lambda_r = 100\lambda$ (100*lambda in the code). The mean and 95% quantile are then

> 100*100 - 100*lambda
[1] 9716.667
> 100*100 - qpois(0.95, 100*lambda)
[1] 9689

The problem is that the failure rate, $p$, is unknown, and we have not accounted for its uncertainty. Let's return to the binomial distribution and, for the sake of flexibility and simplicity, assume that the per-drop success probability (i.e. $1-p$) is a Beta random variable with unknown shape parameters $\alpha$ and $\beta$. This makes the process a Beta-Bernoulli process. We need some prior assumption for $\alpha$ and $\beta$, so we'll give the manufacturer the benefit of the doubt, but not much confidence, and take $\alpha = 1$ and $\beta = 0$. In 600 observations, you observed 583 successes and 17 failures, so we update the Beta-Bernoulli process to have parameters $\alpha^* = 1+583$ and $\beta^* = 0+17$. So, for a packet of 100, we would expect a mean of 97.17138 and a standard deviation of 1.789028 (see e.g. Wikipedia's entry for the beta-binomial formulas). Using the distribution function, we can see that the probability of having fewer than 90 labels in a packet is sufficiently low (0.05%) that we can ignore the lower-bound assumption; doing so is conservative for setting our price. The beauty of this model is that it is easy to update $\alpha^*$ and $\beta^*$ for more observations (add new successes to $\alpha^*$ and new failures to $\beta^*$; the posterior predictive model remains beta-binomial) to reduce uncertainty, and your initial assumptions are explicit.

Now, assuming each packet is filled independently, we can view the entire box of packets as 10000 independent events rather than 100 events of 100 subevents. The mean is therefore 9717.138 with standard deviation 69.57153. Using the distribution function, you can calculate the 95% confidence number to be around 9593. I've used the R package VGAM for its *betabinom.ab functions in doing so.

So, the uncertainty in the estimated parameter reduces the 95% confidence price by nearly 100, and we end up fairly close to our initial simple approximation. Whatever the approach or model, additional data can be used to validate the model, that is, to see whether the additional data are reasonable under the theoretical model or whether adjustments or a new model are warranted. The modeling process is similar to the scientific method.
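To illustrate the updating step just mentioned, here is a minimal R sketch of my own (the helper name is made up for the example; the formulas are the standard beta-binomial mean and variance). It folds newly counted packets into the Beta posterior and recomputes the box-level summaries:

# Fold newly counted packets into the Beta(a, b) posterior on the per-label
# success probability, then summarise the beta-binomial for the whole box.
update_posterior <- function(a, b, packet_counts, packet_size = 100) {
  successes <- sum(packet_counts)
  failures  <- length(packet_counts) * packet_size - successes
  c(a = a + successes, b = b + failures)
}

post <- update_posterior(a = 1, b = 0, packet_counts = c(97, 98, 96, 100, 95, 97))
post                # a = 584, b = 17, matching the alpha* and beta* above

N <- 10000          # labels in the whole box
a <- post[["a"]]; b <- post[["b"]]
m <- N * a / (a + b)                                            # about 9717.1
s <- sqrt(N * a * b * (a + b + N) / ((a + b)^2 * (a + b + 1)))  # about 69.6
c(mean = m, sd = s)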
3,557
How much to pay? A practical problem
In a pinch, my first inclination would be to calculate a 95% confidence interval for your sample mean over a truncated normal distribution falling between the lower and upper bounds of 90 and 100 labels. The R package truncnorm provides the density, distribution, quantile and moment functions of a truncated normal, so you can construct such an interval from a specified mean, standard deviation, lower bound, and upper bound. Since you're taking a sample of n=6 from a relatively small population (N=100 packets), you may want to multiply your sample standard deviation by a finite population factor = [(N-n)/(N-1)]^0.5 ≈ 0.97.
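A minimal sketch of that calculation with the truncnorm package (reading the interval off the quantile function and using the standard error of the mean is my reading of the approach; the 90 and 100 bounds come from the problem statement):

library(truncnorm)
counts <- c(97, 98, 96, 100, 95, 97)
n <- length(counts); N <- 100
fpc <- sqrt((N - n) / (N - 1))        # finite population correction, about 0.97
se <- sd(counts) * fpc / sqrt(n)      # standard error of the mean count per packet
# central 95% interval for the mean labels per packet, truncated to [90, 100]
qtruncnorm(c(0.025, 0.975), a = 90, b = 100, mean = mean(counts), sd = se)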
3,558
How much to pay? A practical problem
A quick and simple approach is to consider all possible resamples of size 6. There are only 15,625 permutations. Looking at these and taking the average for each case, and then sorting the averages and extracting the 5% quantile, we get a value of 96. So the estimated amount you should be willing to pay is about 9600. This is in good agreement with a couple of the more sophisticated approaches. An improvement here would be to simulate a large number of samples of size 6 and use the same procedure to find the 5th percentile of the sample means. Using slightly more than a million resamples, I found the 5th percentile to be 96.1667, so to the nearest dollar the payment would be 9617 dollars, which is only a 2 dollar difference from user777's result of 9615.
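The Monte Carlo step can be sketched in a few lines of R (one million resamples of size 6 drawn with replacement from the observed counts; the seed is arbitrary):

counts <- c(97, 98, 96, 100, 95, 97)
set.seed(123)                          # arbitrary seed, for reproducibility only
means <- replicate(1e6, mean(sample(counts, size = 6, replace = TRUE)))
quantile(means, 0.05)                  # 5th percentile of the resampled mean, roughly 96.17
100 * quantile(means, 0.05)            # implied payment for the box, roughly 9617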
3,559
How much to pay? A practical problem
It seems like you have already concluded that the error was intentional, but a statistician would not jump to such a conclusion (even though the evidence seems to support it). One could set this up as a hypothesis test:

H0: The dealer is honest but quite sloppy.
H1: The dealer is fraudulent, and the shortfall is intentional.

Let's assume H0. Then each deviation is a random event with mean 0 and an equal chance of being positive or negative. Let's further assume that the deviations are normally distributed. The standard deviation estimated from the deviations in the 6 data points is sd = 1.722. If the statistician did not remember his theory very well, but had R nearby (not an unlikely scenario), then he or she could write the following code to check the likelihood of receiving no positive deviations (no packages of more than 100) if H0 is true.

numpackages <- c(97, 98, 96, 100, 95, 97)
error <- 100 - numpackages
errorStdev <- sd(error)
numSimulations <- 1000000
max100orLes <- 0
for (p in 1:numSimulations) {
  simulatedError <- rnorm(6, mean = 0, sd = errorStdev)
  packageDeviations <- round(simulatedError)
  maxValue <- max(packageDeviations)
  if (maxValue <= 0) {
    max100orLes <- max100orLes + 1
  }
}
probH0 <- 100 * max100orLes / numSimulations
cat("Probability of no packet above 100 if H0 is true:", probH0, "%")

The result of the simulation is: Probability of no packet above 100 if H0 is true: 5.3471 %. In other words, if the dealer were honest, data this one-sided would be seen only about 5.35% of the time, so it is quite plausible that you have been the victim of fraud. Since you say that this is not a homework question, but a real situation for your company, this ceases to be an exercise in calculating the correct expected number of labels and becomes a tricky case of how to handle a dishonest supplier. What you do from here really can't be answered by statistics alone; it very much depends on your leverage and relationship with the dealer. Best of luck! Morten Bunes Gustavsen
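As a cross-check on the simulation, the same probability can be computed directly under the normality assumption: a rounded deviation is at most zero exactly when the raw deviation is below 0.5.

errorStdev <- sd(100 - c(97, 98, 96, 100, 95, 97))
pnorm(0.5, mean = 0, sd = errorStdev)^6   # about 0.054, in line with the simulated 5.35%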
3,560
How much to pay? A practical problem
How about something like a multinomial model? The probability of each observed count is estimated as 1/6, 1/6, ... (based on the 6 observations), so E(X) = 97.17 and Var(X) = (95^2*1/6 + ...) - E(X)^2 ≈ 2.47, and an approximate 95% interval would be [94, 100].
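A quick check of those two numbers in R (each observed count given probability 1/6):

counts <- c(97, 98, 96, 100, 95, 97)
p <- rep(1/6, 6)
Ex <- sum(counts * p)            # 97.17
Vx <- sum(counts^2 * p) - Ex^2   # about 2.47
Ex + c(-1.96, 1.96) * sqrt(Vx)   # roughly [94, 100]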
3,561
Is it important to scale data before clustering?
The issue is what represents a good measure of distance between cases. If you have two features, one where the differences between cases is large and the other small, are you prepared to have the former as almost the only driver of distance? So for example if you clustered people on their weights in kilograms and heights in metres, is a 1kg difference as significant as a 1m difference in height? Does it matter that you would get different clusterings on weights in kilograms and heights in centimetres? If your answers are "no" and "yes" respectively then you should probably scale. On the other hand, if you were clustering Canadian cities based on distances east/west and distances north/south then, although there will typically be much bigger differences east/west, you may be happy just to use unscaled distances in either kilometres or miles (though you might want to adjust degrees of longitude and latitude for the curvature of the earth).
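As a concrete (and entirely synthetic) illustration of the weight/height point, here is a small R sketch; the numbers are invented, and the only claim is that the unscaled distance is dominated by the kilogram column:

set.seed(1)
height <- c(rnorm(50, 1.65, 0.05), rnorm(50, 1.85, 0.05))   # two height groups, in metres
weight <- rnorm(100, 75, 12)                                # no group structure, in kilograms
truth <- rep(1:2, each = 50)
raw <- kmeans(cbind(height, weight), centers = 2, nstart = 10)$cluster
std <- kmeans(scale(cbind(height, weight)), centers = 2, nstart = 10)$cluster
table(raw, truth)   # unscaled: the split mostly follows weight, not the height groups
table(std, truth)   # scaled: the split typically recovers the two height groups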
3,562
Is it important to scale data before clustering?
Other answers are correct, but it might help to get an intuitive grasp of the problem by seeing an example. Below, I generate a dataset that has two clear clusters, but the non-clustered dimension is much larger than the clustered dimension (note the different scales on the axes). Clustering on the non-normalised data fails. Clustering on the normalised data works very well. The same would apply with data clustered in both dimensions, but normalisation would help less. In that case, it might help to do a PCA, then normalise, but that would only help if the clusters are linearly separable and don't overlap in the PCA dimensions. (This example only works so clearly because of the low cluster count)

import numpy as np
import seaborn
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

rnorm = np.random.randn

x = rnorm(1000) * 10
y = np.concatenate([rnorm(500), rnorm(500) + 5])

fig, axes = plt.subplots(3, 1)

axes[0].scatter(x, y)
axes[0].set_title('Data (note different axes scales)')

km = KMeans(2)

clusters = km.fit_predict(np.array([x, y]).T)
axes[1].scatter(x, y, c=clusters, cmap='bwr')
axes[1].set_title('non-normalised K-means')

clusters = km.fit_predict(np.array([x / 10, y]).T)
axes[2].scatter(x, y, c=clusters, cmap='bwr')
axes[2].set_title('Normalised K-means')
3,563
Is it important to scale data before clustering?
It depends on your data. If you have attributes with a well-defined meaning, say latitude and longitude, then you should not scale your data, because this will cause distortion. (K-means might be a bad choice, too - you need something that can handle lat/lon naturally.) If you have mixed numerical data, where each attribute is something entirely different (say, shoe size and weight) and has different units attached (lb, tons, m, kg, ...), then these values aren't really comparable anyway; z-standardizing them is a best practice that gives them equal weight. If you have binary values, discrete attributes or categorical attributes, stay away from k-means. K-means needs to compute means, and the mean value is not meaningful on this kind of data.
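A minimal sketch of the mixed-units case in R (the shoe-size and weight values are invented for illustration):

d <- data.frame(shoe_eu = c(38, 42, 45, 40, 43, 39),
                weight_kg = c(55, 80, 95, 70, 85, 60))
kmeans(scale(d), centers = 2, nstart = 10)$cluster   # z-standardised columns get equal weight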
3,564
Is it important to scale data before clustering?
Standardization is an important step of data preprocessing. It controls the variability of the dataset and converts the data into a specific range using a linear transformation, which can produce better-quality clusters and improve the accuracy of clustering algorithms. Check out the link below to view its effects on a k-means analysis. https://pdfs.semanticscholar.org/1d35/2dd5f030589ecfe8910ab1cc0dd320bf600d.pdf
3,565
Is it important to scale data before clustering?
As explained in this paper, k-means minimizes the error function using the Newton algorithm, i.e. a gradient-based optimization algorithm. Normalizing the data improves the convergence of such algorithms. See here for some details on it. The idea is that if different components of the data (features) have different scales, then derivatives tend to align along the directions with higher variance, which leads to poorer/slower convergence.
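One rough way to see this empirically (a sketch of my own, not taken from the cited paper) is to compare the number of Lloyd iterations k-means needs with and without scaling; the exact counts vary with the seed and the data:

set.seed(2)
X <- cbind(rnorm(1000, sd = 100), rnorm(1000, sd = 1) + rep(c(0, 4), each = 500))
kmeans(X, centers = 2, algorithm = "Lloyd", iter.max = 100)$iter         # unscaled
kmeans(scale(X), centers = 2, algorithm = "Lloyd", iter.max = 100)$iter  # scaled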
3,566
Is it important to scale data before clustering?
Standardization (z-score normalization) brings the data to a mean of 0 and a standard deviation of 1. This can be accomplished by (x - xmean) / stdev. Normalization brings the data to the scale [0,1]. This can be accomplished by (x - xmin) / (xmax - xmin). For algorithms such as clustering, each feature's range can differ. Let's say we have income and age: the range of income is [65000, 150000] and the range of age is [21, 90]. Since we calculate distances (Euclidean, Manhattan, etc.), it is important to have each variable on a comparable scale. So I believe in normalizing to bring all the features to the range [0,1].
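Written out as small R functions, with the income and age ranges from the text (the intermediate values are invented):

z_standardize <- function(x) (x - mean(x)) / sd(x)        # mean 0, sd 1
min_max <- function(x) (x - min(x)) / (max(x) - min(x))   # range [0, 1]
income <- c(65000, 90000, 120000, 150000)
age <- c(21, 40, 65, 90)
data.frame(income = min_max(income), age = min_max(age))  # both features now in [0, 1]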
3,567
Is this chart showing the likelihood of a terrorist attack statistically useful?
Imagine your job is to forecast the number of Americans that will die from various causes next year. A reasonable place to start your analysis might be the National Vital Statistics Data final death data for 2014. The assumption is that 2017 might look roughly like 2014. You'll find that approximately 2,626,000 Americans died in 2014:

614,000 died of heart disease.
592,000 died of cancer.
147,000 from respiratory disease.
136,000 from accidents.
...
42,773 from suicide.
42,032 from accidental poisoning (a subset of the accidents category).
15,809 from homicide.
0 from terrorism under the CDC, NCHS classification.
18 from terrorism using a broader definition (University of Maryland Global Terrorism Database). See link for definitions.

By my quick count, 0 of the perpetrators of these 2014 attacks were born outside the United States. Note that anecdote is not the same as data, but I've assembled links to the underlying news stories here: 1, 2, 3, 4, 5, 6, 7, 8, and 9. Terrorist incidents in the U.S. are quite rare, so estimating off a single year is going to be problematic. Looking at the time series, what you see is that the vast majority of U.S. terrorism fatalities came during the 9/11 attacks (see this report from the National Consortium for the Study of Terrorism and Responses to Terrorism). I've copied their Figure 1 below:

Immediately you see that you have an outlier, rare-events problem. A single outlier is driving the overall number. If you're trying to forecast deaths from terrorism, there are numerous issues:

What counts as terrorism? Terrorism can be defined broadly or narrowly.
Is the process stationary? If we take a time-series average, what are we estimating?
Are conditions changing? What does a forecast conditional on current conditions look like?
If the vast majority of deaths come from a single outlier, how do you reasonably model that?

We can get more data in a sense by looking more broadly at other countries and going back further in time, but then there are questions as to whether any of those patterns apply in today's world. IMHO, the FT graphic picked an overly narrow definition (the 9/11 attacks don't show up in the graphic because the attackers weren't refugees). There are legitimate issues with the chart, but the FT's broader point is correct that terrorism in the U.S. is quite rare. Your chance of being killed by a foreign-born terrorist in the United States is close to zero. Life expectancy in the U.S. is about 78.7 years. What has moved life expectancy numbers down in the past has been events like the 1918 Spanish flu pandemic or WWII. Additional risks to life expectancy now might include obesity and opioid abuse. If you're trying to create a detailed estimate of terrorism risk, there are huge statistical issues, but understanding the big picture requires not so much statistics as understanding orders of magnitude and basic quantitative literacy.

A more reasonable concern... (perhaps veering off topic) Looking back at history, the way huge numbers of people get killed is through disease, genocide, and war. A more reasonable concern might be that some rare terrorist event triggers something catastrophic (e.g., how the assassination of Archduke Ferdinand helped set off WWI). Or one could worry about nuclear weapons in the hands of someone crazy. Thinking about extremely rare but catastrophic events is incredibly difficult. It's a multidisciplinary pursuit and goes far outside of statistics.
Perhaps the only statistical point here is that it's hard to estimate the probability and effects of some event which hasn't happened? (Except to say that it can't be that common or it would have happened already.)
3,568
Is this chart showing the likelihood of a terrorist attack statistically useful?
Problems with the chart:

It implies refugees are more likely than other groups of people to commit acts of terror. Why not frame it in terms of migrants in general? And what about acts of terror committed by a country's own citizens? How does it define a refugee?

The comparative groups don't make sense. If we are going to look at killings, why not compare it to other forms of killing, such as those killed in gun-related crime? Comparing to lightning strikes (which man cannot control) or lottery wins (which would be a positive rather than a negative thing) makes little sense.

It's very, very generalised. Expressing it as a chance per billion people suggests these probabilities are universally true. The information would be more useful if we were to make use of other prior knowledge, such as geographic location, the comparative volume of people moving cross-country over a period of time, the level of integration of refugees in the destination country, etc. Conditional probabilities are often more useful than general probabilities. (For example, we know that there are more lightning strikes in Venezuela, where the Catatumbo River meets Lake Maracaibo (apparently the most lightning-struck place on earth), than in the south of the United Kingdom.)

As the question states, relying on general probabilities unconditionally can lead to the wrong conclusion.
3,569
Is this chart showing the likelihood of a terrorist attack statistically useful?
This chart is definitely incomplete without at least the following information: how "terrorism" is defined for these purposes, how "refugee" is defined for these purposes, what time-span this data covers, and which people are included--for instance, does the lightning strike data include people who live in nursing homes and never go outside? Presumably (hopefully) at least some of these points are covered in articles and essays that employ this graphic. I'm also going to assume that the specific numbers are reasonably accurate.

Is this chart as presented useful for accurately showing what the threat level from refugees is? Not really. It tells us what the threat level from refugee terrorism is compared to the threat level from lightning strikes and vending machine accidents. If we were trying to decide whether to devote resources to restricting refugee entry or to overhauling vending machine design, that could be useful information. But we're not.

Is there necessary statistical context that makes this chart more or less useful? Absolutely! People are using this chart to argue that "refugees aren't dangerous," but that's misleading, because no one really cares whether refugees are dangerous compared to lightning. Well, you could argue that because other things are more dangerous, we should all just stop worrying about less-dangerous things at all. Don't worry about refugees, because lightning is more dangerous! Don't worry about plane crashes, because cars are more dangerous! Personally, I think this is a stupid argument, especially since sometimes less-likely bad outcomes are easier to prevent, and so in some sense more worth spending resources to prevent. Also, I haven't seen anyone making this argument about refugees or terrorism lately.

If we're going to talk about whether it makes sense to restrict refugee entry to prevent terrorism, there are more useful things to look at. (Whether there's reliable data available for them is a different question, but even if there isn't, that's no excuse to look at useless data and pretend it's useful.)

We could compare the likelihood of being killed in a terrorist attack by a refugee with the likelihood of being killed in a terrorist attack by a non-refugee. If, hypothetically, the first is four times as likely, that would mean that 4/5 of terrorism deaths are caused by refugees, so if we banned refugees it could cut terrorism deaths by 80%, other factors held constant. What other ramifications such a policy would have, and whether it would be a good idea on balance, has nothing to do with lightning deaths. Maybe the second is 1000 times as likely. I have no idea, and I can't tell from this chart. Side note: overall deaths from terrorism in the US are pretty low to begin with, so we might not be able to draw strong inferences.

We could also compare the likelihood of a refugee being a terrorist with the likelihood of a non-refugee immigrant being a terrorist, or with the likelihood of a non-immigrant being a terrorist.

We could look at how any of these probabilities has changed over time, and how they're related to other factors, such as global conflict levels, which might give us some clues as to how much whatever past data this chart is based on is actually likely to tell us about the future.

We could look at the likelihood of being killed by a refugee, a vending machine, or a lightning strike controlled for how frequently you interact with each of them.
Because people have way more contact with vending machines than with refugees, there could be more deaths from vending machines even if a random refugee/non-refugee interaction was more likely to result in death than a random vending machine/person interaction. Is it more likely? I have no idea, and I can't tell from this chart. Even saying that "refugees are less dangerous than vending machines" based on this chart is misleading. It's like saying that a rare disease is "less dangerous" than a common disease without looking at which disease has a higher fatality rate. It's true for a certain meaning of "less dangerous," but it's totally irrelevant to discussions of what preventative measures to take against a potential outbreak of the usually rare disease. Pretty much any of these statistics would be more useful for discussing and making policy about refugee entry than the ones in this chart, but they'd probably be less cute and shareable.
3,570
Is this chart showing the likelihood of a terrorist attack statistically useful?
"On the Frequency of Severe Terrorist Events": this paper attempts to model the likelihood that a terrorist attack of any given severity occurs. The conclusion is that terrorist events follow a power-law distribution, which is heavy-tailed. What this means is that most terrorism-related deaths happen due to things like 9/11, which appear to be outliers, or, as the author puts it: "The regular scaling in the upper tails of these distributions immediately demonstrates that events orders of magnitude larger than the average event size are not outliers, but are instead in concordance with a global pattern in the frequency statistics of terrorist attacks." The chart likely does not account for this - it seems to be simply counting the number of people who have been killed by terrorists. By way of analogy, the vast majority of earthquake deaths in California came from a single quake in 1906. Modelling the threat posed by earthquakes has to take into account the risk of really big earthquakes, and modelling the risk of terrorism has to take into account the risk of really big terrorism.
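To get a feel for why a heavy tail changes the calculation, here is a small illustrative simulation in R; the tail exponent of 2.1 is an assumed value for illustration, not a figure from the paper:

set.seed(3)
alpha <- 2.1                                   # assumed tail exponent, illustration only
severity <- 1 / runif(1e4)^(1 / (alpha - 1))   # power-law (Pareto) event sizes, minimum 1
light <- rexp(1e4, rate = 1 / mean(severity))  # light-tailed events with a similar mean
max(severity) / sum(severity)                  # heavy tail: one event can dominate the total
max(light) / sum(light)                        # light tail: the largest event is a small share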
3,571
Is this chart showing the likelihood of a terrorist attack statistically useful?
This chart is only useful if you want to know the probability of being killed by a person with a particular status in particular circumstances over the period covered by the study (1975 to 2015). What it's useless for includes:

knowing how probable it is to be killed by a refugee - cases of homicide committed by refugees which didn't count as terrorism (example) are excluded;
knowing how probable it is to die in a terrorist attack by people you would think of as refugees - for example, the Boston Marathon bombing is excluded from this study, because the terrorists applied for asylum (so they stopped being refugees and became asylum seekers) before the attack.

Also, the results of this study cannot be easily extrapolated into the future, especially considering that the study is designed to advocate the policy of letting more refugees in. A study in relative numbers (normalized by the total number of refugees) would make more sense. As a side note, the danger of vending machines is only real when you try to steal food or money from them. So far, all vending-machine-related deaths have involved people rocking or tilting these machines.
3,572
Is this chart showing the likelihood of a terrorist attack statistically useful?
Your intuition is correct that the statistic above doesn't tell the whole story. Yes, past refugee terrorist behaviour isn't necessarily a good indicator of future refugee terrorist behaviour, but that isn't the problem. The problem is that even one or two large-scale terrorist attacks would be awful, and statistics isn't appropriate for dealing with such small numbers of things. If we only consider mass murder refugee terror attacks, there have never been any, at least not in the past fifteen years. The figure in the graph comes from the fact there were 3 murders by refugees in terror attacks since 1975[1], which is essentially zero compared to the terror attacks everyone is scared of. But on the basis of statistics alone, that data isn't enough to rule out the chance that a huge terror attack is coming. We can't say "the threat is low", because statistics can't tell us if the threat is low enough.

First off, imagine how upset you would be if a refugee committed a horrible terrorist attack that killed 100 innocent people. Now imagine how much you would hate the idea of a 20% chance of that happening. We would want to put safeguards into the refugee program. Now, let's look at the statistics. How can you prove the probability of a refugee terrorist attack next year is less than 20%? Well, the probability of a terrorist attack went up after the awful September 11 attacks, so you could say there haven't been any attacks in 15 years. If the probability of an attack was 20%, then no attacks in 15 years would be pretty unlikely (p=3.5%). So we can be pretty certain the chance of a major terrorist attack next year by a refugee is less than 20%. But now suppose we're interested in the chance of a terrorist attack within five years. Then we only have three independent samples since 2001 (2001-2006, 2006-2011, and 2011-2016). We can't say with any confidence that the probability of a terrorist attack within five years is less than 20%. And a 20% risk within 5 years of a tragic terrorist attack by a refugee where 100 people die is awful, and would certainly justify changes to the refugee program if it were real. And even a 1% chance of a major attack in ten years would be awful. In terms of probability, one devastating attack the same size as the September 11 attacks, done by refugees, would turn the chance per year of being killed in a terror attack by refugees into 1 in 1 million, which is almost as bad as the odds of being killed by lightning.

But notice that the logic I used there could be applied to anything where the odds of it changed 15 years ago. We failed to disprove a threat of something happening, based only on the fact that it happened zero times. It would be equally hard to say the threat of a terrorist attack by a refugee named Tim was less than 20% in five years, but it wouldn't make sense to stop all refugees named Tim. It would be equally hard to prove the threat of a terrorist attack by an astronaut was less than 20% in five years, but you don't see anyone saying we should stop letting astronauts in. Statistics is the wrong approach to dealing with extreme events. If you want to be certain that something won't happen once in ten years, you can't use past experience to convince yourself of the fact. That's why it's wrong to say the graph shows the terrorism threat from refugees is very low. If we're only using history and a single major incident is unacceptable, then no threat is very low.
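The p = 3.5% figure above is just the binomial probability of zero attacks in 15 independent years when the annual probability is 20%; in R:

0.8^15               # about 0.035: chance of 15 attack-free years at a 20% yearly probability
dbinom(0, 15, 0.2)   # the same number via the binomial density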
3,573
Is this chart showing the likelihood of a terrorist attack statistically useful?
Others have answered in a great deal more detail than I will, but here's my 2 cents: The details just don't matter. You can quibble about the definition of terrorism, migrants, etc, but when the deaths due to terrorism are multiple orders of magnitude smaller than other causes of death, the difference between the broad and narrow definitions is vanishingly insignificant. The situation gets even more absurd when you look at it as an economic optimisation problem: given finite resources, how many lives can you save per dollar by investing in terrorism prevention versus, say, heart disease prevention? Again, you can quibble over the details of this-or-that spending, but given that the USA's spending in response to 9/11 is trillions of dollars, you don't need to be too precise about it to draw your conclusions as to whether or not it's a comparatively good investment (hint: it's not). Your chances of being killed by terrorists are effectively zero. In this situation, competing models of the process are more or less interchangeably useless because the anomalies are so proportionally large.
Is this chart showing the likelihood of a terrorist attack statistically useful?
Others have answered in a great deal more detail than I will, but here's my 2 cents: The details just don't matter. You can quibble about the definition of terrorism, migrants, etc, but when the death
Is this chart showing the likelihood of a terrorist attack statistically useful? Others have answered in a great deal more detail than I will, but here's my 2 cents: The details just don't matter. You can quibble about the definition of terrorism, migrants, etc, but when the deaths due to terrorism are multiple orders of magnitude smaller than other causes of death, the difference between the broad and narrow definitions is vanishingly insignificant. The situation gets even more absurd when you look at it as an economic optimisation problem: given finite resources, how many lives can you save per dollar by investing in terrorism prevention versus, say, heart disease prevention? Again, you can quibble over the details of this-or-that spending, but given that the USA's spending in response to 9/11 is trillions of dollars, you don't need to be too precise about it to draw your conclusions as to whether or not it's a comparatively good investment (hint: it's not). Your chances of being killed by terrorists are effectively zero. In this situation, competing models of the process are more or less interchangeably useless because the anomalies are so proportionally large.
Is this chart showing the likelihood of a terrorist attack statistically useful? Others have answered in a great deal more detail than I will, but here's my 2 cents: The details just don't matter. You can quibble about the definition of terrorism, migrants, etc, but when the death
3,574
Is this chart showing the likelihood of a terrorist attack statistically useful?
My feeling is that the question is about blatant political activism, is not evidence of anything relevant, and my concern is that such things should not be posted on this site. The chart shown is propaganda, and propaganda is problematic no matter who is presenting it for whatever reason. Does that mean that we should neglect the application of statistics to a problem because the propaganda problem itself is absurd? Actually the need is striking. Lies, damned lies, and statistics is a historical reference to this, and underlines the magician's trick of misdirection that uses our own preconceptions to fool us en masse. Can we find any stable data concerning terrorism? Sure, but we really have no motive for doing so. For example, let's take the number of US attacks from @MatthewGunn's table above and plot that. As the data is noisy, I also did a running average of the $\mu_{i}=\frac{1}{2}X_i+\frac{1}{2}\mu_{i-1}$ type (a short numerical sketch of this smoothing follows this answer). In either case, it is clear that the number of terrorist attacks has decreased significantly since 1995, and that this improvement appears to have bottomed out circa 2006-13. To continue our magic trick, let us point out that a lack of terrorist attacks means a lack of deaths from attacks no matter how noisy the relationship is between attacks and deaths caused. True enough, we do not know that the ratio between terrorist attacks and deaths caused is an absolute constant in time, but any such hypothetical effect would arguably enhance the result. So, is it worth investing billions or trillions in anti-terrorism just to reduce the number of terrorist attacks from five dozen to one dozen per annum? Obviously not. Terrorism is obviously not the problem. The terrorism magic trick relies on public perception fostered by obviously irresponsible journalism delivered on behalf of the desperately unscrupulous for digestion by the gullible. Then our "saviors" in the alt-left media, e.g., CNN, who are the progenitors of terrorism mythology$^1$ to begin with (e.g., CNN's three weeks of continuous loop narrative and images of 9/11 Twin Towers' attack), seek to debunk the nonsense they created by dangling it in front of our eyes, while pulling a fast one with finger counting. Indeed, this propaganda was so effective that it resulted in decreased Freedom of the Press. Relations between the media and the Bush administration sharply deteriorated after the president used the pretext of “national security” to regard as suspicious any journalist who questioned his “war on terrorism.” Now, does the OP question mean anything? The propaganda sheet cited by the OP is being used in a discussion of vetting of refugees, not terrorism. Of the 1000 current, ongoing FBI investigations for terrorism, more than 300 (nearly 1/3rd) are being conducted on refugees. Source: Jeff Sessions, US Attorney General. The FBI in 2016 had 12,486 FTEs working in Counterterrorism/Counterintelligence. The FBI is one of 13,160 law enforcement agencies that, as of October 31, 2015, collectively employed 635,781 sworn officers. As counterintelligence is not counterterrorism per se, and assuming proportionality, one would expect at least 15,000 active investigations of refugees for crimes other than terrorism. Now, considering that in 2015 in the US there were an estimated 1,197,704 violent crimes and 7,993,631 property crimes, 15,000 active investigations of refugees hardly seems exaggerated. 
However, refugees from certain lawless societies bring bad habits with them, like sticking a knife in you for recreational purposes, which although not terrorism per se, is also a useless distinction, and, when prison records are examined (e.g., in New York State, which unusually, actually keeps records) there are indeed some ethnic groups who have a high prison population. Overall, 10.0% of NY prisoners are foreign born. Of this 10% or 5,510 detainees, the largest subgroup, 2,697, reported birth in one of the island nations in the Caribbean basin, and not the Near East (only 78 detainees). So, yes, there is a problem with crime among the foreign born if the NY figures are any indication. Of the 19,889,657 NY State residents, Caribbean born are 1,080,000 or 5.4%. Among the prison detainees, 4.9% are Caribbean Islanders. So, it would seem they are just as bad as everyone else. Now concerning the Near East born, for all 50 states in the US, if I had to take a guess, I would guess ($H0$) that they are like everybody else with respect to crime, until proven otherwise. Moreover, the general lack of data for refugee crime and the lack of general definitive finding leaves us to speculate that the question itself is not statistically interesting. That is, we might be far better off looking at the socioeconomic status of a potential refugee; education, wealth, social status, occupation, etc. than at the isolated fact that someone is, or is not, a refugee. There are certain neighborhoods I would not walk around in at all, and others in which I would feel safe. This has less to do with which ethnic groups are in those neighborhoods than how much crime occurs in them. Preventing criminals from entering the US seems reasonable, no matter where they are from. That we find ourselves in a ridiculous political environment is a call to arms for searching out the truth. The only solution to that problem is to analyze well enough to find the truth and present that, and statistics is a very powerful tool for doing just that. Only the truth has enough economic leverage to be worth investing in. However, models have to achieve significance to be worth talking about, and, it is up to us to do exactly that. In conclusion, I see no evidence in the OP's chart that merits consideration. It is off-topic for the topic it pretends to discuss, and is meaningless. Then what is the problem, and what is its solution? The problem: Islamic culture regards Western culture as degenerate, and Western culture regards Islamic culture as a relic of the 12$^{\text{th}}$ century. The solution: 1) Islamic culture should be allowed a minimally acceptable degree of cultural isolation from the West. 2) Western culture should be allowed its own beliefs, which are sometimes difficult to accept even for those in the West, and without interference from Islamic culture. While many people are busy misunderstanding President Trump, I note that he wishes to end the problematic practice of promoting Western values as if they were an international "gold" standard. Indeed, this is less a right wing nationalist view than a simple recognition of the fact that the world is not, and may never be, ready for a mono-culture. Dowd C, Raleigh C. The myth of global Islamic terrorism and local conflict in Mali and the Sahel. African affairs. 2013;112/448:498-509. doi:10.1093/afraf/adt039.
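For concreteness, a minimal Python sketch of the running average $\mu_{i}=\frac{1}{2}X_i+\frac{1}{2}\mu_{i-1}$ used in the answer above (an exponentially weighted moving average with weight 1/2). The attack counts below are invented for illustration only and are not the data plotted in the answer:

    import numpy as np

    # mu_i = 0.5 * X_i + 0.5 * mu_{i-1}: each new value gets weight 1/2 and
    # the running average keeps the other 1/2.
    attacks_per_year = np.array([60, 45, 50, 30, 25, 20, 15, 18, 12, 14])   # made-up counts

    mu = np.empty_like(attacks_per_year, dtype=float)
    mu[0] = attacks_per_year[0]            # initialise with the first observation
    for i in range(1, len(attacks_per_year)):
        mu[i] = 0.5 * attacks_per_year[i] + 0.5 * mu[i - 1]

    print(np.round(mu, 1))                 # smoothed series, less noisy than the raw counts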
Is this chart showing the likelihood of a terrorist attack statistically useful?
My feeling is that the question is about blatant political activism, is not evidence of anything relevant, and my concern is that such things should not be posted on this site. The chart shown, is pro
Is this chart showing the likelihood of a terrorist attack statistically useful? My feeling is that the question is about blatant political activism, is not evidence of anything relevant, and my concern is that such things should not be posted on this site. The chart shown, is propaganda, and propaganda is problematic no matter who is presenting it for whatever reason. Does that mean that we should neglect the application of statistics to a problem because the propaganda problem itself is absurd? Actually the need is striking. Lies, damned lies, and statistics is a historical reference to this, and underlines the magician's trick of misdirection to use our own preconceptions to fool us en mass. Can we find any stable data concerning terrorism? Sure, but we really have no motive for doing so. For example, let's take the number of US attacks from @MatthewGunn's Table, above and plot that. As the data is noisy, I also did a running average of the $\mu_{i}=\frac{1}{2}X_i+\frac{1}{2}\mu_{i-1}$ type. In either case, it is clear that the number of terrorist attacks has decreased significantly since 1995, and that this improvement appears bottomed out circa 2006-13. To continue our magic trick, let us point out that a lack of terrorist attacks means a lack of deaths from attacks no matter how noisy the relationship is between attacks and deaths caused. True enough, we do not know that the ratio between terrorist attacks and deaths caused is an absolute constant in time, but any such hypothetical effect would arguably enhance the result. So, is it worth investing billions or trillions in anti-terrorism just to reduce the number of terrorist attacks from five dozen to one dozen per annum? Obviously not. Terrorism is obviously not the problem. The terrorism magic trick relies on public perception fostered by obviously irresponsible journalism delivered on behalf of the desperately unscrupulous for digestion by the gullible. Then our "saviors" in the alt-left media, e.g., CNN, who are the progenitors of terrorism mythology$^1$ to begin with (e.g., CNN's three weeks of continuous loop narrative and images of 9/11 Twin Towers' attack), seek to debunk the nonsense they created by dangling it in front of our eyes, while pulling a fast one with finger counting. Indeed, this propaganda was so effective that it resulted in decreased Freedom of the Press. Relations between the media and the Bush administration sharply deteriorated after the president used the pretext of “national security” to regard as suspicious any journalist who questioned his “war on terrorism.” Now, does the OP question mean anything? The propaganda sheet cited by the OP is being used in a discussion of vetting of refugees, not terrorism. Of the 1000 current, ongoing FBI investigations for terrorism more than 300 (nearly 1/3rd) are being conducted on refugees Source: Jeff Sessions, US Attorney General. The FBI in 2016 had 12,486 FTEs working in Counterterrorism/Counterintelligence. The FBI is one of 13,160 law enforcement agencies that, as of October 31, 2015, collectively employed 635,781 sworn officers. As counterintelligence is not counterterrorism per se, and assuming proportionality one would expect at least 15,000 active investigations of refugees for crimes other than terrorism. Now, considering that in 2015 in the US there were an estimated 1,197,704 violent crimes, and 7,993,631 property crimes, 15,000 active investigations of refugees hardly seems exaggerated. 
However, refugees from certain lawless societies bring bad habits with them, like sticking a knife in you for recreational purposes, which although not terrorism per se, is also a useless distinction, and, when prison records are examined (e.g., in New York State, which unusually, actually keeps records) there are indeed some ethnic groups who have a high prison population. Overall, 10.0% of NY prisoners are foreign born. Of this 10% or 5,510 detainees, the largest subgroup, 2,697, reported birth in one of the island nations in the Caribbean basin, and not the Near East (only 78 detainees). So, yes, there is a problem with crime among the foreign born if the NY figures are any indication. Of the 19,889,657 NY State residents, Caribbean born are 1,080,000 or 5.4%. Among the prison detainees, 4.9% are Caribbean Islanders. So, it would seem they are just as bad as everyone else. Now concerning the Near East born, for all 50 states in the US, if I had to take a guess, I would guess ($H0$) that they are like everybody else with respect to crime, until proven otherwise. Moreover, the general lack of data for refugee crime and the lack of general definitive finding leaves us to speculate that the question itself is not statistically interesting. That is, we might be far better off looking at the socioeconomic status of a potential refugee; education, wealth, social status, occupation, etc. than at the isolated fact that someone is, or is not, a refugee. There are certain neighborhoods I would not walk around in at all, and others in which I would feel safe. This has less to do with which ethnic groups are in those neighborhoods than how much crime occurs in them. Preventing criminals from entering the US seems reasonable, no matter where they are from. That we find ourselves in a ridiculous political environment is a call to arms for searching out the truth. The only solution to that problem is to analyze well enough to find the truth and present that, and statistics is a very powerful tool for doing just that. Only the truth has enough economic leverage to be worth investing in. However, models have to achieve significance to be worth talking about, and, it is up to us to do exactly that. In conclusion, I see no evidence in the OP's chart that merits consideration. It is off-topic for the topic it pretends to discuss, and is meaningless. Then what is the problem, and what is its solution? The problem: Islamic culture regards Western culture as degenerate, and Western culture regards Islamic culture as a relic of the 12$^{\text{th}}$ century. The solution: 1) Islamic culture should be allowed a minimally acceptable degree of cultural isolation from the West. 2) Western culture should be allowed its own beliefs, which are sometimes difficult to accept even for those in the West, and without interference from Islamic culture. While many people are busy misunderstanding President Trump, I note that he wishes to end the problematic practice of promoting Western values as if they were an international "gold" standard. Indeed, this is less a right wing nationalist view than a simple recognition of the fact that the world is not, and may never be, ready for a mono-culture. Dowd C, Raleigh C. The myth of global Islamic terrorism and local conflict in Mali and the Sahel. African affairs. 2013;112/448:498-509. doi:10.1093/afraf/adt039.
Is this chart showing the likelihood of a terrorist attack statistically useful? My feeling is that the question is about blatant political activism, is not evidence of anything relevant, and my concern is that such things should not be posted on this site. The chart shown, is pro
3,575
Is this chart showing the likelihood of a terrorist attack statistically useful?
This is a picture representation of numbers for people who are too lazy to look at the numbers. This is almost statistically useless.
Is this chart showing the likelihood of a terrorist attack statistically useful?
This is a picture representation of numbers for people who are too lazy to look at the numbers. This is almost statistically useless.
Is this chart showing the likelihood of a terrorist attack statistically useful? This is a picture representation of numbers for people who are too lazy to look at the numbers. This is almost statistically useless.
Is this chart showing the likelihood of a terrorist attack statistically useful? This is a picture representation of numbers for people who are too lazy to look at the numbers. This is almost statistically useless.
3,576
Rule of thumb for number of bootstrap samples
My experience is that statisticians won't take simulations or bootstraps seriously unless the number of iterations exceeds 1,000. MC error is a big issue that's a little underappreciated. For instance, this paper used Niter=50 to demonstrate LASSO as a feature selection tool. My thesis would have taken a lot less time to run had 50 iterations been deemed acceptable! I recommend that you always inspect the histogram of the bootstrap samples. Their distribution should appear fairly regular. I don't think any plain numerical rule will suffice, and it would be overkill to perform, say, a double-bootstrap to assess MC error. Suppose you were estimating the mean of a ratio of two independent standard normal random variables; some statistician might recommend bootstrapping it since the integral is difficult to compute. If you have basic probability theory under your belt, you would recognize that this ratio forms a Cauchy random variable with a non-existent mean. Any other leptokurtic distribution would require several additional bootstrap iterations compared to a more regular Gaussian density counterpart. In that case, 1000, 100000, or 10000000 bootstrap samples would be insufficient to estimate that which doesn't exist. The histogram of these bootstraps would continue to look irregular and wrong. There are a few more wrinkles to that story. In particular, the bootstrap is only really justified when the moments of the data generating probability model exist. That's because you are using the empirical distribution function as a straw man for the actual probability model, and assuming they have the same mean, standard deviation, skewness, 99th percentile, etc. In short, a bootstrap estimate of a statistic and its standard error is only justified when the histogram of the bootstrapped samples appears regular beyond reasonable doubt and when the moments required by the statistic actually exist.
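A minimal Python sketch of the histogram diagnostic described above, using the normal-ratio (Cauchy) example; the sample size of 500 and B = 10,000 are arbitrary illustrative choices:

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)

    # Ratio of two independent standard normals: a Cauchy variable whose
    # mean does not exist. No number of bootstrap replications can rescue
    # the bootstrap estimate of "the mean" here.
    x = rng.standard_normal(500) / rng.standard_normal(500)

    B = 10_000
    boot_means = np.array([rng.choice(x, size=x.size, replace=True).mean()
                           for _ in range(B)])

    # The histogram stays irregular: heavy tails, wild outliers, and the
    # picture changes noticeably from one run to the next.
    plt.hist(boot_means, bins=200)
    plt.title("Bootstrap distribution of the mean of a Cauchy sample")
    plt.show()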
Rule of thumb for number of bootstrap samples
My experience is that statisticians won't take simulations or bootstraps seriously unless the number of iterations exceeds 1,000. MC error is a big issue that's a little under appreciated. For instanc
Rule of thumb for number of bootstrap samples My experience is that statisticians won't take simulations or bootstraps seriously unless the number of iterations exceeds 1,000. MC error is a big issue that's a little under appreciated. For instance, this paper used Niter=50 to demonstrate LASSO as a feature selection tool. My thesis would have taken a lot less time to run had 50 iterations been deemed acceptable! I recommend that you should always inspect the histogram of the bootstrap samples. Their distribution should appear fairly regular. I don't think any plain numerical rule will suffice, and it would be overkill to perform, say, a double-bootstrap to assess MC error. Suppose you were estimating the mean from a ratio of two independent standard normal random variables, some statistician might recommend bootstrapping it since the integral is difficult to compute. If you have basic probability theory under your belt, you would recognize that this ratio forms a Cauchy random variable with a non-existent mean. Any other leptokurtic distribution would require several additional bootstrap iterations compared to a more regular Gaussian density counterpart. In that case, 1000, 100000, or 10000000 bootstrap samples would be insufficient to estimate that which doesn't exist. The histogram of these bootstraps would continue to look irregular and wrong. There are a few more wrinkles to that story. In particular, the bootstrap is only really justified when the moments of the data generating probability model exist. That's because you are using the empirical distribution function as a straw man for the actual probability model, and assuming they have the same mean, standard deviation, skewness, 99th percentile, etc. In short, a bootstrap estimate of a statistic and its standard error is only justified when the histogram of the bootstrapped samples appears regular beyond reasonable doubt and when the bootstrap is justified.
Rule of thumb for number of bootstrap samples My experience is that statisticians won't take simulations or bootstraps seriously unless the number of iterations exceeds 1,000. MC error is a big issue that's a little under appreciated. For instanc
3,577
Rule of thumb for number of bootstrap samples
edit: If you are serious about having enough samples, what you should do is run your bootstrap procedure several times with what you hope are enough samples, and see how much the bootstrap estimates "jump around". If the repeated estimates do not differ much (where "much" depends on your specific situation) you are most likely fine. Of course you can estimate how much the repeated estimates jump around by calculating the sample SD or similar. If you want a reference and a rule of thumb, Wilcox (2010) writes "599 is recommended for general use." But this should be considered only a guideline, or perhaps the minimum number of samples you should consider. If you want to be on the safe side there is no reason (if it is computationally feasible) why you should not generate an order of magnitude more samples. On a personal note, I tend to run 10,000 samples when estimating "for myself" and 100,000 samples when estimating something passed on to others (but this is quick as I work with small datasets). Reference Wilcox, R. R. (2010). Fundamentals of modern statistical methods: Substantially improving power and accuracy. Springer.
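A small Python sketch of the "repeat the whole bootstrap and see how much it jumps around" check suggested above; the data, the statistic (a bootstrap SE of the median), and the B values tried are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(42)
    data = rng.exponential(scale=2.0, size=100)   # some skewed sample, for illustration

    def boot_median_se(x, B, rng):
        """Bootstrap standard error of the median based on B resamples."""
        meds = np.array([np.median(rng.choice(x, size=x.size, replace=True))
                         for _ in range(B)])
        return meds.std(ddof=1)

    # Repeat the whole bootstrap several times and see how much the answer
    # "jumps around" for a given B. If the spread across runs is negligible
    # relative to the precision you care about, B is large enough.
    for B in (500, 5_000, 50_000):
        reps = [boot_median_se(data, B, rng) for _ in range(10)]
        print(f"B={B:>6}: mean SE={np.mean(reps):.4f}, SD across runs={np.std(reps, ddof=1):.4f}")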
Rule of thumb for number of bootstrap samples
edit: If you are serious about having enough samples, what you should do is to run your bootstrap procedure with, what you hope are, enough samples a number of times and see how much the bootstrap est
Rule of thumb for number of bootstrap samples edit: If you are serious about having enough samples, what you should do is to run your bootstrap procedure with, what you hope are, enough samples a number of times and see how much the bootstrap estimates "jump around". If the repeated estimates does not differ much (where "much" depends on your specific situation) your are most likely fine. Of course you can estimate how much the repeated estimates jump around by calculating the sample SD or similar. If you want a reference and a rule of thumb Wilcox(2010) writes "599 is recommended for general use." But this should be considered only a guideline or perhaps the minimum number of samples you should consider. If you want to be on the safe side there is no reason (if it is computationally feasible) why you should not generate an order of magnitude more samples. On a personal note I tend to run 10,000 samples when estimating "for myself" and 100,000 samples when estimating something passed on to others (but this is quick as I work with small datasets). Reference Wilcox, R. R. (2010). Fundamentals of modern statistical methods: Substantially improving power and accuracy. Springer.
Rule of thumb for number of bootstrap samples edit: If you are serious about having enough samples, what you should do is to run your bootstrap procedure with, what you hope are, enough samples a number of times and see how much the bootstrap est
3,578
Rule of thumb for number of bootstrap samples
I start by responding to something raised in another answer: why such a strange number as "$599$" (number of bootstrap samples)? This applies also to Monte Carlo tests (to which bootstrapping is equivalent when the underlying statistic is pivotal), and comes from the following: if the test is to be exact, then, if $\alpha$ is the desired significance level, and $B$ is the number of samples, the following relation must hold: $$\alpha \cdot (1+B) = \text{integer}$$ Now consider typical significance levels $\alpha_1 = 0.1$ and $\alpha_2 = 0.05$. We have $$B_1 = \frac {\text{integer}}{0.1} - 1,\;\;\; B_2 = \frac {\text{integer}}{0.05} - 1$$ This "minus one" is what leads to proposed numbers like "$599$", in order to ensure an exact test. I took the following information from Davidson, R., & MacKinnon, J. G. (2000). Bootstrap tests: How many bootstraps? Econometric Reviews, 19(1), 55-68 (the working paper version is freely downloadable). As regards a rule of thumb, the authors examine the case of bootstrapping p-values and suggest that for tests at the $0.05$ level the minimum number of samples is about 400 (so $399$), while for a test at the $0.01$ level it is about 1500 (so $1499$). They also propose a pre-testing procedure to determine $B$ endogenously. After simulating their procedure they conclude: "It is easy to understand why the pretesting procedure works well. When the null hypothesis is true, B can safely be small, because we are not concerned about power at all. Similarly, when the null is false and test power is extremely high, B does not need to be large, because power loss is not a serious issue. However, when the null is false and test power is moderately high, B needs to be large in order to avoid loss of power. The pretesting procedure tends to make B small when it can safely be small and large when it needs to be large." At the end of the paper they also compare it to another procedure that has been proposed in order to determine $B$, and they find that theirs performs better.
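A tiny Python check of the exactness condition $\alpha \cdot (1+B) = \text{integer}$ discussed above, showing why numbers such as 399, 599 and 1499 keep appearing:

    # Which numbers of bootstrap / Monte Carlo samples B give an exact test,
    # i.e. make alpha * (1 + B) an integer?
    def is_exact(alpha, B):
        value = alpha * (1 + B)
        return abs(value - round(value)) < 1e-9   # tolerance for floating point

    candidates = (100, 399, 500, 599, 999, 1000, 1499, 10000)
    for alpha in (0.10, 0.05, 0.01):
        exact_Bs = [B for B in candidates if is_exact(alpha, B)]
        print(alpha, exact_Bs)

Running this shows that the "round" choices like 100, 500 or 1000 fail the condition, while the B+1 multiples of 1/alpha (399, 599, 999, 1499) pass it.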
Rule of thumb for number of bootstrap samples
I start by responding to something raised in another answer: why such a strange number as "$599$" (number of bootstrap samples)? This applies also to Monte Carlo tests (to which bootstrapping is equ
Rule of thumb for number of bootstrap samples I start by responding to something raised in another answer: why such a strange number as "$599$" (number of bootstrap samples)? This applies also to Monte Carlo tests (to which bootstrapping is equivalent when the underlying statistic is pivotal), and comes from the following: if the test is to be exact, then, if $\alpha$ is the desired significance level, and $B$ is the number of samples, the following relation must hold: $$\alpha \cdot (1+B) = \text{integer}$$ Now consider typical significance levels $\alpha_1 = 0.1$ and $\alpha_2 = 0.05$ We have $$B_1 = \frac {\text{integer}}{0.1} - 1,\;\;\; B_2 = \frac {\text{integer}}{0.05} - 1$$ This "minus one" is what leads to proposed numbers like "$599$", in order to ensure an exact test. I took the following information from Davidson, R., & MacKinnon, J. G. (2000). Bootstrap tests: How many bootstraps?. Econometric Reviews, 19(1), 55-68. (the working paper version is freely downloadable). As regards rule of thumb, the authors examine the case of bootstrapping p-values and they suggest that for tests at the $0.05$ the minimum number of samples is about 400 (so $399$) while for a test at the $0.01$ level it is 1500 so ($1499$). They also propose a pre-testing procedure to determine $B$ endogenously. After simulating their procedure they conclude: "It is easy to understand why the pretesting procedure works well. When the null hypothesis is true, B can safely be small, because we are not concerned about power at all. Similarly, when the null is false and test power is extremely high, B does not need to be large, because power loss is not a serious issue. However, when the null is false and test power is moderately high, B needs to be large in order to avoid loss of power. The pretesting procedure tends to make B small when it can safely be small and large when it needs to be large." At the end of the paper they also compare it to another procedure that has been proposed in order to determine $B$ and they find that theirs performs better.
Rule of thumb for number of bootstrap samples I start by responding to something raised in another answer: why such a strange number as "$599$" (number of bootstrap samples)? This applies also to Monte Carlo tests (to which bootstrapping is equ
3,579
Rule of thumb for number of bootstrap samples
There are some situations where you can tell either beforehand or after a few iterations that huge numbers of bootstrap iterations won't help in the end. You hopefully have an idea beforehand of the order of magnitude of precision that is required for meaningful interpretation of the results. If you don't, maybe it is time to learn a bit more about the problem behind the data analysis. Anyway, after a few iterations you may be able to estimate how many more iterations are needed. Obviously, if you have extremely few cases (say, the ethics committee allowed 5 rats) you don't need to think about tens of thousands of iterations. Maybe it would be better to look at all possible draws. And maybe it would be even better to stop and think about how certain any kind of conclusion can (not) be when based on 5 rats. Think about the total uncertainty of the results. In my field, the part of the uncertainty that you can measure and reduce by bootstrapping may be only a minor part of the total uncertainty (e.g. due to restrictions in the design of the experiments, important sources of variation are often not covered by the experiment - say, we start with experiments on cell lines although the final goal will of course be patients). In this situation it doesn't make sense to run too many iterations -- it won't help the final result and moreover it may introduce a false sense of certainty. A related (though not exactly the same) issue occurs during out-of-bootstrap or cross validation of models: you have two sources of uncertainty, the finite (and in my case usually very small) number of independent cases and the (in)stability of the bootstrapped models. Depending on your set-up of the resampling validation, you may have only one of them contributing to the resampling estimate. In that case, you can use an estimate of the other source of variance to judge what certainty you should achieve with the resampling, and when it stops helping the final result. Finally, while so far my thoughts were about how to do fewer iterations, here's a practical consideration in favor of doing more: in practice my work is not done after the bootstrap is run. The output of the bootstrap needs to be aggregated into summary statistics and/or figures, the results need to be interpreted, and the paper or report needs to be written. Much of this can already be done with preliminary results from a few iterations of the bootstrap (if the results are clear, they show up already after a few iterations; if they are borderline, they'll stay borderline). So I often set up the bootstrapping in a way that allows me to pull preliminary results, so I can go on working while the computer computes. That way it doesn't bother me much if the bootstrapping takes another few days.
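One possible way to set up the "pull preliminary results while the computer computes" workflow described above. This is a rough Python sketch only; the statistic, the chunk size and the data are placeholders:

    import numpy as np

    rng = np.random.default_rng(1)
    data = rng.normal(loc=10, scale=3, size=40)   # illustrative small sample

    # Run the bootstrap in chunks, so preliminary summaries are available
    # while the remaining iterations are still being computed.
    results = []
    for chunk in range(10):
        boot = [np.mean(rng.choice(data, size=data.size, replace=True))
                for _ in range(1_000)]
        results.extend(boot)
        est = np.mean(results)
        lo, hi = np.percentile(results, [2.5, 97.5])
        print(f"after {len(results):>6} draws: mean={est:.3f}, 95% interval=({lo:.3f}, {hi:.3f})")
        # In a real workflow you could also dump `results` to disk here and
        # start writing up while the loop keeps running.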
Rule of thumb for number of bootstrap samples
There are a some situations where you can tell either beforehand or after a few iterations that huge numbers of bootstrap iterations won't help in the end. You hopefully have an idea beforehand on th
Rule of thumb for number of bootstrap samples There are a some situations where you can tell either beforehand or after a few iterations that huge numbers of bootstrap iterations won't help in the end. You hopefully have an idea beforehand on the order of magnitude of precision that is required for meaningful interpretation of the results. If you don't maybe it is time to learn a bit more about the problem behind the data analysis. Anyways, after a few iterations you may be able to estimate how many more iterations are needed. Obviously, if you have extremely few cases (say, the ethics committee allowed 5 rats) you don't need to think about tens of thousands of iterations. Maybe it would be better to look at all possible draws. And maybe it would be even better to stop and think how certain any kind of conclusion can (not) be based on 5 rats. Think about the total uncertainty of the results. In my field, the part of uncertainty that you can measure and reduce by bootstrapping may only be a minor part of the total uncertainty (e.g. due to restrictions in the design of the experiments important sources of variation are often not covered by the experiment - say, we start by experiments on cell lines although the final goal will of course be patients). In this situation it doesn't make sense to run too many iterations -- it anyways won't help the final result and moreover it may indroduce a false sense of certainty. A related (though not exactly the same) issue occurs during out-of-bootstrap or cross validation of models: you have two sources of uncertainty: the finite (and in my case usually very small number of independent cases) and the (in)stability of the bootstrapped models. Depending on your set up of the resampling validation, you may have only one of them contributing to the resampling estimate. In that case, you can use an estimate of the other source of variance to judge what certainty you should achieve with the resampling, and when it stops to help the final result. Finally, while so far my thoughts were about how to do fewer iterations, here's a practical consideration in favor of doing more: In practice my work is not done after the bootstrap is run. The output of the bootstrap needs to be aggregated into summary statistics and/or figures. Results need to be interpreted the paper or report to be written. Much of these can already be done with preliminary results of a few iterations of the bootstrap (if the results are clear, they show already after few iterations, if they are borderline they'll stay borderline). So I often set up the bootstrapping in a way that allows me to pull preliminary results so I can go on working while the computer computes. That way it doesn't bother me much if the bootstrapping takes another few days.
Rule of thumb for number of bootstrap samples There are a some situations where you can tell either beforehand or after a few iterations that huge numbers of bootstrap iterations won't help in the end. You hopefully have an idea beforehand on th
3,580
Rule of thumb for number of bootstrap samples
TLDR. 10,000 seems to be a good rule of thumb, e.g. p-values computed from this many or more bootstrap samples will be within 0.01 of the "true p-value" for the method about 95% of the time. I only consider the percentile bootstrap approach below, which is the most commonly used method (to my knowledge) but also admittedly has weaknesses and shouldn't be used with small samples. Reframing slightly. It can be useful to compute the uncertainty associated with results from the bootstrap to get a sense for the uncertainty resulting from the use of the bootstrap. Note that this does not address possible weaknesses in the bootstrap (e.g. see the link above), but it does help evaluate if there are "enough" bootstrap samples in a particular application. Generally, the error related to the number of bootstrap samples N goes to zero as N goes to infinity, and the question asks, how big should N be for the error associated with a small number of bootstrap samples to be small? Bootstrap uncertainty in a p-value. The imprecision in an estimated p-value, say pv_est, estimated from the bootstrap is about 2 x sqrt(pv_est * (1 - pv_est) / N), where N is the number of bootstrap samples. This is valid if pv_est * N and (1 - pv_est) * N are both >= 10. If one of these is smaller than 10, then it's less precise but very roughly in the same neighborhood as that estimate. Bootstrap error in a confidence interval. If using a 95% confidence interval, then look at the variability of the quantiles of the bootstrap distribution near 2.5% and 97.5% by checking the percentiles at (for the 2.5th percentile) 2.5 +/- 2 * 100 * sqrt(0.025 * 0.975 / N). This formula communicates the uncertainty of the lower end of the 95% confidence interval based on the number of bootstrap samples taken. A similar exploration should be done at the top end. If this estimate is somewhat volatile, then be sure to take more bootstrap samples!
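The two error formulas above, evaluated for N = 10,000 in a short Python sketch:

    import numpy as np

    # Monte-Carlo (resampling) error attached to bootstrap output, using the
    # rough formulas above.
    N = 10_000                      # number of bootstrap samples

    # 1. Uncertainty in an estimated p-value near 0.05:
    pv_est = 0.05
    pv_error = 2 * np.sqrt(pv_est * (1 - pv_est) / N)
    print(f"p-value known to about +/- {pv_error:.4f}")        # roughly +/- 0.004

    # 2. Uncertainty in which percentile the lower CI endpoint really is:
    tail = 2.5
    tail_error = 2 * 100 * np.sqrt(0.025 * 0.975 / N)
    print(f"lower endpoint lies somewhere around the "
          f"{tail - tail_error:.2f}th to {tail + tail_error:.2f}th percentile")

With N = 10,000 the p-value error is largest when pv_est is near 0.5, where 2 x sqrt(0.25 / 10000) = 0.01, which is where the "within 0.01" rule of thumb comes from.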
Rule of thumb for number of bootstrap samples
TLDR. 10,000 seems to be a good rule of thumb, e.g. p-values from this large or larger of bootstrap samples will be within 0.01 of the "true p-value" for the method about 95% of the time. I only consi
Rule of thumb for number of bootstrap samples TLDR. 10,000 seems to be a good rule of thumb, e.g. p-values from this large or larger of bootstrap samples will be within 0.01 of the "true p-value" for the method about 95% of the time. I only consider the percentile bootstrap approach below, which is the most commonly used method (to my knowledge) but also admittedly has weaknesses and shouldn't be used with small samples. Reframing slightly. It can be useful to compute the uncertainty associated with results from the bootstrap to get a sense for the uncertainty resulting from the use of the bootstrap. Note that this does not address possible weaknesses in the bootstrap (e.g. see the link above), but it does help evaluate if there are "enough" bootstrap samples in a particular application. Generally, the error related to the bootstrap sample size n goes to zero as n goes to infinity, and the question asks, how big should n be for the error associated with small bootstrap sample size to be small? Bootstrap uncertainty in a p-value. The imprecision in an estimated p-value, say pv_est is the p-value estimated from the bootstrap, is about 2 x sqrt(pv_est * (1 - pv_est) / N), where N is the number of bootstrap samples. This is valid if pv_est * N and (1 - pv_est) * N are both >= 10. If one of these is smaller than 10, then it's less precise but very roughly in the same neighborhood as that estimate. Bootstrap error in a confidence interval. If using a 95% confidence interval, then look at how variability of the quantiles of the bootstrap distribution near 2.5% and 97.5% by checking the percentiles at (for the 2.5th percentile) 2.5 +/- 2 * 100 * sqrt(0.025 * 0.975 / n). This formula communicates the uncertainty of the lower end of the 95% confidence interval based on the number of bootstrap samples taken. A similar exploration should be done at the top end. If this estimate is somewhat volatile, then be sure to take more bootstrap samples!
Rule of thumb for number of bootstrap samples TLDR. 10,000 seems to be a good rule of thumb, e.g. p-values from this large or larger of bootstrap samples will be within 0.01 of the "true p-value" for the method about 95% of the time. I only consi
3,581
Rule of thumb for number of bootstrap samples
Most bootstrapping applications I have seen reported around 2,000 to 100,000 iterations. In modern practice with adequate software, the salient issues with the bootstrap are the statistical ones, more so than time and computing capacity. For novice users working in Excel, one could perform only several hundred before requiring the use of advanced Visual Basic programming. However, R is much simpler to use and makes generation of thousands of bootstrapped values easy and straightforward.
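The answer mentions R; as an illustration of how cheaply tens of thousands of replicates can be generated in any vectorised environment, here is an equivalent sketch in Python/NumPy (the sample and the statistic are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=100)                     # any sample of interest, for illustration

    # Generating 100,000 bootstrap replicates of the mean takes a few lines
    # and a fraction of a second; time is rarely the binding constraint.
    B = 100_000
    idx = rng.integers(0, x.size, size=(B, x.size))   # resampled indices
    boot_means = x[idx].mean(axis=1)
    print(boot_means.std())                      # bootstrap SE of the mean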
Rule of thumb for number of bootstrap samples
Most bootstrapping applications I have seen reported around 2,000 to 100k iterations. In modern practice with adequate software, the salient issues with bootstrap are the statistical ones, more so tha
Rule of thumb for number of bootstrap samples Most bootstrapping applications I have seen reported around 2,000 to 100k iterations. In modern practice with adequate software, the salient issues with bootstrap are the statistical ones, more so than time and computing capacity. For novice users with Excel, one could perform only several hundreds before requiring the use of advanced Visual Basic programming. However, R is much simpler to use and makes generation of thousands of bootstrapped values easy and straightforward.
Rule of thumb for number of bootstrap samples Most bootstrapping applications I have seen reported around 2,000 to 100k iterations. In modern practice with adequate software, the salient issues with bootstrap are the statistical ones, more so tha
3,582
Rule of thumb for number of bootstrap samples
Data-driven theory-backed procedure If you want a formal treatment of the subject, a good method comes from a pioneering paper by Andrews & Buchinsky (2000, Econometrica): do some small number of bootstrap replications, see how stable or noisy the estimator is, and then, based on some target accuracy measure, increase the number of replications until you are sure that this resampling-related error has reached a certain lower bound with a chosen certainty. Our helper here is the Weak Law of Large Numbers where the asymptotics are in B. To be more specific, B is chosen depending on the user-chosen bound on the relative deviation measure of the Monte-Carlo approximation of the quantity of interest based on B simulations. This quantity can be standard error, p-value, confidence interval, or bias correction. The closeness is the relative deviation $R^*$ of the B-replication bootstrap quantity from the infinite-replication quantity (or, to be more precise, the one that requires $n^n$ replications): $R^* := (\hat\lambda_B - \hat\lambda_\infty)/\hat\lambda_\infty$. The idea is, find such B that the actual relative deviation of the statistic of interest be less than a chosen bound (usually 5%, 10%, 15%) with a specified high probability $1-\tau$ (usually $\tau = 5\%$ or $10\%$). Then, $$\sqrt{B} \cdot R^* \xrightarrow{d} \mathcal{N}(0, \omega),$$ where $\omega$ can be estimated using a relatively small (usually 200–300) preliminary bootstrap sample that one should be doing in any case. Here is the general formula for the number of necessary bootstrap replications $B$: $$ B \ge \omega \cdot (Q_{\mathcal{N}(0, 1)}(1-\tau/2) / r)^2,$$ where r is the maximum allowed relative discrepancy (i.e. accuracy), $1-\tau$ is the probability that this desired relative accuracy bound has been achieved, $Q_{\mathcal{N}(0, 1)}$ is the quantile function of the standard Gaussian distribution, and $\omega$ is the asymptotic variance of $R$*. The only unknown quantity here is $\omega$ that represents the variance due to simulation randomness. The general 3-step procedure for choosing B is like this: Compute the approximate preliminary number $B_1 := \lceil \omega_1 (Q_{\mathcal{N}(0, 1)}(1-\tau/2) / r)^2 \rceil$, where $\omega_1$ is a very simple theoretical formula from Table III in Andrews & Buchinsky (2000, Econometrica). Using these $B_1$ samples, compute an improved estimate $\hat\omega_{B_1}$ using a formula from Table IV (ibid.). With this $\hat\omega_{B_1}$ compute $B_2 := \lceil\hat\omega_{B_1} (Q_{\mathcal{N}(0, 1)}(1-\tau/2) / r)^2 \rceil$ and take $B_{\mathrm{opt}} := \max(B_1, B_2)$. If necessary, this procedure can be iterated to improve the estimate of $\omega$, but this 3-step procedure as it is tends to yield already conservative estimates that ensure that the desired accuracy has been achieved. This approach can be vulgarised by taking some fixed $B_1 = 1000$, doing 1000 bootstrap replications in any case, and then, doing steps 2 and 3 to compute $\hat\omega_{B_1}$ and $B_2$. Example (Table V, ibid.): to compute a bootstrap 95% CI for the linear regression coefficients, in most practical settings, to be 90% sure that the relative CI length discrepancy does not exceed 10%, 700 replications are sufficient in half of the cases, and to be 95% sure, 850 replications. However, requiring a smaller relative error (5%) increases B to 2000 for $\tau=10\%$ and to 2700 for $\tau=5\%$. This agrees with the formula for B above. 
If one seeks to reduce the relative discrepancy r by a factor of k, the optimal B goes up roughly by a factor of $k^2$, whilst increasing the confidence level that the desired closeness is reached merely changes the critical value of the standard normal (1.96 → 2.57 for 95% → 99% confidence). Concise practical advice This being said, we should realise that not everyone is a theoretical econometrician with deep bootstrap knowledge, so here is my quick rule of thumb. B >= 1000, otherwise your paper will be rejected with something like ‘We are not in the Pentium-II era’ from Referee 2. Ideally, B >= 10000; try to do it if your computer can handle it. You could check if your B yields the desired probability $1-\tau$ of achieving the desired relative accuracy $r$ for the values thereof that are psychologically comfortable for you (e.g. $r= 5\%$ and $\tau=5\%$). If not, increase B to the value dictated by the A&B 3-step procedure described above. In general, for any actual accuracy of your bootstrapped quantity, to increase the desired relative accuracy by a factor of k, increase B by a factor of $k^2$. Happy bootstrapping!
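A minimal Python sketch of the final formula for B above. Note that $\omega$ has to come from the paper's Table III formula or from a preliminary bootstrap of a few hundred replications; the value used below is only a placeholder to show the mechanics, not an estimate:

    import numpy as np
    from scipy.stats import norm

    def min_bootstrap_B(omega, r=0.05, tau=0.05):
        """Smallest B satisfying B >= omega * (z_{1 - tau/2} / r)^2."""
        z = norm.ppf(1 - tau / 2)
        return int(np.ceil(omega * (z / r) ** 2))

    # omega (the asymptotic variance of the relative deviation R*) is a
    # placeholder here; in practice estimate it as described in the paper.
    omega_hat = 1.0
    for r, tau in [(0.10, 0.10), (0.10, 0.05), (0.05, 0.10), (0.05, 0.05)]:
        print(f"r={r:.2f}, tau={tau:.2f}: B >= {min_bootstrap_B(omega_hat, r, tau)}")

The quadratic dependence on 1/r is visible directly: halving r roughly quadruples the required B, as stated above.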
Rule of thumb for number of bootstrap samples
Data-driven theory-backed procedure If you want a formal treatment of the subject, a good method comes from a pioneering paper by Andrews & Buchinsky (2000, Econometrica): do some small number of boot
Rule of thumb for number of bootstrap samples Data-driven theory-backed procedure If you want a formal treatment of the subject, a good method comes from a pioneering paper by Andrews & Buchinsky (2000, Econometrica): do some small number of bootstrap replications, see how stable or noisy the estimator is, and then, based on some target accuracy measure, increase the number of replications until you are sure that this resampling-related error has reached a certain lower bound with a chosen certainty. Our helper here is the Weak Law of Large Numbers where the asymptotics are in B. To be more specific, B is chosen depending on the user-chosen bound on the relative deviation measure of the Monte-Carlo approximation of the quantity of interest based on B simulations. This quantity can be standard error, p-value, confidence interval, or bias correction. The closeness is the relative deviation $R^*$ of the B-replication bootstrap quantity from the infinite-replication quantity (or, to be more precise, the one that requires $n^n$ replications): $R^* := (\hat\lambda_B - \hat\lambda_\infty)/\hat\lambda_\infty$. The idea is, find such B that the actual relative deviation of the statistic of interest be less than a chosen bound (usually 5%, 10%, 15%) with a specified high probability $1-\tau$ (usually $\tau = 5\%$ or $10\%$). Then, $$\sqrt{B} \cdot R^* \xrightarrow{d} \mathcal{N}(0, \omega),$$ where $\omega$ can be estimated using a relatively small (usually 200–300) preliminary bootstrap sample that one should be doing in any case. Here is the general formula for the number of necessary bootstrap replications $B$: $$ B \ge \omega \cdot (Q_{\mathcal{N}(0, 1)}(1-\tau/2) / r)^2,$$ where r is the maximum allowed relative discrepancy (i.e. accuracy), $1-\tau$ is the probability that this desired relative accuracy bound has been achieved, $Q_{\mathcal{N}(0, 1)}$ is the quantile function of the standard Gaussian distribution, and $\omega$ is the asymptotic variance of $R$*. The only unknown quantity here is $\omega$ that represents the variance due to simulation randomness. The general 3-step procedure for choosing B is like this: Compute the approximate preliminary number $B_1 := \lceil \omega_1 (Q_{\mathcal{N}(0, 1)}(1-\tau/2) / r)^2 \rceil$, where $\omega_1$ is a very simple theoretical formula from Table III in Andrews & Buchinsky (2000, Econometrica). Using these $B_1$ samples, compute an improved estimate $\hat\omega_{B_1}$ using a formula from Table IV (ibid.). With this $\hat\omega_{B_1}$ compute $B_2 := \lceil\hat\omega_{B_1} (Q_{\mathcal{N}(0, 1)}(1-\tau/2) / r)^2 \rceil$ and take $B_{\mathrm{opt}} := \max(B_1, B_2)$. If necessary, this procedure can be iterated to improve the estimate of $\omega$, but this 3-step procedure as it is tends to yield already conservative estimates that ensure that the desired accuracy has been achieved. This approach can be vulgarised by taking some fixed $B_1 = 1000$, doing 1000 bootstrap replications in any case, and then, doing steps 2 and 3 to compute $\hat\omega_{B_1}$ and $B_2$. Example (Table V, ibid.): to compute a bootstrap 95% CI for the linear regression coefficients, in most practical settings, to be 90% sure that the relative CI length discrepancy does not exceed 10%, 700 replications are sufficient in half of the cases, and to be 95% sure, 850 replications. However, requiring a smaller relative error (5%) increases B to 2000 for $\tau=10\%$ and to 2700 for $\tau=5\%$. This agrees with the formula for B above. 
If one seeks to reduce the relative discrepancy r, by a factor of k, the optimal B goes up roughly by a factor of $k^2$, whilst increasing the confidence level that the desired closeness is reached merely changes the critical value of the standard normal (1.96 → 2.57 for 95% → 99% confidence). Concise practical advice This being said, we should realise that not everyone is a theoretical econometrician with deep bootstrap knowledge, so here is my quick rule of thumb. B >= 1000, otherwise your paper will be rejected with something like ‘We are not in the Pentium-II era’ from Referee 2. Ideally, B >= 10000; try to do it if your computer can handle it. You could check if your B yields the desired probability $1-\tau$ of achieving the desired relative accuracy $r$ for the values thereof that are psychologically comfortable for you (e.g. $r= 5\%$ and $\tau=5\%$). If not, increase B to the value dictated by the A&B 3-stage procedure described above. In general, for any actual accuracy of your bootstrapped quantity, to increase the desired relative accuracy by a factor of k, increase B by a factor of $k^2$. Happy bootstrapping!
Rule of thumb for number of bootstrap samples Data-driven theory-backed procedure If you want a formal treatment of the subject, a good method comes from a pioneering paper by Andrews & Buchinsky (2000, Econometrica): do some small number of boot
3,583
Effect of switching response and explanatory variable in simple linear regression
Given $n$ data points $(x_i,y_i), i = 1,2,\ldots n$, in the plane, let us draw a straight line $y = ax+b$. If we predict $ax_i+b$ as the value $\hat{y}_i$ of $y_i$, then the error is $(y_i-\hat{y}_i) = (y_i-ax_i-b)$, the squared error is $(y_i-ax_i-b)^2$, and the total squared error $\sum_{i=1}^n (y_i-ax_i-b)^2$. We ask What choice of $a$ and $b$ minimizes $S =\displaystyle\sum_{i=1}^n (y_i-ax_i-b)^2$? Since $(y_i-ax_i-b)$ is the vertical distance of $(x_i,y_i)$ from the straight line, we are asking for the line such that the sum of the squares of the vertical distances of the points from the line is as small as possible. Now $S$ is a quadratic function of both $a$ and $b$ and attains its minimum value when $a$ and $b$ are such that $$\begin{align*} \frac{\partial S}{\partial a} &= 2\sum_{i=1}^n (y_i-ax_i-b)(-x_i) &= 0\\ \frac{\partial S}{\partial b} &= 2\sum_{i=1}^n (y_i-ax_i-b)(-1) &= 0 \end{align*}$$ From the second equation, we get $$b = \frac{1}{n}\sum_{i=1}^n (y_i - ax_i) = \mu_y - a\mu_x$$ where $\displaystyle \mu_y = \frac{1}{n}\sum_{i=1}^n y_i, ~ \mu_x = \frac{1}{n}\sum_{i=1}^n x_i$ are the arithmetic average values of the $y_i$'s and the $x_i$'s respectively. Substituting into the first equation, we get $$ a = \frac{\left(\frac{1}{n}\sum_{i=1}^n x_iy_i\right) -\mu_x\mu_y}{ \left( \frac{1}{n}\sum_{i=1}^n x_i^2\right) -\mu_x^2}. $$ Thus, the line that minimizes $S$ can be expressed as $$y = ax+b = \mu_y + \left(\frac{\left(\frac{1}{n}\sum_{i=1}^n x_iy_i\right) -\mu_x\mu_y}{ \left( \frac{1}{n}\sum_{i=1}^n x_i^2\right) -\mu_x^2}\right) (x - \mu_x), $$ and the minimum value of $S$ is $$S_{\min} = \frac{\left[\left(\frac{1}{n}\sum_{i=1}^n y_i^2\right) -\mu_y^2\right] \left[\left(\frac{1}{n}\sum_{i=1}^n x_i^2\right) -\mu_x^2\right] - \left[\left(\frac{1}{n}\sum_{i=1}^n x_iy_i\right) -\mu_x\mu_y\right]^2}{\left(\frac{1}{n}\sum_{i=1}^n x_i^2\right) -\mu_x^2}.$$ If we interchange the roles of $x$ and $y$, draw a line $x = \hat{a}y + \hat{b}$, and ask for the values of $\hat{a}$ and $\hat{b}$ that minimize $$T = \sum_{i=1}^n (x_i - \hat{a}y_i - \hat{b})^2,$$ that is, we want the line such that the sum of the squares of the horizontal distances of the points from the line is as small as possible, then we get $$x = \hat{a}y+\hat{b} = \mu_x + \left(\frac{\left(\frac{1}{n}\sum_{i=1}^n x_iy_i\right) -\mu_x\mu_y}{ \left( \frac{1}{n}\sum_{i=1}^n y_i^2\right) -\mu_y^2}\right) (y - \mu_y) $$ and the minimum value of $T$ is $$T_{\min} = \frac{\left[\left(\frac{1}{n}\sum_{i=1}^n y_i^2\right) -\mu_y^2\right] \left[\left(\frac{1}{n}\sum_{i=1}^n x_i^2\right) -\mu_x^2\right] - \left[\left(\frac{1}{n}\sum_{i=1}^n x_iy_i\right) -\mu_x\mu_y\right]^2}{\left(\frac{1}{n}\sum_{i=1}^n y_i^2\right) -\mu_y^2}.$$ Note that both lines pass through the point $(\mu_x,\mu_y)$ but the slopes are $$a = \frac{\left(\frac{1}{n}\sum_{i=1}^n x_iy_i\right) -\mu_x\mu_y}{ \left( \frac{1}{n}\sum_{i=1}^n x_i^2\right) -\mu_x^2},~~ \hat{a}^{-1} = \frac{ \left( \frac{1}{n}\sum_{i=1}^n y_i^2\right) -\mu_y^2}{\left(\frac{1}{n}\sum_{i=1}^n x_iy_i\right) -\mu_x\mu_y}$$ are different in general. Indeed, as @whuber points out in a comment, the slopes are the same when all the points $(x_i,y_i)$ lie on the same straight line. To see this, note that $$\hat{a}^{-1} - a = \frac{S_{\min}}{\left(\frac{1}{n}\sum_{i=1}^n x_iy_i\right) -\mu_x\mu_y} = 0 \Rightarrow S_{\min} = 0 \Rightarrow y_i=ax_i+b, i=1,2,\ldots, n. $$
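A short numerical illustration of the result above: the slope of the y-on-x line and the reciprocal of the slope of the x-on-y line, computed from the same simulated (non-collinear) data; the data-generating choices are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    x = rng.normal(size=n)
    y = 2.0 * x + rng.normal(scale=1.5, size=n)   # noisy linear relationship (illustrative)

    mx, my = x.mean(), y.mean()
    cov = (x * y).mean() - mx * my                # (1/n) sum x_i y_i - mu_x mu_y
    vx = (x ** 2).mean() - mx ** 2
    vy = (y ** 2).mean() - my ** 2

    a = cov / vx            # slope of the y-on-x least-squares line
    a_hat = cov / vy        # slope of the x-on-y line, x = a_hat * y + b_hat

    print("slope of y ~ x          :", a)
    print("1 / slope of x ~ y      :", 1 / a_hat)   # differs from a unless the points are collinear
    print("both lines pass through :", (mx, my))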
Effect of switching response and explanatory variable in simple linear regression
Given $n$ data points $(x_i,y_i), i = 1,2,\ldots n$, in the plane, let us draw a straight line $y = ax+b$. If we predict $ax_i+b$ as the value $\hat{y}_i$ of $y_i$, then the error is $(y_i-\hat{y}_i
Effect of switching response and explanatory variable in simple linear regression Given $n$ data points $(x_i,y_i), i = 1,2,\ldots n$, in the plane, let us draw a straight line $y = ax+b$. If we predict $ax_i+b$ as the value $\hat{y}_i$ of $y_i$, then the error is $(y_i-\hat{y}_i) = (y_i-ax_i-b)$, the squared error is $(y_i-ax_i-b)^2$, and the total squared error $\sum_{i=1}^n (y_i-ax_i-b)^2$. We ask What choice of $a$ and $b$ minimizes $S =\displaystyle\sum_{i=1}^n (y_i-ax_i-b)^2$? Since $(y_i-ax_i-b)$ is the vertical distance of $(x_i,y_i)$ from the straight line, we are asking for the line such that the sum of the squares of the vertical distances of the points from the line is as small as possible. Now $S$ is a quadratic function of both $a$ and $b$ and attains its minimum value when $a$ and $b$ are such that $$\begin{align*} \frac{\partial S}{\partial a} &= 2\sum_{i=1}^n (y_i-ax_i-b)(-x_i) &= 0\\ \frac{\partial S}{\partial b} &= 2\sum_{i=1}^n (y_i-ax_i-b)(-1) &= 0 \end{align*}$$ From the second equation, we get $$b = \frac{1}{n}\sum_{i=1}^n (y_i - ax_i) = \mu_y - a\mu_x$$ where $\displaystyle \mu_y = \frac{1}{n}\sum_{i=1}^n y_i, ~ \mu_x = \frac{1}{n}\sum_{i=1}^n x_i$ are the arithmetic average values of the $y_i$'s and the $x_i$'s respectively. Substituting into the first equation, we get $$ a = \frac{\left(\frac{1}{n}\sum_{i=1}^n x_iy_i\right) -\mu_x\mu_y}{ \left( \frac{1}{n}\sum_{i=1}^n x_i^2\right) -\mu_x^2}. $$ Thus, the line that minimizes $S$ can be expressed as $$y = ax+b = \mu_y + \left(\frac{\left(\frac{1}{n}\sum_{i=1}^n x_iy_i\right) -\mu_x\mu_y}{ \left( \frac{1}{n}\sum_{i=1}^n x_i^2\right) -\mu_x^2}\right) (x - \mu_x), $$ and the minimum value of $S$ is $$S_{\min} = \frac{\left[\left(\frac{1}{n}\sum_{i=1}^n y_i^2\right) -\mu_y^2\right] \left[\left(\frac{1}{n}\sum_{i=1}^n x_i^2\right) -\mu_x^2\right] - \left[\left(\frac{1}{n}\sum_{i=1}^n x_iy_i\right) -\mu_x\mu_y\right]^2}{\left(\frac{1}{n}\sum_{i=1}^n x_i^2\right) -\mu_x^2}.$$ If we interchange the roles of $x$ and $y$, draw a line $x = \hat{a}y + \hat{b}$, and ask for the values of $\hat{a}$ and $\hat{b}$ that minimize $$T = \sum_{i=1}^n (x_i - \hat{a}y_i - \hat{b})^2,$$ that is, we want the line such that the sum of the squares of the horizontal distances of the points from the line is as small as possible, then we get $$x = \hat{a}y+\hat{b} = \mu_x + \left(\frac{\left(\frac{1}{n}\sum_{i=1}^n x_iy_i\right) -\mu_x\mu_y}{ \left( \frac{1}{n}\sum_{i=1}^n y_i^2\right) -\mu_y^2}\right) (y - \mu_y) $$ and the minimum value of $T$ is $$T_{\min} = \frac{\left[\left(\frac{1}{n}\sum_{i=1}^n y_i^2\right) -\mu_y^2\right] \left[\left(\frac{1}{n}\sum_{i=1}^n x_i^2\right) -\mu_x^2\right] - \left[\left(\frac{1}{n}\sum_{i=1}^n x_iy_i\right) -\mu_x\mu_y\right]^2}{\left(\frac{1}{n}\sum_{i=1}^n y_i^2\right) -\mu_y^2}.$$ Note that both lines pass through the point $(\mu_x,\mu_y)$ but the slopes are $$a = \frac{\left(\frac{1}{n}\sum_{i=1}^n x_iy_i\right) -\mu_x\mu_y}{ \left( \frac{1}{n}\sum_{i=1}^n x_i^2\right) -\mu_x^2},~~ \hat{a}^{-1} = \frac{ \left( \frac{1}{n}\sum_{i=1}^n y_i^2\right) -\mu_y^2}{\left(\frac{1}{n}\sum_{i=1}^n x_iy_i\right) -\mu_x\mu_y}$$ are different in general. Indeed, as @whuber points out in a comment, the slopes are the same when all the points $(x_i,y_i)$ lie on the same straight line. To see this, note that $$\hat{a}^{-1} - a = \frac{S_{\min}}{\left(\frac{1}{n}\sum_{i=1}^n x_iy_i\right) -\mu_x\mu_y} = 0 \Rightarrow S_{\min} = 0 \Rightarrow y_i=ax_i+b, i=1,2,\ldots, n. $$
3,584
Effect of switching response and explanatory variable in simple linear regression
Just to illustrate Dilip’s answer: on the following pictures, the black dots are data points; on the left, the black line is the regression line obtained by y ~ x, which minimizes the sum of the squared lengths of the red segments; on the right, the black line is the regression line obtained by x ~ y, which minimizes the sum of the squared lengths of the red segments. Edit (least rectangles regression) If there is no natural way to choose a "response" and a "covariate", but rather the two variables are interdependent, you may wish to preserve a symmetric role for $y$ and $x$; in this case you can use "least rectangles regression." Write $Y = aX + b + \epsilon$, as usual; denote $\hat y_i = a x_i + b$ and $\hat x_i = {1\over a} (y_i - b)$ the estimates of $Y_i$ conditional on $X = x_i$ and of $X_i$ conditional on $Y = y_i$; minimize $\sum_i | x_i - \hat x_i | \cdot | y_i - \hat y_i|$, which leads to $$\hat y = \mathrm{sign}\left(\mathrm{cov}(x,y)\right){\hat\sigma_y \over \hat\sigma_x} (x-\overline x) + \overline y. $$ Here is an illustration with the same data points: for each point, a "rectangle" is computed as the product of the lengths of the two red segments, and the sum of the rectangles is minimized. I don’t know much about the properties of this regression and I don’t find much with Google.
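To see the three fits side by side numerically, here is a small sketch (an illustration on made-up data, not the code behind the figures above) that computes the y ~ x slope, the x ~ y slope re-expressed in the (x, y) plane, and the least-rectangles slope $\mathrm{sign}(\mathrm{cov})\,\hat\sigma_y/\hat\sigma_x$ with plain numpy:

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 0.5 * x + rng.normal(scale=0.8, size=200)   # illustrative data, not the points in the figures

sx, sy = x.std(), y.std()
cxy = np.cov(x, y, bias=True)[0, 1]

slope_yx = cxy / sx**2                       # y ~ x: minimizes vertical squared distances
slope_xy_as_y = 1.0 / (cxy / sy**2)          # x ~ y, re-expressed as a slope in the (x, y) plane
slope_rect = np.sign(cxy) * sy / sx          # least rectangles (the SD line)

print(slope_yx, slope_xy_as_y, slope_rect)   # the least-rectangles slope sits between the other two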
3,585
Effect of switching response and explanatory variable in simple linear regression
Just a brief note on why you see a smaller slope for one of the regressions. Both slopes depend on three numbers: the standard deviations of $x$ and $y$ ($s_{x}$ and $s_{y}$), and the correlation between $x$ and $y$ ($r$). The regression with $y$ as response has slope $r\frac{s_{y}}{s_{x}}$ and the regression with $x$ as response has slope $r\frac{s_{x}}{s_{y}}$, hence the ratio of the first slope to the reciprocal of the second is equal to $r^2\leq 1$. So the greater the proportion of variance explained, the closer the slope of the $y$-on-$x$ line is to the reciprocal of the slope of the $x$-on-$y$ line, i.e. the closer the two fitted lines are to each other. Note that the proportion of variance explained is symmetric and equal to the squared correlation in simple linear regression.
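A short check of that ratio on arbitrary simulated data (only a sketch, not tied to any particular dataset):

import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = 2.0 * x + rng.normal(scale=1.5, size=500)   # illustrative data only

b_yx = np.polyfit(x, y, 1)[0]   # slope with y as response
b_xy = np.polyfit(y, x, 1)[0]   # slope with x as response
r = np.corrcoef(x, y)[0, 1]

print(b_yx * b_xy, r**2)        # the ratio of b_yx to 1/b_xy, i.e. their product, equals r^2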
3,586
Effect of switching response and explanatory variable in simple linear regression
Regression line is not (always) the same as true relationship You may have some 'true' causal relationship with an equation in a linear form $a+bx$ like $$y := a + bx + \epsilon$$ where the $:=$ means that the value of $a+bx$ with some added noise $\epsilon$ is assigned to $y$. The fitted regression lines y ~ x or x ~ y do not mean the same as that causal relationship (even when in practice the expression for one of the regression lines may coincide with the expression for the causal 'true' relationship). More precise relationship between slopes For two switched simple linear regressions: $$Y = a_1 + b_1 X\\X = a_2 + b_2 Y$$ you can relate the slopes as follows: $$b_1 = \rho^2 \frac{1}{b_2} \leq \frac{1}{b_2}$$ So the slopes are not each other's inverses. Intuition The reason is that regression lines and correlations do not necessarily correspond one-to-one to a causal relationship. Regression lines relate more directly to a conditional probability or best prediction. You can imagine that the conditional probability relates to the strength of the relationship. Regression lines reflect this and the slopes of the lines may be both shallow when the strength of the relationship is small or both steep when the strength of the relationship is strong. The slopes are not simply each other's inverses. Example If two variables $X$ and $Y$ relate to each other by some (causal) linear relationship $$Y = \text{a little bit of $X + $ a lot of error}$$ Then you can imagine that it would not be good to entirely reverse that relationship in case you wish to express $X$ based on a given value of $Y$. Instead of $$X = \text{a lot of $Y + $ a little bit of error}$$ it would be better to also use $$X = \text{a little bit of $Y + $ a lot of error}$$ See the following example distributions with their respective regression lines. The distributions are multivariate normal with $\Sigma_{11} = \Sigma_{22}=1$ and $\Sigma_{12} = \Sigma_{21} = \rho$. The conditional expected values (what you would get in a linear regression) are $$\begin{array}{} E(Y|X) &=& \rho X \\ E(X|Y) &=& \rho Y \end{array}$$ and in this case with $X,Y$ a multivariate normal distribution, then the conditional distributions are $$\begin{array}{} Y|X & \sim & N(\rho X,1-\rho^2) \\ X|Y & \sim & N(\rho Y,1-\rho^2) \end{array}$$ So you can see the variable Y as being a part $\rho X$ and a part noise with variance $1-\rho^2$. The same is true the other way around. The larger the correlation coefficient $\rho$, the closer the two lines will be. But the lower the correlation, the less strong the relationship, the less steep the lines will be (this is true for both lines Y ~ X and X ~ Y).
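A small simulation sketch of that standardized bivariate normal case (the correlation value is assumed for illustration): both fitted slopes come out near $\rho$, not one near $\rho$ and the other near $1/\rho$.

import numpy as np

rho = 0.5                          # assumed correlation
rng = np.random.default_rng(2)
cov = [[1.0, rho], [rho, 1.0]]     # Sigma_11 = Sigma_22 = 1, Sigma_12 = Sigma_21 = rho
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=100_000).T

print(np.polyfit(x, y, 1)[0])      # slope of y ~ x, close to rho
print(np.polyfit(y, x, 1)[0])      # slope of x ~ y, also close to rho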
3,587
Effect of switching response and explanatory variable in simple linear regression
A simple way to look at this is to note that, if for the true model $y=\alpha+\beta x+\epsilon$, you run two regressions: $y=a_{y\sim x}+b_{y\sim x} x$ $x=a_{x\sim y}+b_{x\sim y} y$ Then we have, using $b_{y\sim x}=\frac{cov(x,y)}{var(x)}=\frac{cov(x,y)}{var(y)}\frac{var(y)}{var(x)}$: $$b_{y\sim x}=b_{x\sim y}\frac{var(y)}{var(x)}$$ So whether you get a steeper slope or not just depends on the ratio $\frac{var(y)}{var(x)}$. Based on the assumed true model, this ratio is equal to: $$\frac{var(y)}{var(x)}=\frac{\beta^2 var(x) + var(\epsilon)}{var(x)}$$ Link with other answers You can connect this result with the answers from others, who said that when $R^2=1$, it should be the reciprocal. Indeed, $R^2=1\Rightarrow var(\epsilon) = 0$, and also $b_{y\sim x}=\beta$ (no estimation error). Hence: $$R^2=1\Rightarrow b_{y\sim x}=b_{x\sim y}\frac{\beta^2 var(x) + 0}{var(x)}=b_{x\sim y}\beta^2$$ So $b_{x\sim y}=1/\beta$.
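For what it is worth, a quick numerical sketch of that identity with an arbitrary true $\beta$ and noise level (illustrative values only):

import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=10_000)
y = 1.5 * x + rng.normal(scale=2.0, size=10_000)   # true beta = 1.5, var(eps) = 4

b_yx = np.polyfit(x, y, 1)[0]
b_xy = np.polyfit(y, x, 1)[0]

print(b_yx, b_xy * y.var() / x.var())   # b_yx = b_xy * var(y)/var(x)
print(1 / b_xy)                         # steeper than b_yx because var(eps) > 0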
3,588
Effect of switching response and explanatory variable in simple linear regression
It becomes interesting when there is also noise on your inputs (which we could argue is always the case; no command or observation is ever perfect). I have built some simulations to observe the phenomenon, based on a simple linear relationship $x = y$, with Gaussian noise on both x and y. I generated the observations as follows (Python code):

import numpy as np

n = 100                                  # number of observations (value chosen here for illustration)
x = np.linspace(0, 1, n)
y = x                                    # true relationship: y = x
x_o = x + np.random.normal(0, 0.2, n)    # observed x, with Gaussian noise
y_o = y + np.random.normal(0, 0.2, n)    # observed y, with Gaussian noise

See the different results (odr here is orthogonal distance regression, which plays a role similar to the least rectangles regression above): All the code is in there: https://gist.github.com/jclevesque/5273ad9077d9ea93994f6d96c20b0ddd
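To get the flavour of those results without pulling in the full gist, here is a rough stand-in of my own (not the linked code): it fits y ~ x, fits x ~ y and inverts it, and adds a symmetric fit via total least squares on the centred data.

import numpy as np

rng = np.random.default_rng(4)
n = 100
x_o = np.linspace(0, 1, n) + rng.normal(0, 0.2, n)
y_o = np.linspace(0, 1, n) + rng.normal(0, 0.2, n)

b_yx = np.polyfit(x_o, y_o, 1)[0]            # y ~ x: pulled below 1 by the noise in x
b_xy_inv = 1.0 / np.polyfit(y_o, x_o, 1)[0]  # x ~ y, inverted: pushed above 1

# symmetric fit: direction of largest variance of the centred cloud (total least squares)
X = np.column_stack([x_o - x_o.mean(), y_o - y_o.mean()])
_, _, vt = np.linalg.svd(X, full_matrices=False)
b_tls = vt[0, 1] / vt[0, 0]

print(b_yx, b_xy_inv, b_tls)                 # roughly: below 1, above 1, close to 1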
3,589
Effect of switching response and explanatory variable in simple linear regression
The short answer The goal of a simple linear regression is to come up with the best predictions of the y variable, given values of the x variable. This is a different goal than trying to come up with the best prediction of the x variable, given values of the y variable. Simple linear regression of y ~ x gives you the 'best' possible model for predicting y given x. Hence, if you fit a model for x ~ y and algebraically invert it, that model could at its very best do only as well as the model for y ~ x. But inverting a model fit for x ~ y will usually do worse at predicting y given x, compared to the 'optimal' y ~ x model, because the "inverted x ~ y model" was created to fulfill a different objective. Illustration Imagine you have the following dataset: When you run an OLS regression of y ~ x, you come up with the following model: y = 0.167 + 1.5*x. This optimizes predictions of y by making the following predictions, which have associated errors: The OLS regression's predictions are optimal in the sense that the sum of the values in the rightmost column (i.e. the sum of squares) is as small as can be. When you run an OLS regression of x ~ y, you come up with a different model: x = -0.07 + 0.64*y. This optimizes predictions of x by making the following predictions, with associated errors. Again, this is optimal in the sense that the sum of the values of the rightmost column is as small as possible (equal to 0.071). Now, imagine you tried to just invert the first model, y = 0.167 + 1.5*x, using algebra, giving you the model x = -0.11 + 0.67*y. This would give you the following predictions and associated errors: The sum of the values in the rightmost column is 0.074, which is larger than the corresponding sum from the model you get from regressing x on y, i.e. the x ~ y model. In other words, the "inverted y ~ x model" is doing a worse job at predicting x than the OLS model of x ~ y.
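The dataset behind those numbers only appears in the figures, so here is a hedged re-creation of the argument on a made-up six-point dataset (the fitted coefficients will differ from 0.167 and 1.5, but the inequality is the point):

import numpy as np

# hypothetical data, not the dataset from the illustration above
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([1.2, 1.9, 3.4, 3.6, 5.5, 5.7])

b1, b0 = np.polyfit(x, y, 1)     # y ~ x model
c1, c0 = np.polyfit(y, x, 1)     # x ~ y model

sse_direct   = np.sum((x - (c0 + c1 * y)) ** 2)    # predict x with the x ~ y model
sse_inverted = np.sum((x - (y - b0) / b1) ** 2)    # predict x by inverting the y ~ x model

print(sse_direct, sse_inverted)  # the direct x ~ y fit never loses this comparison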
3,590
Backpropagation with Softmax / Cross Entropy
Note: I am not an expert on backprop, but now having read a bit, I think the following caveat is appropriate. When reading papers or books on neural nets, it is not uncommon for derivatives to be written using a mix of the standard summation/index notation, matrix notation, and multi-index notation (include a hybrid of the last two for tensor-tensor derivatives). Typically the intent is that this should be "understood from context", so you have to be careful! I noticed a couple of inconsistencies in your derivation. I do not do neural networks really, so the following may be incorrect. However, here is how I would go about the problem. First, you need to take account of the summation in $E$, and you cannot assume each term only depends on one weight. So taking the gradient of $E$ with respect to component $k$ of $z$, we have $$E=-\sum_jt_j\log o_j\implies\frac{\partial E}{\partial z_k}=-\sum_jt_j\frac{\partial \log o_j}{\partial z_k}$$ Then, expressing $o_j$ as $$o_j=\tfrac{1}{\Omega}e^{z_j} \,,\, \Omega=\sum_ie^{z_i} \implies \log o_j=z_j-\log\Omega$$ we have $$\frac{\partial \log o_j}{\partial z_k}=\delta_{jk}-\frac{1}{\Omega}\frac{\partial\Omega}{\partial z_k}$$ where $\delta_{jk}$ is the Kronecker delta. Then the gradient of the softmax-denominator is $$\frac{\partial\Omega}{\partial z_k}=\sum_ie^{z_i}\delta_{ik}=e^{z_k}$$ which gives $$\frac{\partial \log o_j}{\partial z_k}=\delta_{jk}-o_k$$ or, expanding the log $$\frac{\partial o_j}{\partial z_k}=o_j(\delta_{jk}-o_k)$$ Note that the derivative is with respect to $z_k$, an arbitrary component of $z$, which gives the $\delta_{jk}$ term ($=1$ only when $k=j$). So the gradient of $E$ with respect to $z$ is then $$\frac{\partial E}{\partial z_k}=\sum_jt_j(o_k-\delta_{jk})=o_k\left(\sum_jt_j\right)-t_k \implies \frac{\partial E}{\partial z_k}=o_k\tau-t_k$$ where $\tau=\sum_jt_j$ is constant (for a given $t$ vector). This shows a first difference from your result: the $t_k$ no longer multiplies $o_k$. Note that for the typical case where $t$ is "one-hot" we have $\tau=1$ (as noted in your first link). A second inconsistency, if I understand correctly, is that the "$o$" that is input to $z$ seems unlikely to be the "$o$" that is output from the softmax. I would think that it makes more sense that this is actually "further back" in network architecture? Calling this vector $y$, we then have $$z_k=\sum_iw_{ik}y_i+b_k \implies \frac{\partial z_k}{\partial w_{pq}}=\sum_iy_i\frac{\partial w_{ik}}{\partial w_{pq}}=\sum_iy_i\delta_{ip}\delta_{kq}=\delta_{kq}y_p$$ Finally, to get the gradient of $E$ with respect to the weight-matrix $w$, we use the chain rule $$\frac{\partial E}{\partial w_{pq}}=\sum_k\frac{\partial E}{\partial z_k}\frac{\partial z_k}{\partial w_{pq}}=\sum_k(o_k\tau-t_k)\delta_{kq}y_p=y_p(o_q\tau-t_q)$$ giving the final expression (assuming a one-hot $t$, i.e. $\tau=1$) $$\frac{\partial E}{\partial w_{ij}}=y_i(o_j-t_j)$$ where $y$ is the input on the lowest level (of your example). So this shows a second difference from your result: the "$o_i$" should presumably be from the level below $z$, which I call $y$, rather than the level above $z$ (which is $o$). Hopefully this helps. Does this result seem more consistent? Update: In response to a query from the OP in the comments, here is an expansion of the first step. First, note that the vector chain rule requires summations (see here). 
Second, to be certain of getting all gradient components, you should always introduce a new subscript letter for the component in the denominator of the partial derivative. So to fully write out the gradient with the full chain rule, we have $$\frac{\partial E}{\partial w_{pq}}=\sum_i \frac{\partial E}{\partial o_i}\frac{\partial o_i}{\partial w_{pq}}$$ and $$\frac{\partial o_i}{\partial w_{pq}}=\sum_k \frac{\partial o_i}{\partial z_k}\frac{\partial z_k}{\partial w_{pq}}$$ so $$\frac{\partial E}{\partial w_{pq}}=\sum_i \left[ \frac{\partial E}{\partial o_i}\left(\sum_k \frac{\partial o_i}{\partial z_k}\frac{\partial z_k}{\partial w_{pq}}\right) \right]$$ In practice the full summations reduce, because you get a lot of $\delta_{ab}$ terms. Although it involves a lot of perhaps "extra" summations and subscripts, using the full chain rule will ensure you always get the correct result.
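If it helps to sanity-check the index algebra, here is a small numerical sketch of my own (with a deliberately non-one-hot $t$ so the $\tau=\sum_j t_j$ factor is exercised) comparing $\partial E/\partial z_k = o_k\tau - t_k$ with central finite differences:

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def E(z, t):
    return -np.sum(t * np.log(softmax(z)))

rng = np.random.default_rng(5)
z = rng.normal(size=4)
t = np.array([0.2, 0.1, 0.6, 0.3])            # not one-hot, and tau = 1.2 here

analytic = softmax(z) * t.sum() - t            # o_k * tau - t_k
h = 1e-6
numeric = np.array([(E(z + d, t) - E(z - d, t)) / (2 * h) for d in h * np.eye(4)])

print(np.max(np.abs(analytic - numeric)))      # tiny: the two gradients agree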
3,591
Backpropagation with Softmax / Cross Entropy
While @GeoMatt22's answer is correct, I personally found it very useful to reduce the problem to a toy example and draw a picture: I then defined the operations each node was computing, treating the $h$'s and $w$'s as inputs to a "network" ($\mathbf{t}$ is a one-hot vector representing the class label of the data point): $$L=-t_1\log o_1 -t_2\log o_2$$ $$o_1 = \frac{\exp(y_1)}{\exp(y_1) + \exp(y_2)}$$ $$o_2 = \frac{\exp(y_2)}{\exp(y_1) + \exp(y_2)}$$ $$y_1 = w_{11}h_1 + w_{21}h_2 + w_{31}h_3$$ $$y_2 = w_{12}h_1 + w_{22}h_2 + w_{32}h_3$$ Say I want to calculate the derivative of the loss with respect to $w_{21}$. I can just use my picture to trace back the path from the loss to the weight I'm interested in (removed the second column of $w$'s for clarity): Then, I can just calculate the desired derivatives. Note that there are two paths through $y_1$ that lead to $w_{21}$, so I need to sum the derivatives that go through each of them. $$\frac{\partial L}{\partial o_1} = -\frac{t_1}{o_1}$$ $$\frac{\partial L}{\partial o_2} = -\frac{t_2}{o_2}$$ $$\frac{\partial o_1}{\partial y_1} = \frac{\exp(y_1)}{\exp(y_1) + \exp(y_2)} - \left(\frac{\exp(y_1)}{\exp(y_1) + \exp(y_2)}\right)^2 = o_1(1 - o_1)$$ $$\frac{\partial o_2}{\partial y_1} = \frac{-\exp(y_2)\exp(y_1)}{(\exp(y_1) + \exp(y_2))^2} = -o_2o_1$$ $$\frac{\partial y_1}{\partial w_{21}} = h_2$$ Finally, putting the chain rule together: \begin{align} \frac{\partial L}{\partial w_{21}} &= \frac{\partial L}{\partial o_1}\frac{\partial o_1}{\partial y_1}\frac{\partial y_1}{\partial w_{21}} + \frac{\partial L}{\partial o_2}\frac{\partial o_2}{\partial y_1}\frac{\partial y_1}{\partial w_{21}}\\ &= \frac{-t_1}{o_1}[o_1(1 - o_1)]h_2 + \frac{-t_2}{o_2}(-o_2 o_1)h_2\\ &= h_2(t_2 o_1 - t_1 + t_1 o_1)\\ &= h_2(o_1(t_1 + t_2) - t_1)\\ &= h_2(o_1 - t_1) \end{align} Note that in the last step, $t_1 + t_2 = 1$ because the vector $\mathbf{t}$ is a one-hot vector.
3,592
Backpropagation with Softmax / Cross Entropy
In place of the $\{o_i\},\,$ I want a letter whose uppercase is visually distinct from its lowercase. So let me substitute $\{y_i\}$. Also, let's use the variable $\{p_i\}$ to designate the $\{o_i\}$ from the previous layer. Let $Y$ be the diagonal matrix whose diagonal equals the vector $y$, i.e. $$Y={\rm Diag}(y)$$ Using this new matrix variable and the Frobenius Inner Product we can calculate the gradient of $E$ wrt $W$. $$\eqalign{ z &= Wp+b &dz= dWp \cr y &= {\rm softmax}(z) &dy = (Y-yy^T)\,dz \cr E &= -t:\log(y) &dE = -t:Y^{-1}dy \cr\cr dE &= -t:Y^{-1}(Y-yy^T)\,dz \cr &= -t:(I-1y^T)\,dz \cr &= -t:(I-1y^T)\,dW\,p \cr &= (y1^T-I)tp^T:dW \cr &= ((1^Tt)yp^T - tp^T):dW \cr\cr \frac{\partial E}{\partial W} &= (1^Tt)yp^T - tp^T \cr }$$
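A quick numerical check of that matrix result (a sketch of my own with random $p$, $t$, $W$; note $t$ is not required to be one-hot for the formula):

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def E(W, p, b, t):
    return -np.sum(t * np.log(softmax(W @ p + b)))

rng = np.random.default_rng(6)
m, k = 3, 5                                    # 3 outputs, 5 inputs from the previous layer
W = rng.normal(size=(m, k)); b = rng.normal(size=m)
p = rng.normal(size=k); t = rng.random(m)

y = softmax(W @ p + b)
analytic = np.outer(t.sum() * y - t, p)        # (1^T t) y p^T - t p^T

numeric = np.zeros_like(W); h = 1e-6
for i in range(m):
    for j in range(k):
        Wp = W.copy(); Wp[i, j] += h
        Wm = W.copy(); Wm[i, j] -= h
        numeric[i, j] = (E(Wp, p, b, t) - E(Wm, p, b, t)) / (2 * h)

print(np.max(np.abs(analytic - numeric)))      # agrees up to finite-difference error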
3,593
Backpropagation with Softmax / Cross Entropy
The original question is answered by this post Derivative of Softmax Activation -Alijah Ahmed. However, writing this out for those who have come here for the general question of Backpropagation with Softmax and Cross-Entropy. $$ \mathbf { \bbox[10px, border:2px solid red] { \color{red}{ \begin{aligned} a^0 \rightarrow \bbox[5px, border:2px solid black] { \underbrace{\text{hidden layers}}_{a^{l-2}} } \,\rightarrow \bbox[5px, border:2px solid black] { \underbrace{w^{l-1} a^{l-2}+b^{l-1}}_{z^{l-1} } } \,\rightarrow \bbox[5px, border:2px solid black] { \underbrace{\sigma(z^{l-1})}_{a^{l-1}} } \,\rightarrow \bbox[5px, border:2px solid black] { \underbrace{w^l a^{l-1}+b^l}_{z^{l}/logits } } \,\rightarrow \bbox[5px, border:2px solid black] { \underbrace{P(z^l)}_{\vec P/ \text{softmax} /a^{l}} } \,\rightarrow \bbox[5px, border:2px solid black] { \underbrace{L ( \vec P, \vec Y)}_{\text{CrossEntropyLoss}} } \end{aligned} }}} $$ Derivative of the Cross-Entropy Loss wrt the Weights in the last layer $$ \mathbf { \frac {\partial L}{\partial w^l} = \color{red}{\frac {\partial L}{\partial z^l}}.\color{green}{\frac {\partial z^l}{\partial w^l}} \rightarrow \quad EqA1 } $$ Where $$ \mathbf { L = -\sum_k y_k \log \color{red}{p_k} \,\,and \,p_j = \frac {e^ \color{red}{z_j}} {\sum_k e^{z_k}} } $$ Following from Derivative of Softmax Activation -Alijah Ahmed for the first term $$ \color{red} { \begin{aligned} \frac {\partial L}{\partial z_i} = \frac {\partial ({-\sum_k y_k \log {p_k})}}{\partial z_i} \\ \\ \text {taking the summation outside} \\ \\ = -\sum_k y_k\frac {\partial ({ \log {p_k})}}{\partial z_i} \\ \\ \color{black}{ \text {since } \frac{d}{dx} (f(g(x))) = f'(g(x))g'(x) } \\ \\ = -\sum_k y_k * \frac {1}{p_k} *\frac {\partial { p_k}}{\partial z_i} \end{aligned} } $$ The last term $\frac {\partial { p_k}}{\partial z_i}$ is the derivative of the Softmax wrt its inputs, also called logits. This is easy to derive and there are many sites that describe it. Example: Derivative of SoftMax, Antoni Parellada.
The more rigorous derivative via the Jacobian matrix is here The Softmax function and its derivative-Eli Bendersky $$ \color{red} { \begin{aligned} \frac {\partial { p_i}}{\partial z_j} = p_i(\delta_{ij} -p_j) \\ \\ \delta_{ij} = 1 \text{ when } i=j \\ \delta_{ij} = 0 \text{ when } i \ne j \end{aligned} } $$ Using this above and repeating as is from Derivative of Softmax Activation -Alijah Ahmed we get the below $$ \color{red} { \begin{aligned} \frac {\partial L}{\partial z_i} = -\sum_k y_k * \frac {1}{p_k} *\frac {\partial { p_k}}{\partial z_i} \\ \\ =-\sum_k y_k * \frac {1}{p_k} * p_i(\delta_{ij} -p_j) \\ \\ \text{these i and j are dummy indices and we can rewrite this as} \\ \\ =-\sum_k y_k * \frac {1}{p_k} * p_k(\delta_{ik} -p_i) \\ \\ \text{taking the two cases and adding in above equation } \\ \\ \delta_{ik} = 1 \text{ when } i=k \text{ and } \delta_{ik} = 0 \text{ when } i \ne k \\ \\ = [- \sum_i y_i * \frac {1}{p_i} * p_i(1 -p_i)]+[-\sum_{k \ne i} y_k * \frac {1}{p_k} * p_k(0 -p_i) ] \\ \\ = [- y_i * \frac {1}{p_i} * p_i(1 -p_i)]+[-\sum_{k \ne i} y_k * \frac {1}{p_k} * p_k(0 -p_i) ] \\ \\ = [- y_i(1 -p_i)]+[-\sum_{k \ne i} y_k *(0 -p_i) ] \\ \\ = -y_i + y_i.p_i + \sum_{k \ne i} y_k.p_i \\ \\ = -y_i + p_i( y_i + \sum_{k \ne i} y_k) \\ \\ = -y_i + p_i( \sum_{k} y_k) \\ \\ \text {note that } \sum_{k} y_k = 1 \, \text{as it is a One hot encoded Vector} \\ \\ = p_i - y_i \\ \\ \frac {\partial L}{\partial z^l} = p_i - y_i \rightarrow \quad \text{EqA.1.1} \end{aligned} } $$ We now need to calculate the second term, to complete the equation $$ \begin{aligned} \frac {\partial L}{\partial w^l} = \color{red}{\frac {\partial L}{\partial z^l}}.\color{green}{\frac {\partial z^l}{\partial w^l}} \\ \\ \\ \color{green}{\frac {\partial z^l}{\partial w^l} = a^{l-1}} \text{ as } z^{l} = (w^l a^{l-1}+b^l) \\ \\ \text{Putting all together} \\ \\ \frac {\partial L}{\partial w^l} = (p_i - y_i) *a^{l-1} \quad \rightarrow \quad \mathbf {EqA1} \end{aligned} $$ Using gradient descent we can keep adjusting the last layer like $$ w{^l}{_i} = w{^l}{_i} -\alpha * \frac {\partial L}{\partial w^l} $$ Now let's do the derivation for the inner layers, which is where the Chain Rule magic happens. Derivative of the Loss wrt the Weights in the Inner Layers The trick here is to write the derivative of the Loss wrt the inner layer as a composition of the partial derivatives we computed earlier. $$ \begin{aligned} \frac {\partial L}{\partial w^{l-1}} = \color{blue}{\frac {\partial L}{\partial z^{l-1}}}. \color{green}{\frac {\partial z^{l-1}}{\partial w^{l-1}}} \rightarrow \text{EqA.2} \\ \\ \text{the trick is to represent the first part in terms of what we computed earlier; in terms of } \color{blue}{\frac {\partial L}{\partial z^{l}}} \\ \\ \color{blue}{\frac {\partial L}{\partial z^{l-1}}} = \color{blue}{\frac {\partial L}{\partial z^{l}}}. \frac {\partial z^{l}}{\partial a^{l-1}}. \frac {\partial a^{l-1}}{\partial z^{l-1}} \rightarrow \text{ EqMagic} \\ \\ \color{blue}{\frac {\partial L}{\partial z^{l}}} = \color{blue}{(p_i- y_i)} \text{ from the previous layer (from EqA1.1) } \\ \\ z^l = w^l a^{l-1}+b^l \text{ which makes } {\frac {\partial z^{l} }{\partial a^{l-1}} = w^l} \text{ and } a^{l-1} = \sigma (z^{l-1}) \text{ which makes } \frac {\partial a^{l-1}}{\partial z^{l-1}} = \sigma \color{red}{'} (z^{l-1} ) \\ \\ \text{ Putting together we get the first part of Eq A.2 } \\ \\ \color{blue}{\frac {\partial L}{\partial z^{l-1}}} =\color{blue}{(p_i- y_i)}.w^l.\sigma \color{red}{'} (z^{l-1} ) \rightarrow \text{EqA.2.1 } \\ \\ \text{(Value of EqA.2.1 to be used in the next layer's derivation in EqMagic)} \\ \\ z^{l-1} = w^{l-1} a^{l-2}+b^{l-1} \text{ which makes } \color{green}{\frac {\partial z^{l-1}}{\partial w^{l-1}}=a^{l-2}} \\ \\ \frac {\partial L}{\partial w^{l-1}} = \color{blue}{\frac {\partial L}{\partial z^{l-1}}}. \color{green}{\frac {\partial z^{l-1}}{\partial w^{l-1}}} = \color{blue}{(p_i- y_i)}.w^l.\sigma \color{red}{'} (z^{l-1} ). \color{green}{a^{l-2}} \end{aligned} $$ Disclaimer We see that with the Chain Rule we can write out an expression that looks correct, and is correct in index notation. However, when we implement an actual case with the above equations, the weight gradients won't come out right directly. This is because we need to convert from index notation to matrix notation, and there some matrix products have to be written out as Hadamard products $\odot$. Without having some idea of these you cannot really understand this fully. A Primer on Index Notation, John Crimaldi, and The Matrix Calculus You Need For Deep Learning, Terence Parr and Jeremy Howard.
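As a concrete, hedged illustration of that disclaimer, here is a minimal numpy sketch of the two backprop steps in matrix form (layer sizes and variable names are my own assumptions); the elementwise $\sigma'(z^{l-1})$ factor enters as a Hadamard product, not a matrix product:

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(7)
a0 = rng.normal(size=4)                            # a^{l-2}
W1 = rng.normal(size=(6, 4)); b1 = np.zeros(6)     # layer l-1
W2 = rng.normal(size=(3, 6)); b2 = np.zeros(3)     # layer l
t = np.array([0.0, 1.0, 0.0])                      # one-hot target

# forward pass
z1 = W1 @ a0 + b1; a1 = sigmoid(z1)
z2 = W2 @ a1 + b2; p = softmax(z2)

# backward pass
delta2 = p - t                                     # dL/dz^l  (EqA.1.1)
grad_W2 = np.outer(delta2, a1)                     # dL/dw^l  (EqA1)
delta1 = (W2.T @ delta2) * sigmoid(z1) * (1 - sigmoid(z1))   # Hadamard product here
grad_W1 = np.outer(delta1, a0)                     # dL/dw^{l-1}

print(grad_W2.shape, grad_W1.shape)                # (3, 6) and (6, 4), matching W2 and W1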
3,594
Backpropagation with Softmax / Cross Entropy
Other answers have provided the correct way of calculating the derivative, but they do not point out where you have gone wrong. In fact, $t_j$ is always 1 in your last equation, because you have assumed that $o_j$ is the output node whose target is 1; the $o_j$ of the other nodes take different forms of probability function and thus lead to different forms of derivative, so you should now understand why other people have treated $i=j$ and $i\neq j $ differently.
3,595
Logistic Regression - Error Term and its Distribution
In linear regression observations are assumed to follow a Gaussian distribution with a mean parameter conditional on the predictor values. If you subtract the mean from the observations you get the error: a Gaussian distribution with mean zero, & independent of predictor values—that is errors at any set of predictor values follow the same distribution. In logistic regression observations $y\in\{0,1\}$ are assumed to follow a Bernoulli distribution† with a mean parameter (a probability) conditional on the predictor values. So for any given predictor values determining a mean $\pi$ there are only two possible errors: $1-\pi$ occurring with probability $\pi$, & $0-\pi$ occurring with probability $1-\pi$. For other predictor values the errors will be $1-\pi'$ occurring with probability $\pi'$, & $0-\pi'$ occurring with probability $1-\pi'$. So there's no common error distribution independent of predictor values, which is why people say "no error term exists" (1). "The error term has a binomial distribution" (2) is just sloppiness—"Gaussian models have Gaussian errors, ergo binomial models have binomial errors". (Or, as @whuber points out, it could be taken to mean "the difference between an observation and its expectation has a binomial distribution translated by the expectation".) "The error term has a logistic distribution" (3) arises from the derivation of logistic regression from the model where you observe whether or not a latent variable with errors following a logistic distribution exceeds some threshold. So it's not the same error defined above. (It would seem an odd thing to say IMO outside that context, or without explicit reference to the latent variable.) † If you have $k$ observations with the same predictor values, giving the same probability $\pi$ for each, then their sum $\sum y$ follows a binomial distribution with probability $\pi$ and no. trials $k$. Considering $\sum y -k\pi$ as the error leads to the same conclusions.
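To make the 'only two possible errors at each predictor value' point concrete, here is a small simulation sketch (logistic mean function and coefficients chosen arbitrarily):

import numpy as np

rng = np.random.default_rng(8)
x = np.repeat([-1.0, 0.0, 2.0], 50_000)            # three fixed predictor values
pi = 1 / (1 + np.exp(-(0.5 + 1.2 * x)))            # conditional mean at each x
y = rng.binomial(1, pi)
err = y - pi

for xv in (-1.0, 0.0, 2.0):
    e = err[x == xv]
    p = pi[x == xv][0]
    # the error takes only two values, 1 - pi and -pi, and its variance is pi(1 - pi)
    print(xv, np.unique(np.round(e, 6)), round(e.var(), 4), round(p * (1 - p), 4))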
3,596
Logistic Regression - Error Term and its Distribution
This has been covered before. A model that is constrained to have predicted values in $[0,1]$ cannot possibly have an additive error term that would make the predictions go outside $[0,1]$. Think of the simplest example of a binary logistic model -- a model containing only an intercept. This is equivalent to the Bernoulli one-sample problem, often called (in this simple case) the binomial problem because (1) all the information is contained in the sample size and number of events or (2) the Bernoulli distribution is a special case of the binomial distribution with $n=1$. The raw data in this situation are a series of binary values, and each has a Bernoulli distribution with unknown parameter $\theta$ representing the probability of the event. There is no error term in the Bernoulli distribution, there's just an unknown probability. The logistic model is a probability model.
3,597
Logistic Regression - Error Term and its Distribution
To me the unification of logistic, linear, Poisson regression etc. has always been in terms of the specification of the mean and variance in the Generalized Linear Model framework. We start by specifying a probability distribution for our data: normal for continuous data, Bernoulli for dichotomous, Poisson for counts, etc. Then we specify a link function that describes how the mean is related to the linear predictor: $g(\mu_i) = \alpha + x_i^T\beta$.

For linear regression, $g(\mu_i) = \mu_i$. For logistic regression, $g(\mu_i) = \log(\frac{\mu_i}{1-\mu_i})$. For Poisson regression, $g(\mu_i) = \log(\mu_i)$.

The only thing one might be able to consider in terms of writing an error term would be to state $y_i = g^{-1}(\alpha+x_i^T\beta) + e_i$ where $E(e_i) = 0$ and $Var(e_i) = \sigma^2(\mu_i)$. For example, for logistic regression, $\sigma^2(\mu_i) = \mu_i(1-\mu_i) = g^{-1}(\alpha+x_i^T\beta)(1-g^{-1}(\alpha+x_i^T\beta))$. But you cannot explicitly state that $e_i$ has a Bernoulli distribution, as mentioned above.

Note, however, that basic Generalized Linear Models only assume a structure for the mean and variance of the distribution. It can be shown that the estimating equations and the Hessian matrix only depend on the mean and variance you assume in your model. So you don't necessarily need to be concerned with the distribution of $e_i$ for this model, because the higher-order moments don't play a role in the estimation of the model parameters.
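These mean and variance specifications are exactly what a GLM fit uses. A hypothetical R sketch (simulated data; the names are illustrative) showing the fitted means $g^{-1}(\hat\alpha + x_i^T\hat\beta)$ and the implied variance function $\hat\mu_i(1-\hat\mu_i)$ for a logistic model:

set.seed(3)
x  <- rnorm(500)
mu <- plogis(-0.5 + 2 * x)                 # inverse link applied to the linear predictor
y  <- rbinom(500, size = 1, prob = mu)     # assumed/simulated binary responses
fit    <- glm(y ~ x, family = binomial(link = "logit"))
mu.hat <- fitted(fit)                      # estimated conditional means
v.hat  <- mu.hat * (1 - mu.hat)            # variance implied by the mean
head(cbind(mu.hat, v.hat))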
3,598
Logistic Regression - Error Term and its Distribution
"No errors exist. We are modeling the mean! The mean is just a true number." This doesn't make sense to me. Think of the response variable as driven by a latent variable: if you assume the error term of that latent variable is normally distributed, then the model becomes a probit model; if you assume the distribution of the error term is logistic, then the model is logistic regression.
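That latent-variable derivation can be checked by simulation. In the hypothetical R sketch below (coefficients and data chosen only for illustration), a latent variable with logistic errors is thresholded at zero, and a logit fit approximately recovers the assumed coefficients; with normal errors, a probit fit plays the same role.

set.seed(4)
n <- 5000
x <- rnorm(n)
e <- rlogis(n)                      # logistic errors on the latent scale
y <- as.numeric(1 + 2 * x + e > 0)  # we only observe the threshold crossing
coef(glm(y ~ x, family = binomial(link = "logit")))   # approximately (1, 2)
# Replacing rlogis(n) with rnorm(n) and using link = "probit" is the probit analogue.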
3,599
How exactly does a "random effects model" in econometrics relate to mixed models outside of econometrics?
Summary: the "random-effects model" in econometrics and a "random intercept mixed model" are indeed the same models, but they are estimated in different ways. The econometrics way is to use FGLS, and the mixed model way is to use ML. There are different algorithms of doing FGLS, and some of them (on this dataset) produce results that are very close to ML. 1. Differences between estimation methods in plm I will answer with my testing on plm(..., model = "random") and lmer(), using the data generated by @ChristophHanck. According to the plm package manual, there are four options for random.method: the method of estimation for the variance components in the random effects model. @amoeba used the default one swar (Swamy and Arora, 1972). For random effects models, four estimators of the transformation parameter are available by setting random.method to one of "swar" (Swamy and Arora (1972)) (default), "amemiya" (Amemiya (1971)), "walhus" (Wallace and Hussain (1969)), or "nerlove" (Nerlove (1971)). I tested all the four options using the same data, getting an error for amemiya, and three totally different coefficient estimates for the variable stackX. The ones from using random.method='nerlove' and 'amemiya' are nearly equivalent to that from lmer(), -1.029 and -1.025 vs -1.026. They are also not very different from that obtained in the "fixed-effects" model, -1.045. # "amemiya" only works using the most recent version: # install.packages("plm", repos="http://R-Forge.R-project.org") re0 <- plm(stackY~stackX, data = paneldata, model = "random") #random.method='swar' re1 <- plm(stackY~stackX, data = paneldata, model = "random", random.method='amemiya') re2 <- plm(stackY~stackX, data = paneldata, model = "random", random.method='walhus') re3 <- plm(stackY~stackX, data = paneldata, model = "random", random.method='nerlove') l2 <- lmer(stackY~stackX+(1|as.factor(unit)), data = paneldata) coef(re0) # (Intercept) stackX 18.3458553 0.7703073 coef(re1) # (Intercept) stackX 30.217721 -1.025186 coef(re2) # (Intercept) stackX -1.15584 3.71973 coef(re3) # (Intercept) stackX 30.243678 -1.029111 fixef(l2) # (Intercept) stackX 30.226295 -1.026482 Unfortunately I do not have time right now, but interested readers can find the four references, to check their estimation procedures. It would be very helpful to figure out why they make such a difference. I expect that for some cases, the plm estimation procedure using the lm() on transformed data should be equivalent to the maximum likelihood procedure utilized in lmer(). 2. Comparison between GLS and ML The authors of plm package did compare the two in Section 7 of their paper: Yves Croissant and Giovanni Millo, 2008, Panel Data Econometrics in R: The plm package. Econometrics deal mostly with non-experimental data. Great emphasis is put on specification procedures and misspecification testing. Model specifications tend therefore to be very simple, while great attention is put on the issues of endogeneity of the regressors, dependence structures in the errors and robustness of the estimators under deviations from normality. The preferred approach is often semi- or non-parametric, and heteroskedasticity-consistent techniques are becoming standard practice both in estimation and testing. For all these reasons, [...] panel model estimation in econometrics is mostly accomplished in the generalized least squares framework based on Aitken’s Theorem [...]. 
On the contrary, longitudinal data models in nlme and lme4 are estimated by (restricted or unrestricted) maximum likelihood. [...] The econometric GLS approach has closed-form analytical solutions computable by standard linear algebra and, although the latter can sometimes get computationally heavy on the machine, the expressions for the estimators are usually rather simple. ML estimation of longitudinal models, on the contrary, is based on numerical optimization of nonlinear functions without closed-form solutions and is thus dependent on approximations and convergence criteria. 3. Update on mixed models I appreciate that @ChristophHanck provided a thorough introduction about the four random.method used in plm and explained why their estimates are so different. As requested by @amoeba, I will add some thoughts on the mixed models (likelihood-based) and its connection with GLS. The likelihood-based method usually assumes a distribution for both the random effect and the error term. A normal distribution assumption is commonly used, but there are also some studies assuming a non-normal distribution. I will follow @ChristophHanck's notations for a random intercept model, and allow unbalanced data, i.e., let $T=n_i$. The model is \begin{equation} y_{it}= \boldsymbol x_{it}^{'}\boldsymbol\beta + \eta_i + \epsilon_{it}\qquad i=1,\ldots,m,\quad t=1,\ldots,n_i \end{equation} with $\eta_i \sim N(0,\sigma^2_\eta), \epsilon_{it} \sim N(0,\sigma^2_\epsilon)$. For each $i$, $$\boldsymbol y_i \sim N(\boldsymbol X_{i}\boldsymbol\beta, \boldsymbol\Sigma_i), \qquad\boldsymbol\Sigma_i = \sigma^2_\eta \boldsymbol 1_{n_i} \boldsymbol 1_{n_i}^{'} + \sigma^2_\epsilon \boldsymbol I_{n_i}.$$ So the log-likelihood function is $$const -\frac{1}{2} \sum_i\mathrm{log}|\boldsymbol\Sigma_i| - \frac{1}{2} \sum_i(\boldsymbol y_i - \boldsymbol X_{i}\boldsymbol\beta)^{'}\boldsymbol\Sigma_i^{-1}(\boldsymbol y_i - \boldsymbol X_{i}\boldsymbol\beta).$$ When all the variances are known, as shown in Laird and Ware (1982), the MLE is $$\hat{\boldsymbol\beta} = \left(\sum_i\boldsymbol X_i^{'} \boldsymbol\Sigma_i^{-1} \boldsymbol X_i \right)^{-1} \left(\sum_i \boldsymbol X_i^{'} \boldsymbol\Sigma_i^{-1} \boldsymbol y_i \right),$$ which is equivalent to the GLS $\hat\beta_{RE}$ derived by @ChristophHanck. So the key difference is in the estimation for the variances. Given that there is no closed-form solution, there are several approaches: directly maximization of the log-likelihood function using optimization algorithms; Expectation-Maximization (EM) algorithm: closed-form solutions exist, but the estimator for $\boldsymbol \beta$ involves empirical Bayesian estimates of the random intercept; a combination of the above two, Expectation/Conditional Maximization Either (ECME) algorithm (Schafer, 1998; R package lmm). With a different parameterization, closed-form solutions for $\boldsymbol \beta$ (as above) and $\sigma^2_\epsilon$ exist. The solution for $\sigma^2_\epsilon$ can be written as $$\sigma^2_\epsilon = \frac{1}{\sum_i n_i}\sum_i(\boldsymbol y_i - \boldsymbol X_{i} \hat{\boldsymbol\beta})^{'}(\hat\xi \boldsymbol 1_{n_i} \boldsymbol 1_{n_i}^{'} + \boldsymbol I_{n_i})^{-1}(\boldsymbol y_i - \boldsymbol X_{i} \hat{\boldsymbol\beta}),$$ where $\xi$ is defined as $\sigma^2_\eta/\sigma^2_\epsilon$ and can be estimated in an EM framework. In summary, MLE has distribution assumptions, and it is estimated in an iterative algorithm. The key difference between MLE and GLS is in the estimation for the variances. 
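As a numerical illustration of that shared formula, here is a hypothetical R sketch (simulated random-intercept data; the variance components are treated as known, which is precisely the step that FGLS and ML handle differently) computing $\hat{\boldsymbol\beta}=(\sum_i \boldsymbol X_i^{'}\boldsymbol\Sigma_i^{-1}\boldsymbol X_i)^{-1}\sum_i \boldsymbol X_i^{'}\boldsymbol\Sigma_i^{-1}\boldsymbol y_i$ directly:

# Sketch: with the variance components known, GLS and the MLE for beta coincide.
set.seed(5)
m <- 50; n_i <- 5
sigma2_eta <- 4; sigma2_eps <- 1
unit <- rep(1:m, each = n_i)
x    <- rnorm(m * n_i)
y    <- 2 + 1.5 * x + rep(rnorm(m, 0, sqrt(sigma2_eta)), each = n_i) +
        rnorm(m * n_i, 0, sqrt(sigma2_eps))

X <- cbind(1, x)
A <- matrix(0, 2, 2); b <- c(0, 0)
for (i in 1:m) {
  rows  <- which(unit == i)
  Sigma <- sigma2_eta * matrix(1, n_i, n_i) + sigma2_eps * diag(n_i)  # Sigma_i
  W     <- solve(Sigma)
  A <- A + t(X[rows, ]) %*% W %*% X[rows, ]
  b <- b + t(X[rows, ]) %*% W %*% y[rows]
}
solve(A, b)   # GLS/ML estimate of (intercept, slope), approximately (2, 1.5)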
Croissant and Millo (2008) pointed out that "while under normality, homoskedasticity and no serial correlation of the errors OLS are also the maximum likelihood estimator, in all the other cases there are important differences." In my opinion the distributional assumption works much like the parametric vs. non-parametric distinction: MLE will be more efficient when the assumption holds, while GLS will be more robust.
3,600
How exactly does a "random effects model" in econometrics relate to mixed models outside of econometrics?
This answer doesn't comment on mixed models, but I can explain what the random-effects estimator does and why it screws up on that graph. Summary: the random-effects estimator assumes $E[u_i \mid x ] = 0$, which is not true in this example.

What is the random-effects estimator doing?

Assume we have the model
$$ y_{it} = \beta x_{it} + u_i + \epsilon_{it}.$$
We have two dimensions of variation: groups $i$ and time $t$. To estimate $\beta$ we could:

1. Only use the time-series variation within a group. This is what the fixed-effects estimator does (and this is why it's also often called the within estimator).

2. If $u_i$ is random, use only the cross-sectional variation between the time-series means of the groups. This is known as the between estimator. Specifically, for each group $i$, take the average over time of the panel data model above to get
$$ \bar{y}_{i} = \beta \bar{x}_{i} + v_i \quad \quad \text{ where } v_i = u_i + \bar{\epsilon}_i.$$
If we run this regression, we get the between estimator. Observe that it is a consistent estimator if the effects $u_i$ are random white noise, uncorrelated with $x$! If this is the case, then completely tossing the between-group variation (as we do with the fixed-effects estimator) is inefficient.

The random-effects estimator of econometrics combines (1) the within estimator (i.e. the fixed-effects estimator) and (2) the between estimator in a way that maximizes efficiency. It is an application of generalized least squares, and the basic idea is inverse-variance weighting. To maximize efficiency, the random-effects estimator calculates $\hat \beta$ as a weighted average of the within estimator and the between estimator.

What's going on in that graph...

Just eyeballing that graph, you can clearly see what's going on:

- Within each group $i$ (i.e. dots of the same color), a higher $x_{it}$ is associated with a lower $y_{it}$.
- A group $i$ with a higher $\bar{x}_i$ has a higher $u_i$.

The random-effects assumption that $E[u_i \mid x ] = 0$ is clearly not satisfied. The group effects $u_i$ are not orthogonal to $x$ (in a statistical sense); rather, the group effects have a clear positive relationship with $x$. The between estimator assumes $E[u_i \mid x ] = 0$. The between estimator says, "sure I can impose $E[u_i \mid x ] = 0$, by making $\hat \beta$ positive!" Then in turn, the random-effects estimator is off because it's a weighted average of the within estimator and the between estimator.
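The mechanism is easy to reproduce with a hypothetical simulation (data made up to mimic the graph: a negative within-group slope, with group effects $u_i$ increasing in $\bar{x}_i$). In the R sketch below the within estimator recovers the true slope while the between estimator is pulled positive, so any weighted average of the two is dragged away from the within estimate.

set.seed(6)
m <- 10; Tn <- 20
unit <- rep(1:m, each = Tn)
xbar <- rep(seq(1, 10, length.out = m), each = Tn)  # group-level means of x
x    <- xbar + rnorm(m * Tn)
u    <- 3 * xbar                                    # group effects with E[u_i | x] != 0
y    <- -1 * x + u + rnorm(m * Tn)                  # true within-group slope is -1

# Within (fixed-effects) estimator: demean x and y within groups
xw <- x - ave(x, unit); yw <- y - ave(y, unit)
coef(lm(yw ~ xw - 1))                               # close to -1

# Between estimator: regress group means on group means
coef(lm(tapply(y, unit, mean) ~ tapply(x, unit, mean)))[2]   # positive, badly biased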