You now have most of the skills to start statistical inference, but you need one more concept. First, it would be helpful to state what statistical inference is in more precise terms.

Definition $1$: Statistical Inference

Statistical Inference: to make accurate decisions about parameters from statistics.

When it says "accurate decision," you want to be able to measure how accurate. You measure how accurate using probability. In both the binomial and normal distributions, you needed to know that the random variable followed either distribution before you could find probabilities. The same is true here: you need to know how the statistic is distributed, and then you can find probabilities. In other words, you need to know the shape of the distribution of the sample mean, or whatever statistic you want to make a decision about. How is the statistic distributed? This is answered with a sampling distribution.

Definition $2$: Sampling Distribution

Sampling Distribution: how a sample statistic is distributed when repeated trials of size n are taken.

Example $1$: sampling distribution

Suppose you throw a penny and count how often a head comes up. The random variable is x = number of heads. The probability distribution (pdf) of this random variable is presented in Figure $1$.

Solution

Repeat this experiment 10 times, which means n = 10. Here is the data set: {1, 1, 1, 1, 0, 0, 0, 0, 0, 0}. The mean of this sample is 0.4. Now take another sample. Here is that data set: {1, 1, 1, 0, 1, 0, 1, 1, 0, 0}. The mean of this sample is 0.6. Another sample looks like: {0, 1, 0, 1, 1, 1, 1, 1, 0, 1}. The mean of this sample is 0.7. Repeat this 40 times. You could get these means:

0.4 0.6 0.7 0.3 0.3 0.2 0.5 0.5 0.5 0.5
0.4 0.4 0.5 0.7 0.7 0.6 0.4 0.4 0.4 0.6
0.7 0.7 0.3 0.5 0.6 0.3 0.3 0.8 0.3 0.6
0.4 0.3 0.5 0.6 0.5 0.6 0.3 0.5 0.6 0.2

Table $1$: Sample Means When n=10

Table $2$ contains the distribution of these sample means (just count how many of each number there are and then divide by 40 to obtain the relative frequency).

Sample Mean   Probability
0.1           0
0.2           0.05
0.3           0.2
0.4           0.175
0.5           0.225
0.6           0.2
0.7           0.125
0.8           0.025
0.9           0

Table $2$: Distribution of Sample Means When n=10

Figure $2$ contains the histogram of these sample means. This distribution (represented graphically by the histogram) is a sampling distribution. That is all a sampling distribution is: a distribution created from statistics. Notice the histogram does not look anything like the histogram of the original random variable. It also doesn't look anything like a normal distribution, which is the only distribution for which you really know how to find probabilities. Granted, you also have the binomial, but the normal is easier to work with.

What does this distribution look like if, instead of flipping the coin 10 times, you flip it 20 times? Table $3$ contains 40 sample means when the experiment of flipping the coin is repeated 20 times.

0.5 0.45 0.7 0.55 0.65 0.6 0.4 0.35 0.45 0.6
0.5 0.5 0.65 0.5 0.5 0.35 0.55 0.4 0.65 0.3
0.4 0.5 0.45 0.45 0.65 0.7 0.6 0.5 0.7 0.7
0.7 0.45 0.35 0.6 0.65 0.55 0.35 0.4 0.55 0.6

Table $3$: Sample Means When n=20

Table $4$ contains the sampling distribution of these sample means.

Mean   Probability
0.1    0
0.2    0
0.3    0.125
0.4    0.2
0.5    0.3
0.6    0.25
0.7    0.125
0.8    0
0.9    0

Table $4$: Distribution of Sample Means When n=20

This histogram of the sampling distribution is displayed in Figure $3$. Notice this histogram of the sample mean looks approximately symmetrical and could almost be called normal. What if you keep increasing n? What will the sampling distribution of the sample mean look like?
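If you have R available, you can generate a sampling distribution like this one yourself rather than flipping pennies. The short sketch below is not from the text, and the variable names and seed are arbitrary choices; it draws 40 samples of n coin flips, computes each sample mean, and tabulates the relative frequencies in the spirit of Table $2$.

set.seed(42)                       # arbitrary seed, only so the run is reproducible
n <- 10                            # flips per sample; change to 20 to mimic Table 3
reps <- 40                         # number of samples, as in Table 1
xbar <- replicate(reps, mean(rbinom(n, size = 1, prob = 0.5)))  # 40 sample means
table(xbar) / reps                 # relative frequencies, like Table 2
hist(xbar)                         # histogram of the sampling distribution, like Figure 2

Rerunning the sketch with n set to 20, or with reps raised into the thousands, shows the histogram becoming tighter and more bell shaped, which is exactly the question the next paragraph takes up.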
In other words, what does the sampling distribution of $\overline{x}$ look like as n gets even larger? This depends on the distribution of the original random variable. In Example $1$, the original random variable looked uniform, yet by the time n increased to 20, the distribution of the sample mean looked approximately normal. What if the original distribution were normal? How big would n have to be? Before that question is answered, another concept is needed.

Note

Suppose you have a random variable that has a population mean, $\mu$, and a population standard deviation, $\sigma$. If a sample of size n is taken, then the sample mean, $\overline{x}$, has a mean $\mu_{\overline{x}}=\mu$ and a standard deviation of $\sigma_{\overline{x}}=\dfrac{\sigma}{\sqrt{n}}$.

The standard deviation of $\overline{x}$ is lower because taking the mean averages out the extreme values, so the distribution of $\overline{x}$ is less spread out than the distribution of the original random variable.

You now know the center and the variability of $\overline{x}$. You also want to know the shape of the distribution of $\overline{x}$. You hope it is normal, since you know how to find probabilities using the normal curve. The following theorem tells you the requirement to have $\overline{x}$ normally distributed.

Theorem $1$: central limit theorem

Suppose a random variable is from any distribution. If a sample of size n is taken, then the sample mean, $\overline{x}$, becomes approximately normally distributed as n increases.

What this says is that no matter what x looks like, $\overline{x}$ will look normal if n is large enough. Now, what size of n is large enough? That depends on how x is distributed in the first place. If the original random variable is normally distributed, then n just needs to be 2 or more data points. If the original random variable is somewhat mound shaped and symmetrical, then n needs to be greater than or equal to 30. Sometimes the sample size can be smaller, but this is a good rule of thumb. The sample size may have to be much larger if the original random variable is really skewed one way or another.

Now that you know when the sample mean will look like a normal distribution, you can find probabilities related to the sample mean. Remember that the mean of the sample mean is just the mean of the original data ($\mu_{\overline{x}}=\mu$), but the standard deviation of the sample mean, $\sigma_{\overline{x}}$, also known as the standard error of the mean, is $\sigma_{\overline{x}}=\dfrac{\sigma}{\sqrt{n}}$. Make sure you use this in all calculations. If you are using the z-score, the formula when working with $\overline{x}$ is $z=\dfrac{\overline{x}-\mu_{\overline{x}}}{\sigma_{\overline{x}}}=\dfrac{\overline{x}-\mu}{\sigma / \sqrt{n}}$. If you are using the TI-83/84 calculator, then the input would be normalcdf(lower limit, upper limit, $\mu$, $\sigma / \sqrt{n}$). If you are using R, then the input would be pnorm($\overline{x}, \mu, \sigma / \operatorname{sqrt}(n)$) to find the area to the left of $\overline{x}$. Remember to subtract pnorm($\overline{x}, \mu, \sigma / \operatorname{sqrt}(n)$) from 1 if you want the area to the right of $\overline{x}$.

Example $2$: Finding probabilities for sample means

The birth weight of boy babies of European descent who were delivered at 40 weeks is normally distributed with a mean of 3687.6 g and a standard deviation of 410.5 g (Janssen, Thiessen, Klein, Whitfield, MacNab & Cullis-Kuhl, 2007).
Suppose there were nine European descent boy babies born on a given day and the mean birth weight is calculated.

1. State the random variable.
2. What is the mean of the sample mean?
3. What is the standard deviation of the sample mean?
4. What distribution is the sample mean distributed as?
5. Find the probability that the mean weight of the nine boy babies born was less than 3500.4 g.
6. Find the probability that the mean weight of the nine boy babies born was less than 3452.5 g.

Solution

a. x = birth weight of boy babies (Note: the random variable is something you measure, and it is not the mean birth weight. The mean birth weight is calculated.)

b. $\mu_{\overline{x}}=\mu=3687.6 \mathrm{g}$

c. $\sigma_{\overline{x}}=\dfrac{\sigma}{\sqrt{n}}=\dfrac{410.5}{\sqrt{9}}=\dfrac{410.5}{3} \approx 136.8 \mathrm{g}$

d. Since the original random variable is distributed normally, the sample mean is distributed normally.

e. You are looking for $P(\overline{x}<3500.4)$. You use the normalcdf command on the calculator. Remember to use the standard deviation you found in part c. However, to reduce rounding error, type the division into the command.

On the TI-83/84 you would have $P(\overline{x}<3500.4)=\text { normalcdf }(-1 E 99,3500.4,3687.6,410.5 \div \sqrt{9}) \approx 0.086$

On R you would have $P(\overline{x}<3500.4)=\text { pnorm }(3500.4,3687.6,410.5 / \operatorname{sqrt}(9)) \approx 0.086$

There is an 8.6% chance that the mean birth weight of the nine boy babies born would be less than 3500.4 g. Since this is more than 5%, this is not unusual.

f. You are looking for $P(\overline{x}<3452.5)$.

On the TI-83/84: $P(\overline{x}<3452.5)=\text { normalcdf }(-1 E 99,3452.5,3687.6,410.5 \div \sqrt{9}) \approx 0.043$

On R: $P(\overline{x}<3452.5)=\text { pnorm }(3452.5,3687.6,410.5 / \operatorname{sqrt}(9)) \approx 0.043$

There is a 4.3% chance that the mean birth weight of the nine boy babies born would be less than 3452.5 g. Since this is less than 5%, this would be an unusual event. If it actually happened, then you may think there is something unusual about this sample. Maybe some of the nine babies were born as multiples, which brings the mean weight down, or some or all of the babies were not of European descent (in fact, the mean weight of South Asian boy babies is 3452.5 g), or some were born before 40 weeks, or the babies were born at high altitudes.

Example $3$: finding probabilities for sample means

The age that American females first have intercourse is on average 17.4 years, with a standard deviation of approximately 2 years ("The Kinsey institute," 2013). This random variable is not normally distributed, though it is somewhat mound shaped.

1. State the random variable.
2. Suppose a sample of 35 American females is taken. Find the probability that the mean age that these 35 females first had intercourse is more than 21 years.

Solution

a. x = age that American females first have intercourse.

b. Even though the original random variable is not normally distributed, the sample size is over 30, so by the central limit theorem the sample mean will be normally distributed. The mean of the sample mean is $\mu_{\overline{x}}=\mu=17.4$ years. The standard deviation of the sample mean is $\sigma_{\overline{x}}=\dfrac{\sigma}{\sqrt{n}}=\dfrac{2}{\sqrt{35}} \approx 0.33806$. You have all the information you need to use the normal command on your technology. Without the central limit theorem, you couldn't use the normal command, and you would not be able to answer this question.
On the TI-83/84: $P(\overline{x}>21)=\text { normalcdf }(21,1 E 99,17.4,2 \div \sqrt{35}) \approx 9.0 \times 10^{-27}$ On R: $P(\overline{x}>21)=1-\text { pnorm }(21,17.4,2 / \operatorname{sqrt} (35)) \approx 9.0 \times 10^{-27}$ The probability of a sample mean of 35 women being more than 21 years when they had their first intercourse is very small. This is extremely unlikely to happen. If it does, it may make you wonder about the sample. Could the population mean have increased from the 17.4 years that was stated in the article? Could the sample not have been random, and instead have been a group of women who had similar beliefs about intercourse? These questions, and more, are ones that you would want to ask as a researcher. Homework Exercise $1$ 1. A random variable is not normally distributed, but it is mound shaped. It has a mean of 14 and a standard deviation of 3. 1. If you take a sample of size 10, can you say what the shape of the sampling distribution for the sample mean is? Why? 2. For a sample of size 10, state the mean of the sample mean and the standard deviation of the sample mean. 3. If you take a sample of size 35, can you say what the shape of the distribution of the sample mean is? Why? 4. For a sample of size 35, state the mean of the sample mean and the standard deviation of the sample mean. 2. A random variable is normally distributed. It has a mean of 245 and a standard deviation of 21. 1. If you take a sample of size 10, can you say what the shape of the distribution for the sample mean is? Why? 2. For a sample of size 10, state the mean of the sample mean and the standard deviation of the sample mean. 3. For a sample of size 10, find the probability that the sample mean is more than 241. 4. If you take a sample of size 35, can you say what the shape of the distribution of the sample mean is? Why? 5. For a sample of size 35, state the mean of the sample mean and the standard deviation of the sample mean. 6. For a sample of size 35, find the probability that the sample mean is more than 241. 7. Compare your answers in part d and f. Why is one smaller than the other? 3. The mean starting salary for nurses is $67,694 nationally ("Staff nurse -," 2013). The standard deviation is approximately$10,333. The starting salary is not normally distributed but it is mound shaped. A sample of 42 starting salaries for nurses is taken. 1. State the random variable. 2. What is the mean of the sample mean? 3. What is the standard deviation of the sample mean? 4. What is the shape of the sampling distribution of the sample mean? Why? 5. Find the probability that the sample mean is more than $75,000. 6. Find the probability that the sample mean is less than$60,000. 7. If you did find a sample mean of more than \$75,000 would you find that unusual? What could you conclude? 4. According to the WHO MONICA Project the mean blood pressure for people in China is 128 mmHg with a standard deviation of 23 mmHg (Kuulasmaa, Hense & Tolonen, 1998). Blood pressure is normally distributed. 1. State the random variable. 2. Suppose a sample of size 15 is taken. State the shape of the distribution of the sample mean. 3. Suppose a sample of size 15 is taken. State the mean of the sample mean. 4. Suppose a sample of size 15 is taken. State the standard deviation of the sample mean. 5. Suppose a sample of size 15 is taken. Find the probability that the sample mean blood pressure is more than 135 mmHg. 6. Would it be unusual to find a sample mean of 15 people in China of more than 135 mmHg? 
Why or why not? 7. If you did find a sample mean for 15 people in China to be more than 135 mmHg, what might you conclude? 5. The size of fish is very important to commercial fishing. A study conducted in 2012 found the length of Atlantic cod caught in nets in Karlskrona to have a mean of 49.9 cm and a standard deviation of 3.74 cm (Ovegard, Berndt & Lunneryd, 2012). The length of fish is normally distributed. A sample of 15 fish is taken. 1. State the random variable. 2. Find the mean of the sample mean. 3. Find the standard deviation of the sample mean 4. What is the shape of the distribution of the sample mean? Why? 5. Find the probability that the sample mean length of the Atlantic cod is less than 52 cm. 6. Find the probability that the sample mean length of the Atlantic cod is more than 74 cm. 7. If you found sample mean length for Atlantic cod to be more than 74 cm, what could you conclude? 6. The mean cholesterol levels of women age 45-59 in Ghana, Nigeria, and Seychelles is 5.1 mmol/l and the standard deviation is 1.0 mmol/l (Lawes, Hoorn, Law & Rodgers, 2004). Assume that cholesterol levels are normally distributed. 1. State the random variable. 2. Find the probability that a woman age 45-59 in Ghana has a cholesterol level above 6.2 mmol/l (considered a high level). 3. Suppose doctors decide to test the woman’s cholesterol level again and average the two values. Find the probability that this woman’s mean cholesterol level for the two tests is above 6.2 mmol/l. 4. Suppose doctors being very conservative decide to test the woman’s cholesterol level a third time and average the three values. Find the probability that this woman’s mean cholesterol level for the three tests is above 6.2 mmol/l. 5. If the sample mean cholesterol level for this woman after three tests is above 6.2 mmol/l, what could you conclude? 7. In the United States, males between the ages of 40 and 49 eat on average 103.1 g of fat every day with a standard deviation of 4.32 g ("What we eat," 2012). The amount of fat a person eats is not normally distributed but it is relatively mound shaped. 1. State the random variable. 2. Find the probability that a sample mean amount of daily fat intake for 35 men age 40-59 in the U.S. is more than 100 g. 3. Find the probability that a sample mean amount of daily fat intake for 35 men age 40-59 in the U.S. is less than 93 g. 4. If you found a sample mean amount of daily fat intake for 35 men age 40-59 in the U.S. less than 93 g, what would you conclude? 8. A dishwasher has a mean life of 12 years with an estimated standard deviation of 1.25 years ("Appliance life expectancy," 2013). The life of a dishwasher is normally distributed. Suppose you are a manufacturer and you take a sample of 10 dishwashers that you made. 1. State the random variable. 2. Find the mean of the sample mean. 3. Find the standard deviation of the sample mean. 4. What is the shape of the sampling distribution of the sample mean? Why? 5. Find the probability that the sample mean of the dishwashers is less than 6 years. 6. If you found the sample mean life of the 10 dishwashers to be less than 6 years, would you think that you have a problem with the manufacturing process? Why or why not? Answer 1. a. See solutions, b. $\mu_{\mathrm{\overline{x}}}=14$, $\sigma_{\overline{x}}=0.9487$, c. See solutions, d. $\mu_{\mathrm{\overline{x}}}=14$, $\sigma_{\overline{x}}=0.5071$ 3. a. See solutions, b. $\mu_{\mathrm{\overline{x}}}=\ 67,694$, c. $\sigma_{\overline{x}}=\ 1594.42$, d. See solutions, e. 
$P(\overline{x}>\ 75,000)=2.302 \times 10^{-6}$, f. $P(\overline{x}<\ 60,000)=6.989 \times 10^{-7}$, g. See solutions 5. a. See solutions, b. $\mu_{\mathrm{\overline{x}}}=49.9 \mathrm{cm}$, c. $\sigma_{\overline{x}}=0.9657 \mathrm{cm}$, d. See solutions, e. $P(\overline{x}<52 \mathrm{cm})=0.9852$ f. $P(\overline{x}>74 \mathrm{cm}) \approx 0$, g. See solutions 7. a. See solutions, b. $P(\overline{x}>100 \mathrm{g})=0.99999$, c. $P(\overline{x}<93 \mathrm{g}) \approx 0$ or $8.22 \times 10^{-44}$, d. See solutions Data Sources: Annual maximums of daily rainfall in Sydney. (2013, September 25). Retrieved from http://www.statsci.org/data/oz/sydrain.html Appliance life expectancy. (2013, November 8). Retrieved from http://www.mrappliance.com/expert/life-guide/ Bhat, R., & Kushtagi, P. (2006). A re-look at the duration of human pregnancy. Singapore Med J., 47(12), 1044-8. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/17139400 College Board, SAT. (2012). Total group profile report. Retrieved from website: media.collegeboard.com/digita...lGroup2012.pdf Greater Cleveland Regional Transit Authority, (2012). 2012 annual report. Retrieved from website: http://www.riderta.com/annual/2012 Janssen, P. A., Thiessen, P., Klein, M. C., Whitfield, M. F., MacNab, Y. C., & CullisKuhl, S. C. (2007). Standards for the measurement of birth weight, length and head circumference at term in neonates of european, chinese and south asian ancestry. Open Medicine, 1(2), e74-e88. Retrieved from http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2802014/ Kiama blowhole eruptions. (2013, September 25). Retrieved from http://www.statsci.org/data/oz/kiama.html Kuulasmaa, K., Hense, H., & Tolonen, H. World Health Organization (WHO), WHO Monica Project. (1998). Quality assessment of data on blood pressure in the who monica project (ISSN 2242-1246). Retrieved from WHO MONICA Project e-publications website: http://www.thl.fi/publications/monica/bp/bpqa.htm Lawes, C., Hoorn, S., Law, M., & Rodgers, A. (2004). High cholesterol. In M. Ezzati, A. Lopez, A. Rodgers & C. Murray (Eds.), Comparative Quantification of Health Risks (1 ed., Vol. 1, pp. 391-496). Retrieved from http://www.who.int/publications/cra/.../0391-0496.pdf Ovegard, M., Berndt, K., & Lunneryd, S. (2012). Condition indices of atlantic cod (gadus morhua) biased by capturing method. ICES Journal of Marine Science, doi: 10.1093/icesjms/fss145 Staff nurse - RN salary. (2013, November 08). Retrieved from http://www1.salary.com/Staff-Nurse-RN-salary.html The Kinsey institute - sexuality information links. (2013, November 08). Retrieved from www.iub.edu/~kinsey/resources/FAQ.html US Department of Argriculture, Agricultural Research Service. (2012). What we eat in America. Retrieved from website: http://www.ars.usda.gov/Services/docs.htm?docid=18349
Now that you have all this information about descriptive statistics and probabilities, it is time to start inferential statistics. There are two branches of inferential statistics: hypothesis testing and confidence intervals.

Definition $1$

Hypothesis Testing: making a decision about a parameter(s) based on a statistic(s).

Definition $2$

Confidence Interval: estimating a parameter(s) based on a statistic(s).

• 7.1: Basics of Hypothesis Testing
• 7.2: One-Sample Proportion Test
• 7.3: One-Sample Test for the Mean

To understand the process of a hypothesis test, you need to first have an understanding of what a hypothesis is, which is an educated guess about a parameter. Once you have the hypothesis, you collect data and use the data to make a determination to see if there is enough evidence to show that the hypothesis is true. However, in hypothesis testing you actually assume something else is true, and then you look at your data to see how likely it is to get an event that your data demonstrates with that assumption. If the event is very unusual, then you might think that your assumption is actually false. If you are able to say this assumption is false, then your hypothesis must be true. This is known as a proof by contradiction. You assume the opposite of your hypothesis is true and show that it can't be true. If this happens, then your hypothesis must be true. All hypothesis tests go through the same process. Once you have the process down, then the concept is much easier. It is easier to see the process by looking at an example. Concepts that are needed will be detailed in this example.

Example $1$: basics of hypothesis testing

Suppose a manufacturer of the XJ35 battery claims the mean life of the battery is 500 days with a standard deviation of 25 days. You are the buyer of this battery and you think this claim is inflated. You would like to test your belief because without a good reason you can't get out of your contract. What do you do?

Solution

Well first, you should know what you are trying to measure. Define the random variable.

Let x = life of an XJ35 battery

Now you are not just trying to find different x values. You are trying to find what the true mean is. Since you are trying to find it, it must be unknown. You don't think it is 500 days. If you did, you wouldn't be doing any testing. The true mean, $\mu$, is unknown. That means you should define that too.

Let $\mu$ = mean life of an XJ35 battery

Now what? You may want to collect a sample. What kind of sample? You could ask the manufacturer to give you batteries, but there is a chance that there could be some bias in the batteries they pick. To reduce the chance of bias, it is best to take a random sample. How big should the sample be? A sample of size 30 or more means that you can use the central limit theorem. Pick a sample of size 30.
Table $1$ contains the data for the sample you collected:

491 485 503 492 482 490 489 495 497 487
493 480 483 504 501 486 478 492 482 502
485 503 497 500 488 475 478 490 487 486

Table $1$: Data on Battery Life

Now what should you do? Looking at the data set, you see some of the values are above 500 and some are below. But looking at all of the numbers is too difficult. It might be helpful to calculate the mean for this sample. The sample mean is $\overline{x} = 490$ days. Looking at the sample mean, one might think that you are right. However, the standard deviation and the sample size also play a role, so maybe you are wrong.

Before going any further, it is time to formalize a few definitions. You have a guess that the mean life of a battery is less than 500 days. This is opposed to what the manufacturer claims. There really are two hypotheses, which are just guesses here: the one that the manufacturer claims and the one that you believe. It is helpful to have names for them.

Definition $1$

Null Hypothesis: the historical value, claim, or product specification. The symbol used is $H_{o}$.

Definition $2$

Alternative Hypothesis: what you want to prove. This is what you want to accept as true when you reject the null hypothesis. There are two symbols that are commonly used for the alternative hypothesis: $H_{A}$ or $H_{1}$. The symbol $H_{A}$ will be used in this book.

In general, the hypotheses look something like this:

$H_{o} : \mu=\mu_{o}$
$H_{A} : \mu<\mu_{o}$

where $\mu_{o}$ just represents the value that the claim says the population mean is actually equal to. Also, $H_{A}$ can be less than, greater than, or not equal to.

For this problem:

$H_{o} : \mu=500$ days, since the manufacturer says the mean life of a battery is 500 days.

$H_{A} : \mu<500$ days, since you believe that the mean life of the battery is less than 500 days.

Now back to the mean. You have a sample mean of 490 days. Is this small enough to believe that you are right and the manufacturer is wrong? How small does it have to be? If you calculated a sample mean of 235, you would definitely believe the population mean is less than 500. But even if you had a sample mean of 435 you would probably believe that the true mean was less than 500. What about 475? Or 483? There is some point where you would stop being so sure that the population mean is less than 500. That point separates the values where you are sure or pretty sure that the mean is less than 500 from the area where you are not so sure. How do you find that point?

Well, it depends on how much error you want to make. Of course you don't want to make any errors, but unfortunately that is unavoidable in statistics. You need to figure out how much error you made with your sample. Take the sample mean, and find the probability of getting another sample mean less than it, assuming for the moment that the manufacturer is right. The idea behind this is that you want to know what the chance is that you could have come up with your sample mean even if the population mean really is 500 days.

You want to find $P\left(\overline{x}<490 | H_{o} \text { is true }\right)=P(\overline{x}<490 | \mu=500)$

To compute this probability, you need to know how the sample mean is distributed. Since the sample size is at least 30, you know the sample mean is approximately normally distributed. Remember $\mu_{\overline{x}}=\mu$ and $\sigma_{\overline{x}}=\dfrac{\sigma}{\sqrt{n}}$. A picture is always useful.
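If you would rather let software do the arithmetic, the following sketch is one way to do it in R. It is not from the text; the object names are arbitrary choices. It enters the sample from Table $1$, computes the sample mean, and finds $P(\overline{x}<490 | \mu=500)$ using the standard error $25 / \sqrt{30}$.

battery <- c(491, 485, 503, 492, 482, 490, 489, 495, 497, 487,
             493, 480, 483, 504, 501, 486, 478, 492, 482, 502,
             485, 503, 497, 500, 488, 475, 478, 490, 487, 486)
xbar <- mean(battery)               # sample mean, about 490 days
se   <- 25 / sqrt(length(battery))  # standard error sigma/sqrt(n), about 4.56
pnorm(xbar, mean = 500, sd = se)    # P(xbar < 490 given mu = 500), about 0.014

The same probability is worked out step by step below, first as a z-score and then with normalcdf and pnorm.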
Before calculating the probability, it is useful to see how many standard deviations away from the mean the sample mean is. Using the formula for the z-score from chapter 6, you find

$z=\dfrac{\overline{x}-\mu_{o}}{\sigma / \sqrt{n}}=\dfrac{490-500}{25 / \sqrt{30}}=-2.19$

This sample mean is more than two standard deviations away from the mean. That seems pretty far, but you should look at the probability too.

On the TI-83/84: $P(\overline{x}<490 | \mu=500)=\text { normalcdf }(-1 E 99,490,500,25 \div \sqrt{30}) \approx 0.0142$

On R: $P(\overline{x}<490 | \mu=500)=\text { pnorm }(490,500,25 / \operatorname{sqrt}(30)) \approx 0.0142$

There is a 1.42% chance that you could find a sample mean less than 490 when the population mean is 500 days. This is really small, so the chances are that the assumption that the population mean is 500 days is wrong, and you can reject the manufacturer's claim. But how do you quantify really small? Is 5% or 10% or 15% really small? How do you decide? Before you answer that question, a couple more definitions are needed.

Definition $3$

Test Statistic: $z=\dfrac{\overline{x}-\mu_{o}}{\sigma / \sqrt{n}}$, since it is calculated as part of the testing of the hypothesis.

Definition $4$

p-value: the probability that the test statistic will take on more extreme values than the observed test statistic, given that the null hypothesis is true. It is the probability that was calculated above.

Now, how small is small enough? To answer that, you really want to know the types of errors you can make. There are actually only two errors that can be made. The first error is if you say that $H_{o}$ is false, when in fact it is true. This means you reject $H_{o}$ when $H_{o}$ was true. The second error is if you say that $H_{o}$ is true, when in fact it is false. This means you fail to reject $H_{o}$ when $H_{o}$ is false. The following table organizes this for you:

                          $H_{o}$ true      $H_{o}$ false
Reject $H_{o}$            Type I error      No error
Fail to reject $H_{o}$    No error          Type II error

Table $2$: Types of Errors

Thus,

Definition $5$

Type I Error is rejecting $H_{o}$ when $H_{o}$ is true.

Definition $6$

Type II Error is failing to reject $H_{o}$ when $H_{o}$ is false.

Since these are the errors, one can define the probabilities attached to each error.

Definition $7$

$\alpha$ = P(type I error) = P(rejecting $H_{o} | H_{o}$ is true)

Definition $8$

$\beta$ = P(type II error) = P(failing to reject $H_{o} | H_{o}$ is false)

$\alpha$ is also called the level of significance. Another common concept that is used is Power = $1-\beta$.

Now there is a relationship between $\alpha$ and $\beta$. They are not complements of each other. How are they related? If $\alpha$ increases, that means the chances of making a type I error will increase. It is more likely that a type I error will occur. It makes sense that you are less likely to make type II errors, only because you will be rejecting $H_{o}$ more often. You will be failing to reject $H_{o}$ less, and therefore the chance of making a type II error will decrease. Thus, as $\alpha$ increases, $\beta$ will decrease, and vice versa. That makes them seem like complements, but they aren't complements. What gives? Consider one more factor: sample size. Consider if you have a larger sample that is representative of the population, then it makes sense that you have more accuracy than with a smaller sample.
Think of it this way: which would you trust more, a sample mean of 490 if you had a sample size of 35 or a sample size of 350 (assuming a representative sample)? Of course the 350, because there are more data points and so more accuracy. If you are more accurate, then there is less chance that you will make any error. By increasing the sample size of a representative sample, you decrease both $\alpha$ and $\beta$.

Summary of all of this:

1. For a certain sample size, n, if $\alpha$ increases, $\beta$ decreases.
2. For a certain level of significance, $\alpha$, if n increases, $\beta$ decreases.

Now how do you find $\alpha$ and $\beta$? Well, $\alpha$ is actually chosen. There are only three values that are usually picked for $\alpha$: 0.01, 0.05, and 0.10. $\beta$ is very difficult to find, so usually it isn't found. If you want to make sure it is small, you take as large a sample as you can afford, provided it is a representative sample. This is one use of the Power. You want $\beta$ to be small and the Power of the test to be large. The word Power sounds good.

Which $\alpha$ do you pick? Well, that depends on what you are working on. Remember in this example you are the buyer who is trying to get out of a contract to buy these batteries. If you make a type I error, you say that the batteries are bad when they aren't, and most likely the manufacturer will sue you. You want to avoid this. You might pick $\alpha$ to be 0.01. This way you have a small chance of making a type I error. Of course this means you have more of a chance of making a type II error. No big deal, right? What if the batteries are used in pacemakers, and you tell the person that their pacemaker's batteries are good for 500 days when they actually last less? That might be bad. If you make a type II error, you say that the batteries do last 500 days when they last less, and then you have the possibility of killing someone. You certainly do not want to do this. In this case you might want to pick $\alpha$ as 0.10. If both errors are equally bad, then pick $\alpha$ as 0.05.

The above discussion is why the choice of $\alpha$ depends on what you are researching. As the researcher, you are the one that needs to decide what $\alpha$ level to use based on your analysis of the consequences of making each error.

Note

If a type I error is really bad, then pick $\alpha$ = 0.01.
If a type II error is really bad, then pick $\alpha$ = 0.10.
If neither error is bad, or both are equally bad, then pick $\alpha$ = 0.05.

The main thing is to always pick the $\alpha$ before you collect the data and start the test.

The above discussion was long, but it is really important information. If you don't know what the errors of the test are about, then there really is no point in making conclusions with the tests. Make sure you understand what the two errors are and what the probabilities are for them.

Now it is time to go back to the example and put this all together. This is the basic structure of testing a hypothesis, usually called a hypothesis test. Since this one has a test statistic involving z, it is also called a z-test. And since there is only one sample, it is usually called a one-sample z-test.

Example $2$: battery example revisited

1. State the random variable and the parameter in words.

2. State the null and alternative hypotheses and the level of significance.

3. State and check the assumptions for a hypothesis test.

1. A random sample of size n is taken.

2. The population standard deviation is known.

3.
The sample size is at least 30 or the population of the random variable is normally distributed.

4. Find the sample statistic, test statistic, and p-value.

5. Conclusion

6. Interpretation

Solution

1. x = life of an XJ35 battery

$\mu$ = mean life of an XJ35 battery

2. $H_{o} : \mu=500$ days

$H_{A} : \mu<500$ days

$\alpha = 0.10$ (from the above discussion about consequences)

3. Every hypothesis test has some assumptions that must be met to make sure that the results of the test are valid. The assumptions are different for each test. This test has the following assumptions.

1. This occurred in this example, since it was stated that a random sample of 30 battery lives was taken.

2. This is true, since it was given in the problem.

3. The sample size was 30, so this condition is met.

4. The test statistic depends on how many samples there are, what parameter you are testing, and the assumptions that need to be checked. In this case, there is one sample and you are testing the mean. The assumptions were checked above.

Sample statistic: $\overline{x} = 490$

Test statistic: $z=\dfrac{\overline{x}-\mu_{o}}{\sigma / \sqrt{n}}=\dfrac{490-500}{25 / \sqrt{30}}=-2.19$

p-value:

Using the TI-83/84: $P(\overline{x}<490 | \mu=500)=\text { normalcdf }(-1 \mathrm{E} 99,490,500,25 / \sqrt{30}) \approx 0.0142$

Using R: $P(\overline{x}<490 | \mu=500)=\operatorname{pnorm}(490,500,25 / \operatorname{sqrt}(30)) \approx 0.0142$

5. Now what? Well, this p-value is 0.0142. This is a lot smaller than the amount of error you would accept in the problem, $\alpha$ = 0.10. That means that finding a sample mean less than 490 days is unlikely to happen if $H_{o}$ is true. This should make you think that $H_{o}$ is not true. You should reject $H_{o}$.

Note

In fact, in general: Reject $H_{o}$ if the p-value < $\alpha$, and fail to reject $H_{o}$ if the p-value $\geq \alpha$.

6. Since you rejected $H_{o}$, what does this mean in the real world? That is what goes in the interpretation. Since you rejected the claim by the manufacturer that the mean life of the batteries is 500 days, you now can believe that your hypothesis was correct. In other words, there is enough evidence to show that the mean life of the battery is less than 500 days.

Now that you know that the batteries last less than 500 days, should you cancel the contract? Statistically, there is evidence that the batteries do not last as long as the manufacturer says they should. However, based on this sample the batteries last only about ten days less on average. There may not be practical significance in this case. Ten days does not seem like a large difference. In reality, if the batteries are used in pacemakers, then you would probably tell the patient to have the batteries replaced every year. You have a large buffer whether the batteries last 490 days or 500 days. It seems that it might not be worth it to break the contract over ten days. What if the ten days were practically significant? Are there any other things you should consider? You might look at the business relationship with the manufacturer. You might also look at how much it would cost to find a new manufacturer. These are also questions to consider before making any changes. What this discussion should show you is that just because a hypothesis has statistical significance does not mean it has practical significance. The hypothesis test is just one part of a research process. There are other pieces that you need to consider. That's it. That is what a hypothesis test looks like.
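If you want to bundle the calculation steps into one place, here is a small R sketch. It is my own illustration, not the textbook's; the function name one_sample_z_test and its arguments are hypothetical, and it assumes the population standard deviation $\sigma$ is known, exactly as in the z-test above. It also shows how the p-value would change for a right-tailed or two-tailed alternative, which the six-step summary below discusses.

# Sketch (not from the text): one-sample z-test with sigma known.
one_sample_z_test <- function(xbar, mu0, sigma, n,
                              alternative = c("less", "greater", "two.sided")) {
  alternative <- match.arg(alternative)
  z <- (xbar - mu0) / (sigma / sqrt(n))         # test statistic
  p <- switch(alternative,
              less      = pnorm(z),             # left-tailed test
              greater   = 1 - pnorm(z),         # right-tailed test
              two.sided = 2 * pnorm(-abs(z)))   # two-tailed test
  list(z = z, p.value = p)
}

# Battery example: xbar = 490, mu0 = 500, sigma = 25, n = 30, left-tailed test
one_sample_z_test(490, 500, 25, 30, alternative = "less")
# z is about -2.19 and the p-value about 0.014, so reject Ho at alpha = 0.10

Comparing the returned p-value to the chosen $\alpha$ is step 5 of the recipe; the interpretation in step 6 is still up to you.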
All hypothesis tests are done with the same six steps. Those general six steps are outlined below. 1. State the random variable and the parameter in words. This is where you are defining what the unknowns are in this problem. x = random variable $\mu$ = mean of random variable, if the parameter of interest is the mean. There are other parameters you can test, and you would use the appropriate symbol for that parameter. 2. State the null and alternative hypotheses and the level of significance $H_{o} : \mu=\mu_{o}$, where $\mu_{o}$ is the known mean $H_{A} : \mu<\mu_{o}$ $H_{A} : \mu>\mu_{o}$, use the appropriate one for your problem $H_{A} : \mu \neq \mu_{o}$ Also, state your $\alpha$ level here. 3. State and check the assumptions for a hypothesis test. Each hypothesis test has its own assumptions. They will be stated when the different hypothesis tests are discussed. 4. Find the sample statistic, test statistic, and p-value. This depends on what parameter you are working with, how many samples, and the assumptions of the test. The p-value depends on your $H_{A}$. If you are doing the $H_{A}$ with the less than, then it is a left-tailed test, and you find the probability of being in that left tail. If you are doing the $H_{A}$ with the greater than, then it is a right-tailed test, and you find the probability of being in the right tail. If you are doing the $H_{A}$ with the not equal to, then you are doing a two-tail test, and you find the probability of being in both tails. Because of symmetry, you could find the probability in one tail and double this value to find the probability in both tails. 5. Conclusion This is where you write reject $H_{o}$ or fail to reject $H_{o}$. The rule is: if the p-value < $\alpha$, then reject $H_{o}$. If the p-value $\geq \alpha$, then fail to reject $H_{o}$. 6. Interpretation This is where you interpret in real world terms the conclusion to the test. The conclusion for a hypothesis test is that you either have enough evidence to show $H_{A}$ is true, or you do not have enough evidence to show $H_{A}$ is true. Sorry, one more concept about the conclusion and interpretation. First, the conclusion is that you reject $H_{o}$ or you fail to reject $H_{o}$. Why was it said like this? It is because you never accept the null hypothesis. If you wanted to accept the null hypothesis, then why do the test in the first place? In the interpretation, you either have enough evidence to show $H_{A}$ is true, or you do not have enough evidence to show $H_{A}$ is true. You wouldn’t want to go to all this work and then find out you wanted to accept the claim. Why go through the trouble? You always want to show that the alternative hypothesis is true. Sometimes you can do that and sometimes you can’t. It doesn’t mean you proved the null hypothesis; it just means you can’t prove the alternative hypothesis. Here is an example to demonstrate this. Example $3$ conclusion in hypothesis tests In the U.S. court system a jury trial could be set up as a hypothesis test. To really help you see how this works, let’s use OJ Simpson as an example. In the court system, a person is presumed innocent until he/she is proven guilty, and this is your null hypothesis. OJ Simpson was a football player in the 1970s. In 1994 his ex-wife and her friend were killed. OJ Simpson was accused of the crime, and in 1995 the case was tried. 
The prosecutors wanted to prove OJ was guilty of killing his wife and her friend, and that is the alternative hypothesis.

Solution

$H_{o}$: OJ is innocent of killing his wife and her friend

$H_{A}$: OJ is guilty of killing his wife and her friend

In this case, a verdict of not guilty was given. That does not mean that he is innocent of this crime. It means there was not enough evidence to prove he was guilty. Many people believe that OJ was guilty of this crime, but the jury did not feel that the evidence presented was enough to show there was guilt. The verdict in a jury trial is always guilty or not guilty! The same is true in a hypothesis test. There is either enough or not enough evidence to show that the alternative hypothesis is true. It is not that you proved the null hypothesis true.

When identifying hypotheses, it is important to state your random variable and the appropriate parameter you want to make a decision about. If you count something, then the random variable is the number of whatever you counted, and the parameter is the proportion of what you counted. If the random variable is something you measured, then the parameter is the mean of what you measured. (Note: there are other parameters you can calculate, and some analysis of those will be presented in later chapters.)

Example $4$: stating hypotheses

Identify the hypotheses necessary to test the following statements:

1. The average salary of a teacher is more than \$30,000.
2. The proportion of students who like math is less than 10%.
3. The average age of students in this class differs from 21.

Solution

a. x = salary of a teacher

$\mu$ = mean salary of a teacher

The guess is that $\mu>\$ 30,000$ and that is the alternative hypothesis. The null hypothesis has the same parameter and number with an equal sign.

$H_{o} : \mu=\$ 30,000$
$H_{A} : \mu>\$ 30,000$

b. x = number of students who like math

p = proportion of students who like math

The guess is that p < 0.10 and that is the alternative hypothesis.

$H_{o} : p=0.10$
$H_{A} : p<0.10$

c. x = age of students in this class

$\mu$ = mean age of students in this class

The guess is that $\mu \neq 21$ and that is the alternative hypothesis.

$H_{o} : \mu=21$
$H_{A} : \mu \neq 21$

Example $5$: Stating Type I and II Errors and Picking Level of Significance

1. The plant-breeding department at a major university developed a new hybrid raspberry plant called YumYum Berry. Based on research data, the claim is made that from the time shoots are planted 90 days on average are required to obtain the first berry with a standard deviation of 9.2 days. A corporation that is interested in marketing the product tests 60 shoots by planting them and recording the number of days before each plant produces its first berry. The sample mean is 92.3 days. The corporation wants to know if the mean number of days is more than the 90 days claimed. State the type I and type II errors in terms of this problem, consequences of each error, and state which level of significance to use.

2. A concern was raised in Australia that the percentage of deaths of Aboriginal prisoners was higher than the percent of deaths of non-indigenous prisoners, which is 0.27%. State the type I and type II errors in terms of this problem, consequences of each error, and state which level of significance to use.

Solution

a.
x = time to first berry for a YumYum Berry plant

$\mu$ = mean time to first berry for a YumYum Berry plant

$H_{o} : \mu=90$
$H_{A} : \mu>90$

Type I error: If the corporation makes a type I error, then they will say that the plants take longer than 90 days to produce their first berry when they don't. They probably will not want to market the plants if they think the plants will take longer. They will not market them even though in reality the plants do produce in 90 days. They may have a loss of future earnings, but that is all.

Type II error: The corporation does not say that the plants take longer than 90 days to produce when they actually do take longer. Most likely they will market the plants. The plants will take longer, so customers might get upset and the company would get a bad reputation. This would be really bad for the company.

Level of significance: It appears that the corporation would not want to make a type II error. Pick a 10% level of significance, $\alpha = 0.10$.

b. x = number of Aboriginal prisoners who have died

p = proportion of Aboriginal prisoners who have died

$H_{o} : p=0.27 \%$
$H_{A} : p>0.27 \%$

Type I error: Rejecting that the proportion of Aboriginal prisoners who died was 0.27%, when in fact it was 0.27%. This would mean you would say there is a problem when there isn't one. You could anger the Aboriginal community, and spend time and energy researching something that isn't a problem.

Type II error: Failing to reject that the proportion of Aboriginal prisoners who died was 0.27%, when in fact it is higher than 0.27%. This would mean that you wouldn't think there was a problem with Aboriginal prisoners dying when there really is a problem. You risk causing deaths when there could be a way to avoid them.

Level of significance: It appears that both errors may be issues in this case. You wouldn't want to anger the Aboriginal community when there isn't an issue, and you wouldn't want people to die when there may be a way to stop it. It may be best to pick a 5% level of significance, $\alpha = 0.05$.

Note

Hypothesis testing is really easy if you follow the same recipe every time. The only differences in the various problems are the assumptions of the test and the test statistic you calculate so you can find the p-value. Do the same steps, in the same order, with the same words, every time, and these problems become very easy.

Homework

Exercise $1$

For the problems in this section, a question is being asked. This is to help you understand what the hypotheses are. You are not to run any hypothesis tests and come up with any conclusions in this section.

1. Eyeglassomatic manufactures eyeglasses for different retailers. They test to see how many defective lenses they made in a given time period and found that 11% of all lenses had defects of some type. Looking at the type of defects, they found in a three-month time period that out of 34,641 defective lenses, 5865 were due to scratches. Are there more defects from scratches than from all other causes? State the random variable, population parameter, and hypotheses.

2. According to the February 2008 Federal Trade Commission report on consumer fraud and identity theft, 23% of all complaints in 2007 were for identity theft. In that year, Alaska had 321 complaints of identity theft out of 1,432 consumer complaints ("Consumer fraud and," 2008). Does this data provide enough evidence to show that Alaska had a lower proportion of identity theft than 23%?
State the random variable, population parameter, and hypotheses. 3. The Kyoto Protocol was signed in 1997, and required countries to start reducing their carbon emissions. The protocol became enforceable in February 2005. In 2004, the mean CO2 emission was 4.87 metric tons per capita. Is there enough evidence to show that the mean CO2 emission is lower in 2010 than in 2004? State the random variable, population parameter, and hypotheses. 4. The FDA regulates that fish that is consumed is allowed to contain 1.0 mg/kg of mercury. In Florida, bass fish were collected in 53 different lakes to measure the amount of mercury in the fish. The data for the average amount of mercury in each lake is in Example $5$ ("Multi-disciplinary niser activity," 2013). Do the data provide enough evidence to show that the fish in Florida lakes has more mercury than the allowable amount? State the random variable, population parameter, and hypotheses. 5. Eyeglassomatic manufactures eyeglasses for different retailers. They test to see how many defective lenses they made in a given time period and found that 11% of all lenses had defects of some type. Looking at the type of defects, they found in a three-month time period that out of 34,641 defective lenses, 5865 were due to scratches. Are there more defects from scratches than from all other causes? State the type I and type II errors in this case, consequences of each error type for this situation from the perspective of the manufacturer, and the appropriate alpha level to use. State why you picked this alpha level. 6. According to the February 2008 Federal Trade Commission report on consumer fraud and identity theft, 23% of all complaints in 2007 were for identity theft. In that year, Alaska had 321 complaints of identity theft out of 1,432 consumer complaints ("Consumer fraud and," 2008). Does this data provide enough evidence to show that Alaska had a lower proportion of identity theft than 23%? State the type I and type II errors in this case, consequences of each error type for this situation from the perspective of the state of Arizona, and the appropriate alpha level to use. State why you picked this alpha level. 7. The Kyoto Protocol was signed in 1997, and required countries to start reducing their carbon emissions. The protocol became enforceable in February 2005. In 2004, the mean CO2 emission was 4.87 metric tons per capita. Is there enough evidence to show that the mean CO2 emission is lower in 2010 than in 2004? State the type I and type II errors in this case, consequences of each error type for this situation from the perspective of the agency overseeing the protocol, and the appropriate alpha level to use. State why you picked this alpha level. 8. The FDA regulates that fish that is consumed is allowed to contain 1.0 mg/kg of mercury. In Florida, bass fish were collected in 53 different lakes to measure the amount of mercury in the fish. The data for the average amount of mercury in each lake is in Example $5$ ("Multi-disciplinary niser activity," 2013). Do the data provide enough evidence to show that the fish in Florida lakes has more mercury than the allowable amount? State the type I and type II errors in this case, consequences of each error type for this situation from the perspective of the FDA, and the appropriate alpha level to use. State why you picked this alpha level. Answer 1. $H_{o} : p=0.11, H_{A} : p>0.11$ 3. $H_{o} : \mu=4.87 \text { metric tons per capita, } H_{A} : \mu<4.87 \text { metric tons per capita }$ 5. See solutions 7. 
See solutions
There are many different parameters that you can test. There is a test for the mean, such as was introduced with the z-test. There is also a test for the population proportion, p. This is where you might be curious if the proportion of students who smoke at your school is lower than the proportion in your area. Or you could question if the proportion of accidents caused by teenage drivers who do not have a drivers' education class is more than the national proportion.

To test a population proportion, there are a few things that need to be defined first. Usually, Greek letters are used for parameters and Latin letters for statistics. When talking about proportions, it makes sense to use p for proportion. The Greek letter for p is $\pi$, but that is too confusing to use. Instead, it is best to use p for the population proportion. That means that a different symbol is needed for the sample proportion. The convention is to use $\hat{p}$, known as p-hat. This way you know that p is the population proportion, and that $\hat{p}$ is the sample proportion related to it. Now, proportion tests are about looking for the percentage of individuals who have a particular attribute. You are really looking for the number of successes that happen. Thus, a proportion test involves a binomial distribution.

Hypothesis Test for One Population Proportion (1-Prop Test)

1. State the random variable and the parameter in words.

x = number of successes

p = proportion of successes

2. State the null and alternative hypotheses and the level of significance.

$H_{o} : p=p_{o}$, where $p_{o}$ is the known proportion

$H_{A} : p<p_{o}$

$H_{A} : p>p_{o}$, use the appropriate one for your problem

$H_{A} : p \neq p_{o}$

Also, state your $\alpha$ level here.

3. State and check the assumptions for a hypothesis test.

1. A simple random sample of size n is taken.

2. The conditions for the binomial distribution are satisfied.

3. To determine the sampling distribution of $\hat{p}$, you need to show that $n p \geq 5$ and $n q \geq 5$, where $q=1-p$. If this requirement is true, then the sampling distribution of $\hat{p}$ is well approximated by a normal curve.

4. Find the sample statistic, test statistic, and p-value.

Sample proportion: $\hat{p}=\dfrac{x}{n}=\dfrac{\# \text { of successes }}{\# \text { of trials }}$

Test statistic: $z=\dfrac{\hat{p}-p}{\sqrt{\dfrac{p q}{n}}}$

p-value:

TI-83/84: Use normalcdf(lower limit, upper limit, 0, 1).

Note: if $H_{A} : p<p_{o}$, then the lower limit is $-1 E 99$ and the upper limit is your test statistic. If $H_{A} : p>p_{o}$, then the lower limit is your test statistic and the upper limit is $1 E 99$. If $H_{A} : p \neq p_{o}$, then find the p-value for $H_{A} : p<p_{o}$, and multiply by 2.

R: Use pnorm(z, 0, 1).

Note: if $H_{A} : p<p_{o}$, then you can use pnorm directly. If $H_{A} : p>p_{o}$, then you have to find pnorm and subtract it from 1. If $H_{A} : p \neq p_{o}$, then find the p-value for $H_{A} : p<p_{o}$, and multiply by 2.

5. Conclusion

This is where you write reject $H_{o}$ or fail to reject $H_{o}$. The rule is: if the p-value < $\alpha$, then reject $H_{o}$. If the p-value $\geq \alpha$, then fail to reject $H_{o}$.

6. Interpretation

This is where you interpret in real world terms the conclusion to the test. The conclusion for a hypothesis test is that you either have enough evidence to show $H_{A}$ is true, or you do not have enough evidence to show $H_{A}$ is true.
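The six steps above translate directly into a few lines of R. The sketch below is not from the text; the object names are arbitrary, and it uses the same normal-approximation formula as step 4 rather than R's built-in prop.test (which the examples that follow discuss and which adds a continuity correction). The numbers are the ones used in Example $1$ below.

# Sketch in R (not from the text): one-proportion z-test by the formula in step 4,
# using 51 deaths out of 14,495 Aboriginal prisoners, null proportion 0.0027,
# right-tailed test.
x  <- 51
n  <- 14495
p0 <- 0.0027
phat <- x / n                                   # sample proportion
z    <- (phat - p0) / sqrt(p0 * (1 - p0) / n)   # test statistic, about 1.9
p.value <- 1 - pnorm(z)                         # right-tail p-value, about 0.029
c(phat = phat, z = z, p.value = p.value)

R's prop.test(51, 14495, 0.0027, alternative = "greater") tests the same hypotheses but applies a continuity correction, so its p-value differs slightly; the text returns to this in Example $2$.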
Example $1$: hypothesis test for one proportion using the formula

A concern was raised in Australia that the percentage of deaths of Aboriginal prisoners was higher than the percent of deaths of non-Aboriginal prisoners, which is 0.27%. A sample of six years (1990-1995) of data was collected, and it was found that out of 14,495 Aboriginal prisoners, 51 died ("Indigenous deaths in," 1996). Do the data provide enough evidence to show that the proportion of deaths of Aboriginal prisoners is more than 0.27%?

1. State the random variable and the parameter in words.
2. State the null and alternative hypotheses and the level of significance.
3. State and check the assumptions for a hypothesis test.
4. Find the sample statistic, test statistic, and p-value.
5. Conclusion
6. Interpretation

Solution

1. x = number of Aboriginal prisoners who die

p = proportion of Aboriginal prisoners who die

2. $H_{o} : p=0.0027$

$H_{A} : p>0.0027$

Example $5$b argued that $\alpha =0.05$.

3.

1. A simple random sample of 14,495 Aboriginal prisoners was taken. However, the sample was not a random sample, since it was data from six years. It is the numbers for all prisoners in these six years, but the six years were not picked at random. Unless there was something special about the six years that were chosen, the sample is probably a representative sample. This assumption is probably met.

2. There are 14,495 prisoners in this case. The prisoners are all Aboriginal, so you are not mixing Aboriginal with non-Aboriginal prisoners. There are only two outcomes: either the prisoner dies or doesn't. The chance that one prisoner dies over another may not be constant, but if you consider all prisoners the same, then it may be close to the same probability. Thus the conditions for the binomial distribution are satisfied.

3. In this case p = 0.0027 and n = 14,495. $n p=14495(0.0027) \approx 39 \geq 5$ and $n q=14495(1-0.0027) \approx 14456 \geq 5$. So the sampling distribution for $\hat{p}$ is a normal distribution.

4. Sample proportion:

x = 51, n = 14495

$\hat{p}=\dfrac{x}{n}=\dfrac{51}{14495} \approx 0.003518$

Test statistic:

$z=\dfrac{\hat{p}-p}{\sqrt{\dfrac{p q}{n}}}=\dfrac{0.003518-0.0027}{\sqrt{\dfrac{0.0027(1-0.0027)}{14495}}} \approx 1.8979$

p-value:

TI-83/84: p-value = $P(z>1.8979)=\text { normalcdf }(1.8979,1 E 99,0,1) \approx 0.029$

R: p-value = $P(z>1.8979)=1-\text { pnorm }(1.8979,0,1) \approx 0.029$

5. Since the p-value < 0.05, reject $H_{o}$.

6. There is enough evidence to show that the proportion of deaths of Aboriginal prisoners is more than for non-Aboriginal prisoners.

Example $2$: hypothesis test for one proportion using technology

A researcher who is studying the effects of income levels on breastfeeding of infants hypothesizes that countries where the income level is lower have a higher rate of infant breastfeeding than higher income countries. It is known that in Germany, considered a high-income country by the World Bank, 22% of all babies are breastfed. In Tajikistan, considered a low-income country by the World Bank, researchers found that in a random sample of 500 new mothers, 125 were breastfeeding their infant. At the 5% level of significance, does this show that low-income countries have a higher incidence of breastfeeding?

1. State your random variable and the parameter in words.
2. State the null and alternative hypotheses and the level of significance.
3. State and check the assumptions for a hypothesis test.

4.
Find the sample statistic, test statistic, and p-value. 5. Conclusion 6. Interpretation Solution 1. x = number of women who breastfeed in a low-income country p = proportion of women who breastfeed in a low-income country 2. $\begin{array}{l}{H_{o} : p=0.22} \ {H_{A} : p>0.22} \ {\alpha=0.05}\end{array}$ 3. 1. A simple random sample of 500 breastfeeding habits of women in a low-income country was taken as was stated in the problem. 2. There were 500 women in the study. The women are considered identical, though they probably have some differences. There are only two outcomes, either the woman breastfeeds or she doesn't. The probability of a woman breastfeeding is probably not the same for each woman, but it is probably not very different for each woman. The conditions for the binomial distribution are satisfied. 3. In this case, n = 500 and p = 0.22. $n p=500(0.22)=110 \geq 5$ and $n q=500(1-0.22)=390 \geq 5$, so the sampling distribution of $\hat{p}$ is well approximated by a normal curve. 4. This time, all calculations will be done with technology. On the TI-83/84 calculator, go into the STAT menu, then arrow over to TESTS. This test is a 1-propZTest. Then type in the information just as shown in Figure $1$. Once you press Calculate, you will see the results as in Figure $2$. The z in the results is the test statistic. The p = 0.052683219 is the p-value, and the $\hat{p}=0.25$ is the sample proportion. The p-value is approximately 0.053. On R, the command is prop.test(x, n, po, alternative = "less" or "greater"), where po is what $\mathrm{H}_{\mathrm{o}}$ says p equals, and you use less if your $\mathrm{H}_{\mathrm{A}}$ is less and greater if your $\mathrm{H}_{\mathrm{A}}$ is greater. If your $\mathrm{H}_{\mathrm{A}}$ is not equal to, then leave off the alternative statement. So for this example, the command would be prop.test(125, 500, .22, alternative = "greater") 1-sample proportions test with continuity correction data: 125 out of 500, null probability 0.22 X-squared = 2.4505, df = 1, p-value = 0.05874 alternative hypothesis: true p is greater than 0.22 95 percent confidence interval: 0.218598 1.000000 sample estimates: p 0.25 Note R applies a continuity correction that the formula and the TI-83/84 calculator do not. You can add correct = FALSE to the prop.test command to turn the continuity correction off (see the short R sketch after the homework below), but it is generally better to leave it on. Also, R does not report the z test statistic (it gives a chi-squared statistic instead), so you don't need to worry about matching z. It does give a p-value that is slightly off from the formula and the calculator due to the continuity correction. p-value = 0.05874 5. Since the p-value is more than 0.05, you fail to reject $H_{o}$. 6. There is not enough evidence to show that the proportion of women who breastfeed in low-income countries is more than in high-income countries. Notice, the conclusion is that there wasn't enough evidence to show what $H_{A}$ said. The conclusion was not that you proved $H_{o}$ true. There are many reasons why you can't say that $H_{o}$ is true. It could be that the countries you chose were not very representative of what truly happens. If you instead looked at all high-income countries and compared them to low-income countries, you might have different results. It could also be that the sample you collected in the low-income country was not representative. It could also be that income level is not an indication of breastfeeding habits. There could be other factors involved. This is why you can't say that you have proven $H_{o}$ is true. 
There are too many other factors that could be the reason that you failed to reject $H_{o}$. Homework Exercise $1$ In each problem show all steps of the hypothesis test. If some of the assumptions are not met, note that the results of the test may not be correct and then continue the process of the hypothesis test. 1. Eyeglassomatic manufactures eyeglasses for different retailers. They test to see how many defective lenses they made in a given time period and found that 11% of all lenses had defects of some type. Looking at the type of defects, they found in a three-month time period that out of 34,641 defective lenses, 5865 were due to scratches. Are there more defects from scratches than from all other causes? Use a 1% level of significance. 2. In July of 1997, Australians were asked if they thought unemployment would increase, and 47% thought that it would increase. In November of 1997, they were asked again. At that time 284 out of 631 said that they thought unemployment would increase ("Morgan gallup poll," 2013). At the 5% level, is there enough evidence to show that the proportion of Australians in November 1997 who believe unemployment would increase is less than the proportion who felt it would increase in July 1997? 3. According to the February 2008 Federal Trade Commission report on consumer fraud and identity theft, 23% of all complaints in 2007 were for identity theft. In that year, Arkansas had 1,601 complaints of identity theft out of 3,482 consumer complaints ("Consumer fraud and," 2008). Do these data provide enough evidence to show that Arkansas had a higher proportion of identity theft than 23%? Test at the 5% level. 4. According to the February 2008 Federal Trade Commission report on consumer fraud and identity theft, 23% of all complaints in 2007 were for identity theft. In that year, Alaska had 321 complaints of identity theft out of 1,432 consumer complaints ("Consumer fraud and," 2008). Do these data provide enough evidence to show that Alaska had a lower proportion of identity theft than 23%? Test at the 5% level. 5. In 2001, the Gallup poll found that 81% of American adults believed that there was a conspiracy in the death of President Kennedy. In 2013, the Gallup poll asked 1,039 American adults if they believe there was a conspiracy in the assassination, and found that 634 believe there was a conspiracy ("Gallup news service," 2013). Do the data show that the proportion of Americans who believe in this conspiracy has decreased? Test at the 1% level. 6. In 2008, there were 507 children in Arizona out of 32,601 who were diagnosed with Autism Spectrum Disorder (ASD) ("Autism and developmental," 2008). Nationally, 1 in 88 children are diagnosed with ASD ("CDC features -," 2013). Is there sufficient data to show that the incidence of ASD is higher in Arizona than nationally? Test at the 1% level. Answer For all hypothesis tests, just the conclusion is given. See solutions for the entire answer. 1. Reject Ho. 3. Reject Ho. 5. Reject Ho.
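As a follow-up to the note about the continuity correction in Example 2, here is a short R sketch showing the same prop.test command with and without the correction. The correct argument is a standard part of prop.test, and the numbers are the ones from Example 2; the approximate p-values in the comments are the ones reported in the text.

# Example 2 again: 125 breastfeeding mothers out of 500, Ho: p = 0.22, HA: p > 0.22
prop.test(125, 500, 0.22, alternative = "greater")                   # default: continuity correction, p-value about 0.059
prop.test(125, 500, 0.22, alternative = "greater", correct = FALSE)  # no correction, close to the formula/TI-83/84 value of about 0.053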
It is time to go back to look at the test for the mean that was introduced in section 7.1 called the z-test. In the example, you knew what the population standard deviation, $\sigma$, was. What if you don't know $\sigma$? You could just use the sample standard deviation, s, as an approximation of $\sigma$. That means the test statistic is now $\dfrac{\overline{x}-\mu}{s / \sqrt{n}}$. Great, now you can go and find the p-value using the normal curve. Or can you? Is this new test statistic normally distributed? Actually, it is not. How is it distributed? A man named W. S. Gosset figured out what this distribution is and called it the Student's t-distribution. There are some assumptions that must be made for this formula to be a Student's t-distribution. These are outlined in the following theorem. Note: the t-distribution is called the Student's t-distribution because "Student" is the pen name Gosset published under. He could not publish under his own name because his employer, Guinness, did not want competitors knowing they had a chemist working for them. It is not called the Student's t-distribution because it is only used by students. Theorem: If the following assumptions are met 1. A random sample of size n is taken. 2. The distribution of the random variable is normal or the sample size is 30 or more. Then the distribution of $t=\dfrac{\overline{x}-\mu}{s / \sqrt{n}}$ is a Student's t-distribution with $n-1$ degrees of freedom. Explanation of degrees of freedom: Recall the formula for sample standard deviation is $s=\sqrt{\dfrac{\sum(x-\overline{x})^{2}}{n-1}}$. Notice the denominator is n - 1. This is the same as the degrees of freedom. This is no accident. The reason the denominator and the degrees of freedom are both n - 1 comes from how the standard deviation is calculated. Remember, first you take each data value and subtract $\overline{x}$. If you add up all of these new values, you will get 0. This must happen. Since it must happen, for the first n - 1 data values you have "freedom of choice," but for the nth data value you have no freedom to choose. Hence, you have n - 1 degrees of freedom. Another way to think about it is that if you have five people and five chairs, the first four people have a choice of where they are sitting, but the last person does not. They have no freedom of where to sit. Only 5 - 1 = 4 people have freedom of choice. The Student's t-distribution is bell-shaped but more spread out than the normal distribution. There are many t-distributions, one for each different degree of freedom. Here is a graph of the normal distribution and the Student's t-distribution for df = 1 and df = 2. As the degrees of freedom increase, the Student's t-distribution looks more like the normal distribution. To find probabilities for the t-distribution, again technology can do this for you. There are many technologies out there that you can use. On the TI-83/84, the command is in the DISTR menu and is tcdf(. The syntax for this command is tcdf(lower limit, upper limit, df) On R: the command to find the area to the left of a t value is pt(t value, df) Hypothesis Test for One Population Mean (t-Test) 1. State the random variable and the parameter in words. x = random variable $\mu$ = mean of random variable 2. 
State the null and alternative hypotheses and the level of significance $H_{o} : \mu=\mu_{o}$, where $\mu_{o}$ is the known mean $H_{A} : \mu<\mu_{o}$ $H_{A} : \mu>\mu_{o}$, use the appropriate one for your problem $H_{A} : \mu \neq \mu_{o}$ Also, state your $\alpha$ level here. 3. State and check the assumptions for a hypothesis test 1. A random sample of size n is taken. 2. The population of the random variable is normally distributed, though the t-test is fairly robust to the condition if the sample size is large. This means that if this condition isn't met, but your sample size is quite large (over 30), then the results of the t-test are valid. 3. The population standard deviation, $\sigma$, is unknown. 4. Find the sample statistic, test statistic, and p-value Test Statistic: $t=\dfrac{\overline{x}-\mu}{\dfrac{s}{\sqrt{n}}}$ with degrees of freedom df = n - 1 p-value: Using TI-83/84: tcdf(lower limit, upper limit, df) Note If $H_{A} : \mu<\mu_{o}$, then lower limit is $-1 E 99$ and upper limit is your test statistic. If $H_{A} : \mu>\mu_{o}$, then lower limit is your test statistic and the upper limit is $1 E 99$. If $H_{A} : \mu \neq \mu_{o}$, then find the p-value for $H_{A} : \mu<\mu_{o}$, and multiply by 2. Using R: pt(t value, df) Note If $H_{A} : \mu<\mu_{o}$, then the command is pt(t value, df). If $H_{A} : \mu>\mu_{o}$, then the command is 1 - pt(t value, df). If $H_{A} : \mu \neq \mu_{o}$, then find the p-value for $H_{A} : \mu<\mu_{o}$, and multiply by 2. 5. Conclusion This is where you write reject $H_{o}$ or fail to reject $H_{o}$. The rule is: if the p-value < $\alpha$, then reject $H_{o}$. If the p-value $\geq \alpha$, then fail to reject $H_{o}$. 6. Interpretation This is where you interpret in real world terms the conclusion to the test. The conclusion for a hypothesis test is that you either have enough evidence to show $H_{A}$ is true, or you do not have enough evidence to show $H_{A}$ is true. How to check the assumptions of the t-test: In order for the t-test to be valid, the assumptions of the test must be true. Whenever you run a t-test, you must make sure the assumptions are true. You need to check them. Here is how you do this: 1. For the condition that the sample is a random sample, describe how you took the sample. Make sure your sampling technique is random. 2. For the condition that the population of the random variable is normal, remember the process of assessing normality from chapter 6. Note If the assumptions behind this test are not valid, then the conclusions you make from the test are not valid. If you do not have a random sample, that is your fault. Make sure the sample you take is as random as you can make it following sampling techniques from chapter 1. If the population of the random variable is not normal, then take a sample larger than 30. If you cannot afford to do that, or if it is not logistically possible, then you do different tests called non-parametric tests. There is an entire course on non-parametric tests, and they will not be discussed in this book. Example $1$ test of the mean using the formula A random sample of 20 IQ scores of famous people was taken from the website of IQ of Famous People ("IQ of famous," 2013) and a random number generator was used to pick 20 of them. The data are in Table $1$. Do the data provide evidence at the 5% level that the IQ of a famous person is higher than the average IQ of 100? 158 180 150 137 109 225 122 138 145 180 118 118 126 140 165 150 170 105 154 118 Table $1$: IQ Scores of Famous People 1. 
State the random variable and the parameter in words. 2. State the null and alternative hypotheses and the level of significance. 3. State and check the assumptions for a hypothesis test. 4. Find the sample statistic, test statistic, and p-value. 5. Conclusion 6. Interpretation Solution 1. x = IQ score of a famous person $\mu$ = mean IQ score of a famous person 2. $\begin{array}{l}{H_{o} : \mu=100} \ {H_{A} : \mu>100} \ {\alpha=0.05}\end{array}$ 3. 1. A random sample of 20 IQ scores was taken. This was said in the problem. 2. The population of IQ scores is normally distributed. This was shown in Example $2$. 4. Sample Statistic: $\begin{array}{l}{\overline{x}=145.4} \ {s \approx 29.27}\end{array}$ Test Statistic: $t=\dfrac{\overline{x}-\mu}{\dfrac{s}{\sqrt{n}}}=\dfrac{145.4-100}{\dfrac{29.27}{\sqrt{20}}} \approx 6.937$ p-value: df = n - 1 = 20 - 1 = 19 TI-83/84: p-value = $\operatorname{tcdf}(6.937,1 E 99,19)=6.5 \times 10^{-7}$ R: p-value = $1-\text{pt}(6.937,19)=6.5 \times 10^{-7}$ 5. Since the p-value is less than 5%, then reject $H_{o}$. 6. There is enough evidence to show that famous people have a higher IQ than the average IQ of 100. Example $2$ test of the mean using technology In 2011, the average life expectancy for a woman in Europe was 79.8 years. The data in Table $2$ are the life expectancies for men in European countries in 2011 ("WHO life expectancy," 2013). Do the data indicate that men's life expectancy is less than women's? Test at the 1% level. Table $2$: Life Expectancies for Men in European Countries in 2011 1. State the random variable and the parameter in words. 2. State the null and alternative hypotheses and the level of significance. 3. State and check the assumptions for a hypothesis test. 4. Find the sample statistic, test statistic, and p-value. 5. Conclusion 6. Interpretation Solution 1. x = life expectancy for a European man in 2011 $\mu$ = mean life expectancy for European men in 2011 2. $\begin{array}{l}{H_{o} : \mu=79.8 \text { years }} \ {H_{A} : \mu<79.8 \text { years }} \ {\alpha=0.01}\end{array}$ 3. 1. A random sample of 53 life expectancies of European men in 2011 was taken. The data are actually all of the life expectancies for every country that is considered part of Europe by the World Health Organization. However, the information is still sample information since it is only for one year that the data was collected. It may not be a random sample, but that is probably not an issue in this case. 2. The distribution of life expectancies of European men in 2011 is normally distributed. To see if this condition has been met, look at the histogram, number of outliers, and the normal probability plot. (If you wish, you can look at the normal probability plot first. If it doesn't look linear, then you may want to look at the histogram and number of outliers at this point.) The histogram is not bell shaped. Number of outliers: IQR = 79 - 69 = 10, 1.5 * IQR = 15, Q1 - 1.5 * IQR = 69 - 15 = 54, Q3 + 1.5 * IQR = 79 + 15 = 94. Outliers are numbers below 54 and above 94. There are no outliers for this data set. The normal probability plot is not linear. This population does not appear to be normally distributed. This sample is larger than 30, so it is good that the t-test is robust. 4. The calculations will be conducted using technology. On the TI-83/84 calculator, go into STAT and type the data into L1. Then go into STAT and move over to TESTS. Choose T-Test. The setup for the calculator is in Figure $4$. Once you press ENTER on Calculate you will see the result shown in Figure $6$. 
On R, the command is t.test(variable, mu = number in $\mathrm{H}_{0}$, alternative = "less" or "greater"), where mu = what $\mathrm{H}_{0}$ says the mean equals, and you use less if your $\mathrm{H}_{A}$ is less and greater if your $\mathrm{H}_{A}$ is greater. If your $\mathrm{H}_{A}$ is not equal to, then leave off the alternative statement. For this example, the command would be t.test(expectancy, mu=79.8, alternative = "less") One Sample t-test data: expectancy t = -7.7069, df = 52, p-value = 1.853e-10 alternative hypothesis: true mean is less than 79.8 95 percent confidence interval: -Inf 75.05357 sample estimates: mean of x 73.73585 Most of the output you don't need. You need the test statistic and the p-value. The t = -7.707 is the test statistic. The p-value is $1.853 \times 10^{-10}$. 5. Since the p-value is less than 1%, then reject $H_{o}$. 6. There is enough evidence to show that the mean life expectancy for European men in 2011 was less than the mean life expectancy for European women in 2011 of 79.8 years. Homework Exercise $1$ In each problem show all steps of the hypothesis test. If some of the assumptions are not met, note that the results of the test may not be correct and then continue the process of the hypothesis test. 1. The Kyoto Protocol was signed in 1997, and required countries to start reducing their carbon emissions. The protocol became enforceable in February 2005. In 2004, the mean CO2 emission was 4.87 metric tons per capita. Table $3$ contains a random sample of CO2 emissions in 2010 ("CO2 emissions," 2013). Is there enough evidence to show that the mean CO2 emission is lower in 2010 than in 2004? Test at the 1% level. 1.36 1.42 5.93 5.36 0.06 9.11 7.32 7.93 6.72 0.78 1.80 0.20 2.27 0.28 5.86 3.46 1.46 0.14 2.62 0.79 7.48 0.86 7.84 2.87 2.45 Table $3$: CO2 Emissions (in metric tons per capita) in 2010 2. The amount of sugar in a Krispy Kreme glazed donut is 10 g. Many people feel that cereal is a healthier alternative for children over glazed donuts. Table $4$ contains the amount of sugar in a sample of cereal that is geared towards children ("Healthy breakfast story," 2013). Is there enough evidence to show that the mean amount of sugar in children's cereal is more than in a glazed donut? Test at the 5% level. 10 14 12 9 13 13 13 11 12 15 9 10 11 3 6 12 15 12 12 Table $4$: Sugar Amounts in Children's Cereal 3. The FDA regulates that fish that is consumed is allowed to contain at most 1.0 mg/kg of mercury. In Florida, bass fish were collected in 53 different lakes to measure the amount of mercury in the fish. The data for the average amount of mercury in each lake are in Table $5$ ("Multi-disciplinary niser activity," 2013). Do the data provide enough evidence to show that the fish in Florida lakes have more mercury than the allowable amount? Test at the 10% level. 1.23 1.33 0.04 0.44 1.20 0.27 0.48 0.19 0.83 0.81 0.81 0.5 0.49 1.16 0.05 0.15 0.19 0.77 1.08 0.98 0.63 0.56 0.41 0.73 0.34 0.59 0.34 0.84 0.50 0.34 0.28 0.34 0.87 0.56 0.17 0.18 0.19 0.04 0.49 1.10 0.16 0.10 0.48 0.21 0.86 0.52 0.65 0.27 0.94 0.40 0.43 0.25 0.27 Table $5$: Average Mercury Levels (mg/kg) in Fish 4. Stephen Stigler determined in 1977 that the speed of light is 299,710.5 km/sec. In 1882, Albert Michelson collected measurements on the speed of light ("Student t-distribution," 2013). His measurements are given in Table $6$. Is there evidence to show that Michelson's data is different from Stigler's value of the speed of light? Test at the 5% level. 
299883 299816 299778 299796 299682 299711 299611 299599 300051 299781 299578 299796 299774 299820 299772 299696 299573 299748 299748 299797 299851 299809 299723 Table $6$: Speed of Light Measurements (in km/sec) 5. Table $7$ contains pulse rates after running for 1 minute, collected from females who drink alcohol ("Pulse rates before," 2013). The mean pulse rate after running for 1 minute of females who do not drink is 97 beats per minute. Do the data show that the mean pulse rate of females who do drink alcohol is higher than the mean pulse rate of females who do not drink? Test at the 5% level. 176 150 150 115 129 160 120 125 89 132 120 120 68 87 88 72 77 84 92 80 60 67 59 64 88 74 68 Table $7$: Pulse Rates of Women Who Use Alcohol 6. The economic dynamism, which is the index of productive growth in dollars, for countries that are designated by the World Bank as middle-income is given in Table $8$ ("SOCR data 2008," 2013). Countries that are considered high-income have a mean economic dynamism of 60.29. Do the data show that the mean economic dynamism of middle-income countries is less than the mean for high-income countries? Test at the 5% level. 25.8057 37.4511 51.915 43.6952 47.8506 43.7178 58.0767 41.1648 38.0793 37.7251 39.6553 42.0265 48.6159 43.8555 49.1361 61.9281 41.9543 44.9346 46.0521 48.3652 43.6252 50.9866 59.1724 39.6282 33.6074 21.6643 Table $8$: Economic Dynamism of Middle Income Countries 7. In 1999, the average percentage of women who received prenatal care per country was 80.1%. Table $9$ contains the percentage of women receiving prenatal care in 2009 for a sample of countries ("Pregnant woman receiving," 2013). Do the data show that the average percentage of women receiving prenatal care in 2009 is higher than in 1999? Test at the 5% level. 70.08 72.73 74.52 75.79 76.28 76.28 76.65 80.34 80.60 81.90 86.30 87.70 87.76 88.40 90.70 91.50 91.80 92.10 92.20 92.41 92.47 93.00 93.20 93.40 93.63 93.69 93.80 94.30 94.51 95.00 95.80 95.80 96.23 96.24 97.30 97.90 97.95 98.20 99.00 99.00 99.10 99.10 100.00 100.00 100.00 100.00 100.00 Table $9$: Percentage of Women Receiving Prenatal Care 8. Maintaining your balance may get harder as you grow older. A study was conducted to see how steady elderly people are on their feet. They had the subjects stand on a force platform and react to a noise. The force platform then measured how much they swayed forward and backward, and the data are in Table $10$ ("Maintaining balance while," 2013). Do the data show that the elderly sway more than the mean forward sway of younger people, which is 18.125 mm? Test at the 5% level. 19 30 20 19 29 25 21 24 50 Table $10$: Forward/Backward Sway (in mm) of Elderly Subjects Answer For all hypothesis tests, just the conclusion is given. See solutions for the entire answer. 1. Fail to reject Ho. 3. Fail to reject Ho. 5. Fail to reject Ho. 7. Reject Ho. Data Sources: Australian Human Rights Commission, (1996). Indigenous deaths in custody 1989 - 1996. Retrieved from website: www.humanrights.gov.au/public...deaths-custody CDC features - new data on autism spectrum disorders. (2013, November 26). Retrieved from www.cdc.gov/features/countingautism/ Center for Disease Control and Prevention, Prevalence of Autism Spectrum Disorders - Autism and Developmental Disabilities Monitoring Network. (2008). Autism and developmental disabilities monitoring network-2012. Retrieved from website: www.cdc.gov/ncbddd/autism/doc...nityReport.pdf CO2 emissions. (2013, November 19). 
Retrieved from http://data.worldbank.org/indicator/EN.ATM.CO2E.PC Federal Trade Commission, (2008). Consumer fraud and identity theft complaint data: January-December 2007. Retrieved from website: www.ftc.gov/opa/2008/02/fraud.pdf Gallup news service. (2013, November 7-10). Retrieved from www.gallup.com/file/poll/1658...acy_131115.pdf Healthy breakfast story. (2013, November 16). Retrieved from lib.stat.cmu.edu/DASL/Stories...Breakfast.html IQ of famous people. (2013, November 13). Retrieved from http://www.kidsiqtestcenter.com/IQ-famous-people.html Maintaining balance while concentrating. (2013, September 25). Retrieved from http://www.statsci.org/data/general/balaconc.html Morgan Gallup poll on unemployment. (2013, September 26). Retrieved from http://www.statsci.org/data/oz/gallup.html Multi-disciplinary niser activity - mercury in bass. (2013, November 16). Retrieved from http://gozips.uakron.edu/~nmimoto/pa.../MercuryInBass - description.txt Pregnant woman receiving prenatal care. (2013, October 14). Retrieved from http://data.worldbank.org/indicator/SH.STA.ANVC.ZS SOCR data 2008 world countries rankings. (2013, November 16). Retrieved from http://wiki.stat.ucla.edu/socr/index...ntriesRankings Student t-distribution. (2013, November 25). Retrieved from lib.stat.cmu.edu/DASL/Stories/student.html WHO life expectancy. (2013, September 19). Retrieved from www.who.int/gho/mortality_bur...n_trends/en/index.html
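Before leaving this section, here is a short R sketch that reruns Example 1 (the IQ data) both with t.test and with the formula and pt, so the two approaches in this section can be compared. The vector name iq is just a label chosen here; the data and the approximate results in the comments (t about 6.937, p-value about 6.5 x 10^-7) are the ones given in the example.

# IQ scores of famous people from Example 1 of this section
iq <- c(158, 180, 150, 137, 109, 225, 122, 138, 145, 180,
        118, 118, 126, 140, 165, 150, 170, 105, 154, 118)

# built-in test: Ho: mu = 100, HA: mu > 100
t.test(iq, mu = 100, alternative = "greater")

# the same test by the formula
t <- (mean(iq) - 100) / (sd(iq) / sqrt(length(iq)))  # test statistic, about 6.937
1 - pt(t, df = length(iq) - 1)                       # p-value, about 6.5e-07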
In hypothesis tests, the purpose was to make a decision about a parameter, in terms of it being greater than, less than, or not equal to a value. But what if you want to actually know what the parameter is? Then you need to do estimation. There are two types of estimation – point estimators and confidence intervals. 08: Estimation A point estimator is just the statistic that you have calculated previously. As an example, when you wanted to estimate the population mean, $\mu$, the point estimator is the sample mean, $\overline{x}$. To estimate the population proportion, p, you use the sample proportion, $\hat{p}$. In general, if you want to estimate any population parameter, we will call it $\theta$, you use the sample statistic, $\hat{\theta}$. Point estimators are really easy to find, but they have some drawbacks. First, if you have a large sample size, then the estimate is better; but a point estimate by itself doesn't tell you what the sample size was. Also, you don't know how accurate the estimate is. Both of these problems are solved with a confidence interval. Definition $1$ Confidence interval: This is an interval surrounding your parameter, constructed so that the interval has a known probability of containing the true value of the parameter. In general, a confidence interval looks like: $\hat{\theta} \pm E$, where $\hat{\theta}$ is the point estimator and E is the margin of error term that is added to and subtracted from the point estimator, thus making an interval. Interpreting a confidence interval: The statistical interpretation is that the confidence interval has a probability (1 - $\alpha$, where $\alpha$ is the complement of the confidence level) of containing the population parameter. As an example, if you have a 95% confidence interval of 0.65 < p < 0.73, then you would say, "there is a 95% chance that the interval 0.65 to 0.73 contains the true population proportion." This means that if you have 100 intervals, 95 of them will contain the true proportion, and 5 will not. The wrong interpretation is that there is a 95% chance that the true value of p will fall between 0.65 and 0.73. The reason that this interpretation is wrong is that the true value is fixed out there somewhere. You are trying to capture it with this interval. So the probability is the chance that your interval captures the true value, not the chance that the true value falls in the interval. There is also a real world interpretation that depends on the situation. It is where you are telling people what numbers you found the parameter to lie between. So the real world interpretation is where you tell what values your parameter is between. There is no probability attached to this statement. That probability is in the statistical interpretation. The common probabilities used for confidence intervals are 90%, 95%, and 99%. These are known as the confidence level. The confidence level and the alpha level are related. For a two-tailed test, the confidence level is C = 1 - $\alpha$. This is because the $\alpha$ is both tails and the confidence level is the area between the two tails. As an example, for a two-tailed test ($\mathrm{H}_{\mathrm{A}}$ is not equal to) with $\alpha$ equal to 0.10, the confidence level would be 0.90 or 90%. If you have a one-tailed test, then your $\alpha$ is only one tail. Because of symmetry the other tail is also $\alpha$. So you have 2$\alpha$ with both tails. So the confidence level, which is the area between the two tails, is C = 1 - 2$\alpha$. Example $1$ stating the statistical and real world interpretations for a confidence interval 1. 
Suppose a 95% confidence interval for the mean age a woman gets married in 2013 is $26<\mu<28$. State the statistical and real world interpretations of this statement. 2. Suppose a 99% confidence interval for the proportion of Americans who have tried marijuana as of 2013 is $0.35<p<0.41$. State the statistical and real world interpretations of this statement. Solution 1. Statistical Interpretation: There is a 95% chance that the interval $26<\mu<28$ contains the mean age a woman gets married in 2013. Real World Interpretation: The mean age that a woman married in 2013 is between 26 and 28 years of age. 2. Statistical Interpretation: There is a 99% chance that the interval $0.35<p<0.41$ contains the proportion of Americans who have tried marijuana as of 2013. Real World Interpretation: The proportion of Americans who have tried marijuana as of 2013 is between 0.35 and 0.41. One last thing to know about confidence intervals is how the sample size and confidence level affect how wide the interval is. The following discussion demonstrates what happens to the width of the interval as you get more confident. Think about shooting an arrow into the target. Suppose you are really good at that and that you have a 90% chance of hitting the bull's eye. Now the bull's eye is very small. Since you hit the bull's eye approximately 90% of the time, then you probably hit inside the next ring out 95% of the time. You have a better chance of doing this, but the circle is bigger. You probably have a 99% chance of hitting the target, but that is a much bigger circle to hit. You can see, as your confidence in hitting the target increases, the circle you hit gets bigger. The same is true for confidence intervals. This is demonstrated in Figure $1$. The higher level of confidence makes a wider interval. There's a trade-off between width and confidence level. You can be really confident about your answer but your answer will not be very precise. Or you can have a precise answer (small margin of error) but not be very confident about your answer. Now look at how the sample size affects the size of the interval. Suppose Figure $2$ represents confidence intervals calculated at a 95% confidence level. A larger sample size from a representative sample makes the width of the interval narrower. This makes sense. Large samples are closer to the true population so the point estimate is pretty close to the true value. Now you know everything you need to know about confidence intervals except for the actual formula. The formula depends on which parameter you are trying to estimate. For each different situation, you will be given the confidence interval formula for that parameter. Homework Exercise $1$ 1. Suppose you compute a confidence interval with a sample size of 25. What will happen to the confidence interval if the sample size increases to 50? 2. Suppose you compute a 95% confidence interval. What will happen to the confidence interval if you increase the confidence level to 99%? 3. Suppose you compute a 95% confidence interval. What will happen to the confidence interval if you decrease the confidence level to 90%? 4. Suppose you compute a confidence interval with a sample size of 100. What will happen to the confidence interval if the sample size decreases to 80? 5. A 95% confidence interval is 6353 km < $\mu$ < 6384 km, where $\mu$ is the mean diameter of the Earth. State the statistical interpretation. 6. A 95% confidence interval is 6353 km < $\mu$ < 6384 km, where $\mu$ is the mean diameter of the Earth. 
State the real world interpretation. 7. In 2013, Gallup conducted a poll and found a 95% confidence interval of 0.52 < p < 0.60, where p is the proportion of Americans who believe it is the government’s responsibility for health care. Give the real world interpretation. 8. In 2013, Gallup conducted a poll and found a 95% confidence interval of 0.52 < p < 0.60, where p is the proportion of Americans who believe it is the government’s responsibility for health care. Give the statistical interpretation. Answer 1. Narrower 3. Narrower 5. See solutions 7. See solutions
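The width discussion above can be made concrete with a few lines of R. This is only a sketch of the general idea: qnorm (the R command mentioned in the next section for finding critical values) gives the critical value for a two-sided confidence level C, and the margin of error formulas in the following sections multiply a critical value by a quantity that shrinks like 1/sqrt(n), so a higher confidence level widens the interval and a larger sample narrows it. The sample sizes below are made up for illustration.

# critical values for two-sided confidence levels (C = 1 - alpha)
C <- c(0.90, 0.95, 0.99)
qnorm(1 - (1 - C) / 2)   # about 1.645, 1.960, 2.576 -- a bigger C gives a bigger critical value, so a wider interval

# the margin of error has the form (critical value) * (something / sqrt(n)), so it shrinks as n grows
n <- c(25, 50, 100)
1 / sqrt(n)              # the factor that multiplies the critical value gets smaller as n increases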
Suppose you want to estimate the population proportion, p. As an example you may be curious what proportion of students at your school smoke. Or you could wonder what proportion of accidents is caused by teenage drivers who do not have a drivers' education class. Confidence Interval for One Population Proportion (1-Prop Interval) 1. State the random variable and the parameter in words. x = number of successes p = proportion of successes 2. State and check the assumptions for the confidence interval 1. A simple random sample of size n is taken. 2. The conditions for the binomial distribution are satisfied. 3. To determine the sampling distribution of $\hat{p}$, you need to show that $n \hat{p} \geq 5$ and $n \hat{q} \geq 5$, where $\hat{q}=1-\hat{p}$. If this requirement is true, then the sampling distribution of $\hat{p}$ is well approximated by a normal curve. (In reality this is not really true, since the correct assumption deals with p. However, in a confidence interval you do not know p, so you must use $\hat{p}$. This means you just need to show that $x \geq 5$ and $n-x \geq 5$.) 3. Find the sample statistic and the confidence interval Sample Proportion: $\hat{p}=\dfrac{x}{n}=\dfrac{\# \text { of successes }}{\# \text { of trials }}$ Confidence Interval: $\hat{p}-E<p<\hat{p}+E$ Where p = population proportion $\hat{p}$ = sample proportion n = number of sample values E = margin of error $z_{c}$ = critical value $\hat{q}=1-\hat{p}$ $E=z_{c} \sqrt{\dfrac{\hat{p} \hat{q}}{n}}$ 4. Statistical Interpretation: In general this looks like, "there is a C% chance that $\hat{p}-E<p<\hat{p}+E$ contains the true proportion." 5. Real World Interpretation: This is where you state what interval contains the true proportion. The critical value is a value from the normal distribution. Since a confidence interval is found by adding and subtracting a margin of error amount from the sample proportion, and the interval has a probability of containing the true proportion, then you can think of this as the statement $P(\hat{p}-E<p<\hat{p}+E)=C$. You can use the invNorm command on the TI-83/84 calculator or the qnorm command in R to find the critical value. For a given confidence level, the critical value is always the same, so it is easier to just look at table A.1 in the appendix. Example $1$ confidence interval for the population proportion using the formula A concern was raised in Australia that the percentage of deaths of Aboriginal prisoners was higher than the percent of deaths of non-Aboriginal prisoners, which is 0.27%. A sample of six years (1990-1995) of data was collected, and it was found that out of 14,495 Aboriginal prisoners, 51 died ("Indigenous deaths in," 1996). Find a 95% confidence interval for the proportion of Aboriginal prisoners who died. 1. State the random variable and the parameter in words. 2. State and check the assumptions for a confidence interval. 3. Find the sample statistic and the confidence interval. 4. Statistical Interpretation 5. Real World Interpretation Solution 1. x = number of Aboriginal prisoners who die p = proportion of Aboriginal prisoners who die 2. 1. A simple random sample of 14,495 Aboriginal prisoners was taken. However, the sample was not a random sample, since it was data from six years. It is the numbers for all prisoners in these six years, but the six years were not picked at random. Unless there was something special about the six years that were chosen, the sample is probably a representative sample. This assumption is probably met. 2. 
There are 14,495 prisoners in this case. The prisoners are all Aboriginals, so you are not mixing Aboriginal with non-Aboriginal prisoners. There are only two outcomes, either the prisoner dies or doesn't. The chance that one prisoner dies over another may not be constant, but if you consider all prisoners the same, then it may be close to the same probability. Thus the assumptions for the binomial distribution are satisfied. 3. In this case, x = 51 and n - x = 14495 - 51 = 14444 and both are greater than or equal to 5. The sampling distribution for $\hat{p}$ is a normal distribution. 3. Sample Proportion: $\hat{p}=\dfrac{x}{n}=\dfrac{51}{14495} \approx 0.003518$ Confidence Interval: $z_{c}=1.96$, since 95% confidence level $E=z_{c} \sqrt{\dfrac{\hat{p} \hat{q}}{n}}=1.96 \sqrt{\dfrac{0.003518(1-0.003518)}{14495}} \approx 0.000964$ $\hat{p}-E<p<\hat{p}+E$ $0.003518-0.000964<p<0.003518+0.000964$ $0.002554<p<0.004482$ 4. There is a 95% chance that $0.002554<p<0.004482$ contains the proportion of Aboriginal prisoners who died. 5. The proportion of Aboriginal prisoners who died is between 0.0026 and 0.0045. You can also do the calculations for the confidence interval with technology. The following example shows the process on the TI-83/84. Example $2$ confidence interval for the population proportion using technology A researcher studying the effects of income levels on breastfeeding of infants hypothesizes that countries where the income level is lower have a higher rate of infant breastfeeding than higher income countries. It is known that in Germany, considered a high-income country by the World Bank, 22% of all babies are breastfed. In Tajikistan, considered a low-income country by the World Bank, researchers found that, in a random sample of 500 new mothers, 125 were breastfeeding their infants. Find a 90% confidence interval for the proportion of mothers in low-income countries who breastfeed their infants. 1. State your random variable and the parameter in words. 2. State and check the assumptions for a confidence interval. 3. Find the sample statistic and the confidence interval. 4. Statistical Interpretation 5. Real World Interpretation Solution 1. x = number of women who breastfeed in a low-income country p = proportion of women who breastfeed in a low-income country 2. 1. A simple random sample of 500 breastfeeding habits of women in a low-income country was taken as was stated in the problem. 2. There were 500 women in the study. The women are considered identical, though they probably have some differences. There are only two outcomes, either the woman breastfeeds or she doesn't. The probability of a woman breastfeeding is probably not the same for each woman, but it is probably not very different for each woman. The assumptions for the binomial distribution are satisfied. 3. x = 125 and n - x = 500 - 125 = 375 and both are greater than or equal to 5, so the sampling distribution of $\hat{p}$ is well approximated by a normal curve. 3. On the TI-83/84: Go into the STAT menu. Move over to TESTS and choose 1-PropZInt. Once you press Calculate, you will see the results as in Figure $2$. On R: the command is prop.test(x, n, conf.level = C), where C is given in decimal form. 
So for this example, the command is prop.test(125, 500, conf.level = 0.90) 1-sample proportions test with continuity correction data: 125 out of 500, null probability 0.5 X-squared = 124, df = 1, p-value < 2.2e-16 alternative hypothesis: true p is not equal to 0.5 90 percent confidence interval: 0.2185980 0.2841772 sample estimates: p 0.25 Again, R does a continuity correction, so the answer is slightly off from the formula and the TI-83/84 calculator. 0.219 < p < 0.284 4. There is a 90% chance that 0.219 < p < 0.284 contains the proportion of women in low-income countries who breastfeed their infants. 5. The proportion of women in low-income countries who breastfeed their infants is between 0.219 and 0.284. Homework Exercise $1$ In each problem show all steps of the confidence interval. If some of the assumptions are not met, note that the results of the interval may not be correct and then continue the process of the confidence interval. 1. Eyeglassomatic manufactures eyeglasses for different retailers. They test to see how many defective lenses they make. Looking at the type of defects, they found in a three-month time period that out of 34,641 defective lenses, 5865 were due to scratches. Find a 99% confidence interval for the proportion of defects that are from scratches. 2. In November of 1997, Australians were asked if they thought unemployment would increase. At that time 284 out of 631 said that they thought unemployment would increase ("Morgan gallup poll," 2013). Estimate the proportion of Australians in November 1997 who believed unemployment would increase using a 95% confidence interval? 3. According to the February 2008 Federal Trade Commission report on consumer fraud and identity theft, Arkansas had 1,601 complaints of identity theft out of 3,482 consumer complaints ("Consumer fraud and," 2008). Calculate a 90% confidence interval for the proportion of identity theft in Arkansas. 4. According to the February 2008 Federal Trade Commission report on consumer fraud and identity theft, Alaska had 321 complaints of identity theft out of 1,432 consumer complaints ("Consumer fraud and," 2008). Calculate a 90% confidence interval for the proportion of identity theft in Alaska. 5. In 2013, the Gallup poll asked 1,039 American adults if they believe there was a conspiracy in the assassination of President Kennedy, and found that 634 believe there was a conspiracy ("Gallup news service," 2013). Estimate the proportion of American’s who believe in this conspiracy using a 98% confidence interval. 6. In 2008, there were 507 children in Arizona out of 32,601 who were diagnosed with Autism Spectrum Disorder (ASD) ("Autism and developmental," 2008). Find the proportion of ASD in Arizona with a confidence level of 99%. Answer For all confidence intervals, just the interval using technology is given. See solution for the entire answer. 1. 0.1641 < p < 0.1745 3. 0.4458 < p < 0.4739 5. 0.5740 < p < 0.6452
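Here is a minimal R sketch that redoes Example 1 of this section by the formula, so you can compare it with the prop.test approach. The numbers (x = 51, n = 14,495, 95% confidence) and the approximate results in the comments are the ones worked out in the example.

# Example 1 by the formula: 51 deaths out of 14,495 Aboriginal prisoners, 95% confidence
x <- 51; n <- 14495
p_hat <- x / n                               # sample proportion, about 0.003518
z_c <- qnorm(0.975)                          # critical value, about 1.96 for a 95% confidence level
E <- z_c * sqrt(p_hat * (1 - p_hat) / n)     # margin of error, about 0.000964
c(p_hat - E, p_hat + E)                      # the interval, about 0.0026 to 0.0045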
Suppose you want to estimate the mean height of Americans, or you want to estimate the mean salary of college graduates. A confidence interval for the mean would be the way to estimate these means. Confidence Interval for One Population Mean (t-Interval) 1. State the random variable and the parameter in words. x = random variable $\mu$ = mean of random variable 2. State and check the assumptions for the confidence interval 1. A random sample of size n is taken. 2. The population of the random variable is normally distributed, though the t-interval is fairly robust to the assumption if the sample size is large. This means that if this assumption isn't met, but your sample size is quite large (over 30), then the results of the t-interval are valid. 3. Find the sample statistic and confidence interval $\overline{x}-E<\mu<\overline{x}+E$ where $E=t_{c} \dfrac{s}{\sqrt{n}}$ $\overline{x}$ is the point estimator for $\mu$ $t_{c}$ is the critical value where degrees of freedom: df = n - 1 s is the sample standard deviation n is the sample size 4. Statistical Interpretation: In general this looks like, "there is a C% chance that the statement $\overline{x}-E<\mu<\overline{x}+E$ contains the true mean." 5. Real World Interpretation: This is where you state what interval contains the true mean. The critical value is a value from the Student's t-distribution. Since a confidence interval is found by adding and subtracting a margin of error amount from the sample mean, and the interval has a probability of containing the true mean, then you can think of this as the statement $P(\overline{x}-E<\mu<\overline{x}+E)=C$. The critical values are found in table A.2 in the appendix. How to check the assumptions of the confidence interval: In order for the confidence interval to be valid, the assumptions of the test must be true. Whenever you run a confidence interval, you must make sure the assumptions are true. You need to check them. Here is how you do this: 1. For the assumption that the sample is a random sample, describe how you took the sample. Make sure your sampling technique is random. 2. For the assumption that the population is normal, remember the process of assessing normality from chapter 6. Example $1$ confidence interval for the population mean using the formula A random sample of 20 IQ scores of famous people was taken from the website IQ of Famous People ("IQ of famous," 2013), using a random number generator to pick 20 of them. The data are in Table $1$ (this is the same data set that was used in Example $2$). Find a 98% confidence interval for the mean IQ of a famous person. 158 180 150 137 109 225 122 138 145 180 118 118 126 140 165 150 170 105 154 118 Table $1$: IQ Scores of Famous People 1. State the random variable and the parameter in words. 2. State and check the assumptions for a confidence interval. 3. Find the sample statistic and confidence interval. 4. Statistical Interpretation 5. Real World Interpretation Solution 1. x = IQ score of a famous person $\mu$ = mean IQ score of a famous person 2. 1. A random sample of 20 IQ scores was taken. This was stated in the problem. 2. The population of IQ scores is normally distributed. This was shown in Example $2$. 3. Sample Statistic: $\overline{x} = 145.4$ $s \approx 29.27$ Now you need the degrees of freedom, df = n - 1 = 20 - 1 = 19, and the confidence level C, which is 98%. Now go to table A.2, go down the first column to 19 degrees of freedom. Then go over to the column headed with 98%. Thus $t_{c}=2.539$. (See Table $2$.) 
Table $2$: Excerpt From Table A.2 $E=t_{c} \dfrac{s}{\sqrt{n}}=2.539 \dfrac{29.27}{\sqrt{20}} \approx 16.6$ $\overline{x}-E<\mu<\overline{x}+E$ $145.4-16.6<\mu<145.4+16.6$ $128.8<\mu<162$ 4. There is a 98% chance that $128.8<\mu<162$ contains the mean IQ score of a famous person. 5. The mean IQ score of a famous person is between 128.8 and 162. Example $2$ confidence interval for the population mean using technology The data in Table $3$ are the life expectancies for men in European countries in 2011 ("WHO life expectancy," 2013). Find the 99% confidence interval for the mean life expectancy of men in Europe. 73 79 67 78 69 66 78 74 71 74 79 75 77 71 78 78 68 78 78 71 81 79 80 80 62 65 69 68 79 79 79 73 79 79 72 77 67 70 63 82 72 72 77 79 80 80 67 73 73 60 65 79 66 Table $3$: Life Expectancies for Men in European Countries in 2011 1. State the random variable and the parameter in words. 2. State and check the assumptions for a confidence interval. 3. Find the sample statistic and confidence interval. 4. Statistical Interpretation 5. Real World Interpretation Solution 1. x = life expectancy for a European man in 2011 $\mu$ = mean life expectancy for European men in 2011 2. 1. A random sample of 53 life expectancies of European men in 2011 was taken. The data are actually all of the life expectancies for every country that is considered part of Europe by the World Health Organization. However, the information is still sample information since it is only for one year that the data was collected. It may not be a random sample, but that is probably not an issue in this case. 2. The distribution of life expectancies of European men in 2011 is normally distributed. To see if this assumption has been met, look at the histogram, number of outliers, and the normal probability plot. (If you wish, you can look at the normal probability plot first. If it doesn't look linear, then you may want to look at the histogram and number of outliers at this point.) The histogram does not look normally distributed. Number of outliers: IQR = 79 - 69 = 10, 1.5 * IQR = 15, Q1 - 1.5 * IQR = 69 - 15 = 54, Q3 + 1.5 * IQR = 79 + 15 = 94. Outliers are numbers below 54 and above 94. There are no outliers for this data set. The normal probability plot is not linear. This population does not appear to be normally distributed. The t-interval is robust for sample sizes larger than 30, so you can go ahead and calculate the interval. 3. Find the sample statistic and confidence interval On the TI-83/84: Go into the STAT menu, and type the data into L1. Then go into STAT and over to TESTS. Choose TInterval. On R: t.test(variable, conf.level = C), where C is given in decimal form. So for this example it would be t.test(expectancy, conf.level = 0.99) One Sample t-test data: expectancy t = 93.711, df = 52, p-value < 2.2e-16 alternative hypothesis: true mean is not equal to 0 99 percent confidence interval: 71.63204 75.83966 sample estimates: mean of x 73.73585 71.6 years < $\mu$ < 75.8 years 4. There is a 99% chance that 71.6 years < $\mu$ < 75.8 years contains the mean life expectancy of European men. 5. The mean life expectancy of European men is between 71.6 and 75.8 years. Homework Exercise $1$ In each problem show all steps of the confidence interval. If some of the assumptions are not met, note that the results of the interval may not be correct and then continue the process of the confidence interval. 1. The Kyoto Protocol was signed in 1997, and required countries to start reducing their carbon emissions. The protocol became enforceable in February 2005. 
Table $4$ contains a random sample of CO2 emissions in 2010 ("CO2 emissions," 2013). Compute a 99% confidence interval to estimate the mean CO2 emission in 2010. 1.36 1.42 5.93 5.36 0.06 9.11 7.32 7.93 6.72 0.78 1.80 0.20 2.27 0.28 5.86 3.46 1.46 0.14 2.62 0.79 7.48 0.86 7.84 2.87 2.45 Table $4$: CO2 Emissions (metric tons per capita) in 2010 2. Many people feel that cereal is a healthier alternative for children over glazed donuts. Table $5$ contains the amount of sugar in a sample of cereal that is geared towards children ("Healthy breakfast story," 2013). Estimate the mean amount of sugar in children's cereal using a 95% confidence level. 10 14 12 9 13 13 13 11 12 15 9 10 11 3 6 12 15 12 12 Table $5$: Sugar Amounts (g) in Children's Cereal 3. In Florida, bass fish were collected in 53 different lakes to measure the amount of mercury in the fish. The data for the average amount of mercury in each lake are in Table $6$ ("Multi-disciplinary niser activity," 2013). Compute a 90% confidence interval for the mean amount of mercury in fish in Florida lakes. 1.23 1.33 0.04 0.44 1.20 0.27 0.48 0.19 0.83 0.81 0.81 0.5 0.49 1.16 0.05 0.15 0.19 0.77 1.08 0.98 0.63 0.56 0.41 0.73 0.34 0.59 0.34 0.84 0.50 0.34 0.28 0.34 0.87 0.56 0.17 0.18 0.19 0.04 0.49 1.10 0.16 0.10 0.48 0.21 0.86 0.52 0.65 0.27 0.94 0.40 0.43 0.25 0.27 Table $6$: Average Mercury Levels (mg/kg) in Fish 4. In 1882, Albert Michelson collected measurements on the speed of light ("Student t-distribution," 2013). His measurements are given in Table $7$. Find the speed of light value that Michelson estimated from his data using a 95% confidence interval. 299883 299816 299778 299796 299682 299711 299611 299599 300051 299781 299578 299796 299774 299820 299772 299696 299573 299748 299748 299797 299851 299809 299723 Table $7$: Speed of Light Measurements in (km/sec) 5. Table $8$ contains pulse rates after running for 1 minute, collected from females who drink alcohol ("Pulse rates before," 2013). The mean pulse rate after running for 1 minute of females who do not drink is 97 beats per minute. Do the data show that the mean pulse rate of females who do drink alcohol is higher than the mean pulse rate of females who do not drink? Test at the 5% level. 176 150 150 115 129 160 120 125 89 132 120 120 68 87 88 72 77 84 92 80 60 67 59 64 88 74 68 Table $8$: Pulse Rates of Women Who Use Alcohol 6. The economic dynamism, which is the index of productive growth in dollars, for countries that are designated by the World Bank as middle-income is given in Table $9$ ("SOCR data 2008," 2013). Countries that are considered high-income have a mean economic dynamism of 60.29. Do the data show that the mean economic dynamism of middle-income countries is less than the mean for high-income countries? Test at the 5% level. 25.8057 37.4511 51.915 43.6952 47.8506 43.7178 58.0767 41.1648 38.0793 37.7251 39.6553 42.0265 48.6159 43.8555 49.1361 61.9281 41.9543 44.9346 46.0521 48.3652 43.6252 50.9866 59.1724 39.6282 33.6074 21.6643 Table $9$: Economic Dynamism (\$) of Middle Income Countries 7. In 1999, the average percentage of women who received prenatal care per country was 80.1%. Table $10$ contains the percentage of women receiving prenatal care in 2009 for a sample of countries ("Pregnant woman receiving," 2013). Do the data show that the average percentage of women receiving prenatal care in 2009 is higher than in 1999? Test at the 5% level. 
70.08 72.73 74.52 75.79 76.28 76.28 76.65 80.34 80.60 81.90 86.30 87.70 87.76 88.40 90.70 91.50 91.80 92.10 92.20 92.41 92.47 93.00 93.20 93.40 93.63 93.69 93.80 94.30 94.51 95.00 95.80 95.80 96.23 96.24 97.30 97.90 97.95 98.20 99.00 99.00 99.10 99.10 100.00 100.00 100.00 100.00 100.00 Table $10$: Percentage of Women Receiving Prenatal Care 8. Maintaining your balance may get harder as you grow older. A study was conducted to see how steady elderly people are on their feet. They had the subjects stand on a force platform and react to a noise. The force platform then measured how much they swayed forward and backward, and the data are in Table $11$ ("Maintaining balance while," 2013). Do the data show that the elderly sway more than the mean forward sway of younger people, which is 18.125 mm? Test at the 5% level. 19 30 20 19 29 25 21 24 50 Table $11$: Forward/Backward Sway (in mm) of Elderly Subjects Answer For all confidence intervals, just the interval using technology is given. See solution for the entire answer. 1. 1.7944 < $\mu$ < 5.1152 metric tons per capita 3. 0.44872 < $\mu$ < 0.60562 mg/kg 5. 87.2423 < $\mu$ < 113.795 beats/min 7. 88.8747% < $\mu$ < 93.0253% Data Sources: Australian Human Rights Commission, (1996). Indigenous deaths in custody 1989 - 1996. Retrieved from website: www.humanrights.gov.au/public...deaths-custody CDC features - new data on autism spectrum disorders. (2013, November 26). Retrieved from www.cdc.gov/features/countingautism/ Center for Disease Control and Prevention, Prevalence of Autism Spectrum Disorders - Autism and Developmental Disabilities Monitoring Network. (2008). Autism and developmental disabilities monitoring network-2012. Retrieved from website: www.cdc.gov/ncbddd/autism/doc...nityReport.pdf CO2 emissions. (2013, November 19). Retrieved from http://data.worldbank.org/indicator/EN.ATM.CO2E.PC Federal Trade Commission, (2008). Consumer fraud and identity theft complaint data: January-December 2007. Retrieved from website: www.ftc.gov/opa/2008/02/fraud.pdf Gallup news service. (2013, November 7-10). Retrieved from www.gallup.com/file/poll/1658...acy_131115.pdf Healthy breakfast story. (2013, November 16). Retrieved from lib.stat.cmu.edu/DASL/Stories...Breakfast.html Maintaining balance while concentrating. (2013, September 25). Retrieved from http://www.statsci.org/data/general/balaconc.html Morgan Gallup poll on unemployment. (2013, September 26). Retrieved from http://www.statsci.org/data/oz/gallup.html Multi-disciplinary niser activity - mercury in bass. (2013, November 16). Retrieved from http://gozips.uakron.edu/~nmimoto/pa.../MercuryInBass - description.txt Pregnant woman receiving prenatal care. (2013, October 14). Retrieved from http://data.worldbank.org/indicator/SH.STA.ANVC.ZS Pulse rates before and after exercise. (2013, September 25). Retrieved from http://www.statsci.org/data/oz/ms212.html SOCR data 2008 world countries rankings. (2013, November 16). Retrieved from wiki.stat.ucla.edu/socr/index...ountriesRankings Student t-distribution. (2013, November 25). Retrieved from lib.stat.cmu.edu/DASL/Stories/student.html WHO life expectancy. (2013, September 19). Retrieved from www.who.int/gho/mortality_bur...n_trends/en/index.html
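As with the other sections, the by-hand work in Example 1 of this section can be reproduced in a few lines of R, where the qt function plays the role of table A.2. The vector name iq is just a label chosen here; the data and the approximate results in the comments (critical value 2.539, margin of error about 16.6, interval about 128.8 to 162) are the ones given in the example.

# Example 1: 98% confidence interval for the mean IQ of famous people
iq <- c(158, 180, 150, 137, 109, 225, 122, 138, 145, 180,
        118, 118, 126, 140, 165, 150, 170, 105, 154, 118)

t_c <- qt(0.99, df = length(iq) - 1)    # critical value, about 2.539 (98% confidence leaves 1% in each tail)
E <- t_c * sd(iq) / sqrt(length(iq))    # margin of error, about 16.6
c(mean(iq) - E, mean(iq) + E)           # the interval, about 128.8 to 162

# the same interval straight from t.test
t.test(iq, conf.level = 0.98)$conf.int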
Chapter 7 discussed methods of hypothesis testing about one-population parameters. Chapter 8 discussed methods of estimating population parameters from one sample using confidence intervals. This chapter will look at methods of confidence intervals and hypothesis testing for two populations. Since there are two populations, there are two random variables, two means or proportions, and two samples (though with paired samples you usually consider there to be one sample with pairs collected). Examples of where you would do this are: • Testing and estimating the difference in testosterone levels of men before and after they had children (Gettler, McDade, Feranil & Kuzawa, 2011). • Testing the claim that a diet works by looking at the weight before and after subjects are on the diet. • Estimating the difference in proportion of those who approve of President Obama in the age group 18 to 26 year olds and the 55 and over age group. All of these are examples of hypothesis tests or confidence intervals for two populations. The methods to conduct these hypothesis tests and confidence intervals will be explored in this chapter. As a reminder, all hypothesis tests are the same process. The only thing that changes is the formula that you use. Confidence intervals are also the same process, except that the formula is different. 09: Two-Sample Inference There are times you want to test a claim about two population proportions or construct a confidence interval estimate of the difference between two population proportions. As with all other hypothesis tests and confidence intervals, the process is the same though the formulas and assumptions are different. Hypothesis Test for Two Population Proportions (2-Prop Test) 1. State the random variables and the parameters in words. $x_{1}$ = number of successes from group 1 $x_{2}$ = number of successes from group 2 $p_{1}$ = proportion of successes in group 1 $p_{2}$ = proportion of successes in group 2 2. State the null and alternative hypotheses and the level of significance $\begin{array}{ll}{H_{o} : p_{1}=p_{2}} & {\text { or } \quad H_{o} : p_{1}-p_{2}=0} \ {H_{A} : p_{1}<p_{2}} &\quad\quad\: {H_{A} : p_{1}-p_{2}<0} \ {H_{A} : p_{1}>p_{2}} &\quad\quad\: {H_{A} : p_{1}-p_{2}>0} \ {H_{A} : p_{1} \neq p_{2}} & \quad\quad\:{H_{A} : p_{1}-p_{2} \neq 0}\end{array}$ Also, state your $\alpha$ level here. 3. State and check the assumptions for a hypothesis test 1. A simple random sample of size $n_{1}$ is taken from population 1, and a simple random sample of size $n_{2}$ is taken from population 2. 2. The samples are independent. 3. The assumptions for the binomial distribution are satisfied for both populations. 4. To determine the sampling distribution of $\hat{p}_{1}$, you need to show that $n_{1} p_{1} \geq 5$ and $n_{1} q_{1} \geq 5$, where $q_{1}=1-p_{1}$. If this requirement is true, then the sampling distribution of $\hat{p}_{1}$ is well approximated by a normal curve. To determine the sampling distribution of $\hat{p}_{2}$, you need to show that $n_{2} p_{2} \geq 5$ and $n_{2} q_{2} \geq 5$, where $q_{2}=1-p_{2}$. If this requirement is true, then the sampling distribution of $\hat{p}_{2}$ is well approximated by a normal curve. However, you do not know $p_{1}$ and $p_{2}$, so you need to use $\hat{p}_{1}$ and $\hat{p}_{2}$ instead. This is not perfect, but it is the best you can do.
Since $n_{1} \hat{p}_{1}=n_{1} \dfrac{x_{1}}{n_{1}}=x_{1}$ (and similar for the other calculations) you just need to make sure that $x_{1}$, $n_{1}-x_{1}$, $x_{2}$, and $n_{2}-x_{2}$ are all more than 5. 4. Find the sample statistics, test statistic, and p-value Sample Proportion: $\begin{array}{ll}{n_{1}=\text { size of sample } 1} & {n_{2}=\text { size of sample } 2} \ {\hat{p}_{1}=\dfrac{x_{1}}{n_{1}}(\text { sample } 1 \text { proportion) }} & {\hat{p}_{2}=\dfrac{x_{2}}{n_{2}} \text { (sample } 2 \text { proportion) }} \ {\hat{q}_{1}=1-\hat{p}_{1} \text { (complement of } \hat{p}_{1} )} & {\hat{q}_{2}=1-\hat{p}_{2} \text { (complement of } \hat{p}_{2} )}\end{array}$ Pooled Sample Proportion, $\overline{p}$: \begin{aligned} \overline{p} &=\dfrac{x_{1}+x_{2}}{n_{1}+n_{2}} \ \overline{q} &=1-\overline{p} \end{aligned} Test Statistic: $z=\dfrac{\left(\hat{p}_{1}-\hat{p}_{2}\right)-\left(p_{1}-p_{2}\right)}{\sqrt{\dfrac{\overline{p} \overline{q}}{n_{1}}+\dfrac{\overline{p} \overline{q}}{n_{2}}}}$ Usually $p_{1} - p_{2} = 0$, since $H_{o} : p_{1}=p_{2}$ p-value: On TI-83/84: use normalcdf(lower limit, upper limit, 0, 1) Note If $H_{A} : p_{1}<p_{2}$, then lower limit is $-1 E 99$ and upper limit is your test statistic. If $H_{A} : p_{1}>p_{2}$, then lower limit is your test statistic and the upper limit is $1 E 99$. If $H_{A} : p_{1} \neq p_{2}$, then find the p-value for $H_{A} : p_{1}<p_{2}$, and multiply by 2. On R: use pnorm(z, 0, 1) Note If $H_{A} : p_{1}<p_{2}$, then use pnorm(z, 0, 1). If $H_{A} : p_{1}>p_{2}$, then use 1 - pnorm(z, 0, 1). If $H_{A} : p_{1} \neq p_{2}$, then find the p-value for $H_{A} : p_{1}<p_{2}$, and multiply by 2. 5. Conclusion This is where you write reject $H_{o}$ or fail to reject $H_{o}$. The rule is: if the p-value < $\alpha$, then reject $H_{o}$. If the p-value $\geq \alpha$, then fail to reject $H_{o}$. 6. Interpretation This is where you interpret in real world terms the conclusion to the test. The conclusion for a hypothesis test is that you either have enough evidence to show $H_{A}$ is true, or you do not have enough evidence to show $H_{A}$ is true. Confidence Interval for the Difference Between Two Population Proportions (2-Prop Interval) The confidence interval for the difference in proportions has the same random variables and proportions and the same assumptions as the hypothesis test for two proportions. If you have already completed the hypothesis test, then you do not need to state them again. If you haven’t completed the hypothesis test, then state the random variables and proportions and state and check the assumptions before completing the confidence interval step. 1. Find the sample statistics and the confidence interval Sample Proportion: $\begin{array}{ll}{n_{1}=\text { size of sample } 1} & {n_{2}=\text { size of sample } 2} \ {\hat{p}_{1}=\dfrac{x_{1}}{n_{1}}(\text { sample } 1 \text { proportion) }} & {\hat{p}_{2}=\dfrac{x_{2}}{n_{2}} \text { (sample } 2 \text { proportion) }} \ {\hat{q}_{1}=1-\hat{p}_{1}\left(\text { complement of } \hat{p}_{1}\right)} & {\hat{q}_{2}=1-\hat{p}_{2} \text { (complement of } \hat{p}_{2} )}\end{array}$ Confidence Interval: The confidence interval estimate of the difference $p_{1}-p_{2}$ is $\left(\hat{p}_{1}-\hat{p}_{2}\right)-E<p_{1}-p_{2}<\left(\hat{p}_{1}-\hat{p}_{2}\right)+E$ where the margin of error E is given by $E=z_{c} \sqrt{\dfrac{\hat{p}_{1} \hat{q}_{1}}{n_{1}}+\dfrac{\hat{p}_{2} \hat{q}_{2}}{n_{2}}}$ $z_{c}$ = critical value 2.
Statistical Interpretation: In general this looks like, “there is a C% chance that $\left(\hat{p}_{1}-\hat{p}_{2}\right)-E<p_{1}-p_{2}<\left(\hat{p}_{1}-\hat{p}_{2}\right)+E$ contains the true difference in proportions.” 3. Real World Interpretation: This is where you state how much more (or less) the first proportion is from the second proportion. The critical value is a value from the normal distribution. Since a confidence interval is found by adding and subtracting a margin of error amount from the sample proportion, and the interval has a probability of containing the true difference in proportions, then you can think of this as the statement $P\left(\left(\hat{p}_{1}-\hat{p}_{2}\right)-E<p_{1}-p_{2}<\left(\hat{p}_{1}-\hat{p}_{2}\right)+E\right)=C$. So you can use the invNorm command on the TI-83/84 calculator or qnorm on R to find the critical value. These critical values are the same for every problem with the same confidence level, so it is easier to just look at table A.1 in the Appendix. Example $1$ hypothesis test for two population proportions Do husbands cheat on their wives more than wives cheat on their husbands ("Statistics brain," 2013)? Suppose you take a group of 1000 randomly selected husbands and find that 231 had cheated on their wives. Suppose in a group of 1200 randomly selected wives, 176 cheated on their husbands. Do the data show that the proportion of husbands who cheat on their wives is higher than the proportion of wives who cheat on their husbands? Test at the 5% level. 1. State the random variables and the parameters in words. 2. State the null and alternative hypotheses and the level of significance. 3. State and check the assumptions for a hypothesis test. 4. Find the sample statistics, test statistic, and p-value. 5. Conclusion 6. Interpretation Solution 1. $x_{1}$ = number of husbands who cheat on their wives $x_{2}$ = number of wives who cheat on their husbands $p_{1}$ = proportion of husbands who cheat on their wives $p_{2}$ = proportion of wives who cheat on their husbands 2. $\begin{array}{ll}{H_{o} : p_{1}=p_{2}} & {\text { or } \quad H_{o} : p_{1}-p_{2}=0} \ {H_{A} : p_{1}>p_{2}} &\quad\quad\: {H_{A} : p_{1}-p_{2}>0} \ {\alpha=0.05}\end{array}$ 3. 1. A simple random sample of 1000 responses about cheating from husbands is taken. This was stated in the problem. A simple random sample of 1200 responses about cheating from wives is taken. This was stated in the problem. 2. The samples are independent. This is true since the samples involved different genders. 3. The properties of the binomial distribution are satisfied in both populations. This is true since there are only two responses, there are a fixed number of trials, the probability of a success is the same, and the trials are independent. 4. The sampling distributions of $\hat{p}_{1}$ and $\hat{p}_{2}$ can be approximated with a normal distribution. $x_{1}=231, n_{1}-x_{1}=1000-231=769, x_{2}=176$, and $n_{2}-x_{2}=1200-176=1024$ are all greater than or equal to 5. So both sampling distributions of $\hat{p}_{1}$ and $\hat{p}_{2}$ can be approximated with a normal distribution. 4.
Sample Proportion: $\begin{array}{ll}{n_{1}=1000} & {n_{2}=1200} \ {\hat{p}_{1}=\dfrac{231}{1000}=0.231} & {\hat{p}_{2}=\dfrac{176}{1200} \approx 0.1467} \ {\hat{q}_{1}=1-\dfrac{231}{1000}=\dfrac{769}{1000}=0.769} & {\hat{q}_{2}=1-\dfrac{176}{1200}=\dfrac{1024}{1200} \approx 0.8533}\end{array}$ Pooled Sample Proportion, $\overline{p}$: $\begin{array}{l}{\overline{p}=\dfrac{231+176}{1000+1200}=\dfrac{407}{2200}=0.185} \ {\overline{q}=1-\dfrac{407}{2200}=\dfrac{1793}{2200}=0.815}\end{array}$ Test Statistic: $z=\dfrac{(0.231-0.1467)-0}{\sqrt{\dfrac{0.185 * 0.815}{1000}+\dfrac{0.185 * 0.815}{1200}}}$ $=5.0704$ p-value: On TI-83/84: normalcdf $(5.0704,1 E 99,0,1)=1.988 \times 10^{-7}$ On R: $1-\text { pnorm }(5.0704,0,1)=1.988 \times 10^{-7}$ On R: prop.test(c($x_{1}$, $x_{2}$), c($n_{1}$, $n_{2}$), alternative = "less" or "greater"). For this example, prop.test(c(231,176), c(1000, 1200), alternative="greater") 2-sample test for equality of proportions with continuity correction data: c(231, 176) out of c(1000, 1200) X-squared = 25.173, df = 1, p-value = 2.621e-07 alternative hypothesis: greater 95 percent confidence interval: 0.05579805 1.00000000 sample estimates: prop 1 prop 2 0.2310000 0.1466667 Note The answer from R is the p-value. It is different from the formula or the TI-83/84 calculator due to a continuity correction that R does. 5. Conclusion Reject $H_{o}$, since the p-value is less than 5%. 6. Interpretation This is enough evidence to show that the proportion of husbands having affairs is more than the proportion of wives having affairs. Example $2$ confidence interval for two population proportions Do husbands cheat on their wives more than wives cheat on their husbands ("Statistics brain," 2013)? Suppose you take a group of 1000 randomly selected husbands and find that 231 had cheated on their wives. Suppose in a group of 1200 randomly selected wives, 176 cheated on their husbands. Estimate the difference in the proportion of husbands and wives who cheat on their spouses using a 95% confidence level. 1. State the random variables and the parameters in words. 2. State and check the assumptions for the confidence interval. 3. Find the sample statistics and the confidence interval. 4. Statistical Interpretation 5. Real World Interpretation Solution 1. These were stated in Example $1$, but are reproduced here for reference. $x_{1}$ = number of husbands who cheat on their wives $x_{2}$ = number of wives who cheat on their husbands $p_{1}$ = proportion of husbands who cheat on their wives $p_{2}$ = proportion of wives who cheat on their husbands 2. The assumptions were stated and checked in Example $1$. 3. Sample Proportion: $\begin{array}{ll}{n_{1}=1000} & {n_{2}=1200} \ {\hat{p}_{1}=\dfrac{231}{1000}=0.231} & {\hat{p}_{2}=\dfrac{176}{1200} \approx 0.1467} \ {\hat{q}_{1}=1-\dfrac{231}{1000}=\dfrac{769}{1000}=0.769} & {\hat{q}_{2}=1-\dfrac{176}{1200}=\dfrac{1024}{1200} \approx 0.8533}\end{array}$ Confidence Interval: $\begin{array}{l}{z_{c}=1.96} \ {E=1.96 \sqrt{\dfrac{0.231 * 0.769}{1000}+\dfrac{0.1467 * 0.8533}{1200}}=0.033}\end{array}$ The confidence interval estimate of the difference $p_{1}-p_{2}$ is $\begin{array}{l}{\left(\hat{p}_{1}-\hat{p}_{2}\right)-E<p_{1}-p_{2}<\left(\hat{p}_{1}-\hat{p}_{2}\right)+E} \ {(0.231-0.1467)-0.033<p_{1}-p_{2}<(0.231-0.1467)+0.033} \ {0.0513<p_{1}-p_{2}<0.1173}\end{array}$ On R: prop.test(c($x_{1}$, $x_{2}$), c($n_{1}$, $n_{2}$), conf.level = C), where C is in decimal form.
For this example, prop.test(c(231,176), c(1000, 1200), conf.level=0.95) 2-sample test for equality of proportions with continuity correction data: c(231, 176) out of c(1000, 1200) X-squared = 25.173, df = 1, p-value = 5.241e-07 alternative hypothesis: two.sided 95 percent confidence interval: 0.05050705 0.11815962 sample estimates: prop 1 prop 2 0.2310000 0.1466667 Note The answer from R is the confidence interval. It is different from the formula or the TI-83/84 calculator due to a continuity correction that R does. 4. Statistical Interpretation: There is a 95% chance that $0.0505<p_{1}-p_{2}<0.1182$ contains the true difference in proportions. 5. Real World Interpretation: The proportion of husbands who cheat is anywhere from 5.05% to 11.82% higher than the proportion of wives who cheat. Homework Exercise $1$ In each problem show all steps of the hypothesis test or confidence interval. If some of the assumptions are not met, note that the results of the test or interval may not be correct and then continue the process of the hypothesis test or confidence interval. 1. Many high school students take the AP tests in different subject areas. In 2007, of the 144,796 students who took the biology exam 84,199 of them were female. In that same year, of the 211,693 students who took the calculus AB exam 102,598 of them were female ("AP exam scores," 2013). Is there enough evidence to show that the proportion of female students taking the biology exam is higher than the proportion of female students taking the calculus AB exam? Test at the 5% level. 2. Many high school students take the AP tests in different subject areas. In 2007, of the 144,796 students who took the biology exam 84,199 of them were female. In that same year, of the 211,693 students who took the calculus AB exam 102,598 of them were female ("AP exam scores," 2013). Estimate the difference in the proportion of female students taking the biology exam and female students taking the calculus AB exam using a 90% confidence level. 3. Many high school students take the AP tests in different subject areas. In 2007, of the 211,693 students who took the calculus AB exam 102,598 of them were female and 109,095 of them were male ("AP exam scores," 2013). Is there enough evidence to show that the proportion of female students taking the calculus AB exam is different from the proportion of male students taking the calculus AB exam? Test at the 5% level. 4. Many high school students take the AP tests in different subject areas. In 2007, of the 211,693 students who took the calculus AB exam 102,598 of them were female and 109,095 of them were male ("AP exam scores," 2013). Estimate using a 90% level the difference in proportion of female students taking the calculus AB exam versus male students taking the calculus AB exam. 5. Are there more children diagnosed with Autism Spectrum Disorder (ASD) in states that have larger urban areas over states that are mostly rural? In the state of Pennsylvania, a fairly urban state, there are 245 eight year olds diagnosed with ASD out of 18,440 eight year olds evaluated. In the state of Utah, a fairly rural state, there are 45 eight year olds diagnosed with ASD out of 2,123 eight year olds evaluated ("Autism and developmental," 2008). Is there enough evidence to show that the proportion of children diagnosed with ASD in Pennsylvania is more than the proportion in Utah? Test at the 1% level. 6. 
Are there more children diagnosed with Autism Spectrum Disorder (ASD) in states that have larger urban areas over states that are mostly rural? In the state of Pennsylvania, a fairly urban state, there are 245 eight year olds diagnosed with ASD out of 18,440 eight year olds evaluated. In the state of Utah, a fairly rural state, there are 45 eight year olds diagnosed with ASD out of 2,123 eight year olds evaluated ("Autism and developmental," 2008). Estimate the difference in proportion of children diagnosed with ASD between Pennsylvania and Utah. Use a 98% confidence level. 7. A child dying from an accidental poisoning is a terrible incident. Is it more likely that a male child will get into poison than a female child? To find this out, data was collected that showed that out of 1830 children between the ages one and four who pass away from poisoning, 1031 were males and 799 were females (Flanagan, Rooney & Griffiths, 2005). Do the data show that there are more male children dying of poisoning than female children? Test at the 1% level. 8. A child dying from an accidental poisoning is a terrible incident. Is it more likely that a male child will get into poison than a female child? To find this out, data was collected that showed that out of 1830 children between the ages one and four who pass away from poisoning, 1031 were males and 799 were females (Flanagan, Rooney & Griffiths, 2005). Compute a 99% confidence interval for the difference in proportions of poisoning deaths of male and female children ages one to four. Answer For all hypothesis tests, just the conclusion is given. For all confidence intervals, just the interval using technology (Software R) is given. See solution for the entire answer. 1. Reject Ho 2. $0.0941<p_{1}-p_{2}<0.0996$ 3. Reject Ho 4. $-0.0332<p_{1}-p_{2}<-0.0282$ 5. Fail to reject Ho 6. $-0.01547<p_{1}-p_{2}<-0.0001$ 7. Reject Ho 8. $0.0840<p_{1}-p_{2}<0.1696$
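The formulas in this section can also be scripted directly. Here is a minimal R sketch, using the counts from Example $1$, that computes the pooled proportion, the z test statistic, and the p-value straight from the formulas (no continuity correction), which is why it matches the hand calculation rather than the prop.test output; the variable names are just for illustration.

x1 <- 231; n1 <- 1000   # husbands who cheated, sample size
x2 <- 176; n2 <- 1200   # wives who cheated, sample size
p1hat <- x1 / n1
p2hat <- x2 / n2
pbar <- (x1 + x2) / (n1 + n2)   # pooled sample proportion
qbar <- 1 - pbar
z <- (p1hat - p2hat) / sqrt(pbar * qbar / n1 + pbar * qbar / n2)
z                                # about 5.07
1 - pnorm(z, 0, 1)               # p-value for H_A: p1 > p2, about 2 x 10^-7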
Are two populations the same? Is the average height of men greater than the average height of women? Is the mean weight less after a diet than before? You can compare populations by comparing their means. You take a sample from each population and compare the statistics. Anytime you compare two populations you need to know if the samples are independent or dependent. The formulas you use are different for different types of samples. If how you choose one sample has no effect on the way you choose the other sample, the two samples are independent. The way to think about it is that in independent samples, the individuals from one sample are overall different from the individuals from the other sample. This will mean that sample one has no effect on sample two. The sample values from one sample are not related or paired with values from the other sample. If you choose the samples so that a measurement in one sample is paired with a measurement from the other sample, the samples are dependent or matched or paired. (Often a before and after situation.) You want to make sure that there is a meaning for pairing data values from one sample with a specific data value from the other sample. One way to think about it is that in dependent samples, the individuals from one sample are the same individuals from the other sample, though there can be other reasons to pair values. This makes the sample values from each sample paired. Example $1$ independent or dependent samples Determine if the following are dependent or independent samples. 1. Randomly choose 5 men and 6 women and compare their heights. 2. Choose 10 men and weigh them. Give them a new wonder diet drug and later weigh them again. 3. Take 10 people and measure the strength of their dominant arm and their non-dominant arm. Solution 1. Independent, since there is no reason that one value belongs to another. The individuals are not the same for both samples. The individuals are definitely different. A way to think about this is that the knowledge that a man is chosen in one sample does not give any information about any of the women chosen in the other sample. 2. Dependent, since each person’s before weight can be matched with their after weight. The individuals are the same for both samples. A way to think about this is that the knowledge that a person weighs 400 pounds at the beginning will tell you something about their weight after the diet drug. 3. Dependent, since you can match the two arm strengths. The individuals are the same for both samples. So the knowledge of one person’s dominant arm strength will tell you something about the strength of their non-dominant arm. To analyze data when there are matched or paired samples, called dependent samples, you conduct a paired t-test. Since the samples are matched, you can find the difference between the values of the two random variables. Hypothesis Test for Two Sample Paired t-Test 1. State the random variables and the parameters in words. $x_{1}$ = random variable 1 $x_{2}$ = random variable 2 $\mu_{1}$ = mean of random variable 1 $\mu_{2}$ = mean of random variable 2 2.
State the null and alternative hypotheses and the level of significance The usual hypotheses would be $\begin{array}{ll}{H_{o} : \mu_{1}=\mu_{2} \text { or }} & {H_{o} : \mu_{1}-\mu_{2}=0} \ {H_{A} : \mu_{1}<\mu_{2}} & {H_{A} : \mu_{1}-\mu_{2}<0} \ {H_{A} : \mu_{1}>\mu_{2}} & {H_{A} : \mu_{1}-\mu_{2}>0} \ {H_{A} : \mu_{1} \neq \mu_{2}} & {H_{A} : \mu_{1}-\mu_{2} \neq 0}\end{array}$ However, since you are finding the differences, then you can actually think of $\mu_{1}-\mu_{2}=\mu_{d}$, where $\mu_{d}$ = population mean value of the differences. So the hypotheses become $\begin{array}{l}{H_{o} : \mu_{d}=0} \ {H_{A} : \mu_{d}<0} \ {H_{A} : \mu_{d}>0} \ {H_{A} : \mu_{d} \neq 0}\end{array}$ Also, state your $\alpha$ level here. 3. State and check the assumptions for the hypothesis test 1. A random sample of n pairs is taken. 2. The population of the difference between random variables is normally distributed. In this case the population you are interested in has to do with the differences that you find. It does not matter if each random variable is normally distributed. It is only important if the differences you find are normally distributed. Just as before, the t-test is fairly robust to the assumption if the sample size is large. This means that if this assumption isn’t met, but your sample size is quite large (over 30), then the results of the t-test are valid. 4. Find the sample statistic, test statistic, and p-value Sample Statistic: Difference: $d=x_{1}-x_{2}$ for each pair Sample mean of the differences: $\overline{d}=\dfrac{\sum d}{n}$ Standard deviation of the differences: $s_{d}=\sqrt{\dfrac{\sum(d-\overline{d})^{2}}{n-1}}$ Number of pairs: n Test Statistic: $t=\dfrac{\overline{d}-\mu_{d}}{\dfrac{s_{d}}{\sqrt{n}}}$ with degrees of freedom = df = n - 1 Note $\mu_{d}=0$ in most cases. p-value: On TI-83/84: Use tcdf ( lower limit, upper limit, df ) Note If $H_{A} : \mu_{d}<0$, then lower limit is $-1 E 99$ and upper limit is your test statistic. If $H_{A} : \mu_{d}>0$, then lower limit is your test statistic and the upper limit is $1 E 99$. If $H_{A} : \mu_{d} \neq 0$, then find the p-value for $H_{A} : \mu_{d}<0$, and multiply by 2. On R: Use pt (t, df ) Note If $H_{A} : \mu_{d}<0$, use pt (t, df ). If $H_{A} : \mu_{d}>0$, use 1 - pt(t, df). If $H_{A} : \mu_{d} \neq 0$, then find the p-value for $H_{A} : \mu_{d}<0$, and multiply by 2. 5. Conclusion This is where you write reject $H_{o}$ or fail to reject $H_{o}$. The rule is: if the p-value < $\alpha$, then reject $H_{o}$. If the p-value $\geq \alpha$, then fail to reject $H_{o}$. 6. Interpretation This is where you interpret in real world terms the conclusion to the test. The conclusion for a hypothesis test is that you either have enough evidence to show $H_{A}$ is true, or you do not have enough evidence to show $H_{A}$ is true. Confidence Interval for Difference in Means from Paired Samples (t-Interval) The confidence interval for the difference in means has the same random variables and means and the same assumptions as the hypothesis test for two paired samples. If you have already completed the hypothesis test, then you do not need to state them again. If you haven’t completed the hypothesis test, then state the random variables and means, and state and check the assumptions before completing the confidence interval step. 1.
Find the sample statistic and confidence interval Sample Statistic: Difference: d = $x_{1}-x_{2}$ Sample mean of the differences: $\overline{d}=\dfrac{\sum{d}}{n}$ Standard deviation of the differences: $s_{d}=\sqrt{\dfrac{\sum(d-\overline{d})^{2}}{n-1}}$ Number of pairs: n Confidence Interval: The confidence interval estimate of the difference $\mu_{d}=\mu_{1}-\mu_{2}$ is $\begin{array}{l}{\overline{d}-E<\mu_{d}<\overline{d}+E} \ {E=t_{c} \dfrac{s_{d}}{\sqrt{n}}}\end{array}$ $t_{c}$ is the critical value where degrees of freedom df = n - 1 2. Statistical Interpretation: In general this looks like, “there is a C% chance that the statement $\overline{d}-E<\mu_{d}<\overline{d}+E$ contains the true mean difference.” 3. Real World Interpretation: This is where you state what interval contains the true mean difference. The critical value is a value from the Student’s t-distribution. Since a confidence interval is found by adding and subtracting a margin of error amount from the sample mean, and the interval has a probability of containing the true mean difference, then you can think of this as the statement $P\left(\overline{d}-E<\mu_{d}<\overline{d}+E\right)=C$. To find the critical value, you use table A.2 in the Appendix. How to check the assumptions of t-test and confidence interval: In order for the t-test or confidence interval to be valid, the assumptions of the test must be met. So whenever you run a t-test or confidence interval, you must make sure the assumptions are met. So you need to check them. Here is how you do this: 1. For the assumption that the sample is a random sample, describe how you took the samples. Make sure your sampling technique is random and that the samples were dependent. 2. For the assumption that the population of the differences is normal, remember the process of assessing normality from chapter 6. Example $2$ hypothesis test for paired samples using the formula A researcher wants to see if a weight loss program is effective. She measures the weight of 6 randomly selected women before and after the weight loss program (see Example $1$). Is there evidence that the weight loss program is effective? Test at the 5% level. Person 1 2 3 4 5 6 Weight before 165 172 181 185 168 175 Weight after 143 151 156 161 152 154 Table $1$: Data of Before and After Weights 1. State the random variables and the parameters in words. 2. State the null and alternative hypotheses and the level of significance. 3. State and check the assumptions for the hypothesis test. 4. Find the sample statistic, test statistic, and p-value. 5. Conclusion 6. Interpretation Solution 1. $x_{1}$ = weight of a woman after the weight loss program $x_{2}$ = weight of a woman before the weight loss program $\mu_{1}$ = mean weight of a woman after the weight loss program $\mu_{2}$ = mean weight of a woman before the weight loss program 2. $\begin{array}{l}{H_{o} : \mu_{d}=0} \ {H_{A} : \mu_{d}<0} \ {\alpha=0.05}\end{array}$ 3. 1. A random sample of 6 pairs of weights before and after was taken. This was stated in the problem, since the women were chosen randomly. 2. The population of the difference in after and before weights is normally distributed. To see if this is true, look at the histogram, number of outliers, and the normal probability plot. (If you wish, you can look at the normal probability plot first. If it doesn’t look linear, then you may want to look at the histogram and number of outliers at this point.) This histogram looks somewhat bell shaped. There is only one outlier in the difference data set.
The probability plot on the differences looks somewhat linear. So you can assume that the distribution of the difference in weights is normal. 4. Sample Statistics: Person 1 2 3 4 5 6 Weight after, $x_{1}$ 143 151 156 161 152 154 Weight before, $x_{2}$ 165 172 181 185 168 175 d = $x_{1}-x_{2}$ -22 -21 -25 -24 -16 -21 Table $2$: Differences Between Before and After Weights The mean and standard deviation are $\begin{array}{l}{\overline{d}=-21.5} \ {s_{d}=3.15}\end{array}$ Test Statistic: $t=\dfrac{\overline{d}-\mu_{d}}{s_{d} / \sqrt{n}}=\dfrac{-21.5-0}{3.15 / \sqrt{6}}=-16.779$ p-value: There are six pairs so the degrees of freedom are df = n - 1 = 6 - 1 = 5 Since $H_{A} : \mu_{d}<0$, the p-value is: Using TI-83/84: tcdf $(-1 E 99,-16.779,5) \approx 6.87 \times 10^{-6}$ Using R: pt $(-16.779,5) \approx 6.87 \times 10^{-6}$ 5. Since the p-value < 0.05, reject $H_{o}$. 6. There is enough evidence to show that the weight loss program is effective. Note Just because the hypothesis test says the program is effective doesn’t mean you should go out and use it right away. The program has statistical significance, but that doesn’t mean it has practical significance. You need to see how much weight a person loses, and you need to look at how safe it is, how expensive, does it work in the long term, and other type questions. Remember to look at the practical significance in all situations. In this case, the average weight loss was 21.5 pounds, which is very practically significant. Do remember to look at the safety and expense of the program also. Example $3$ hypothesis test for paired samples using technology The New Zealand Air Force purchased a batch of flight helmets. They then found out that the helmets didn’t fit. In order to make sure that they order the correct size helmets, they measured the head size of recruits. To save money, they wanted to use cardboard calipers, but were not sure if they would be accurate enough. So they took 18 recruits and measured their heads with the cardboard calipers and also with metal calipers. The data in centimeters (cm) is in Example $3$ ("NZ helmet size," 2013). Do the data provide enough evidence to show that there is a difference in measurements between the cardboard and metal calipers? Use a 5% level of significance. Cardboard Metal 146 145 151 153 163 161 152 151 151 145 151 150 149 150 166 163 149 147 155 154 155 150 156 156 162 161 150 152 156 154 158 154 149 147 163 160 Table $3$: Data for Head Measurements 1. State the random variables and the parameters in words. 2. State the null and alternative hypotheses and the level of significance. 3. State and check the assumptions for the hypothesis test. 4. Find the sample statistic, test statistic, and p-value. 5. Conclusion 6. Interpretation Solution 1. $x_{1}$ = head measurement of recruit using cardboard caliper $x_{2}$ = head measurement of recruit using metal caliper $\mu_{1}$ = mean head measurement of recruit using cardboard caliper $\mu_{2}$ = mean head measurement of recruit using metal caliper 2. $\begin{array}{l}{H_{o} : \mu_{d}=0} \ {H_{A} : \mu_{d} \neq 0} \ {\alpha=0.05}\end{array}$ 3. 1. A random sample of 18 pairs of head measures of recruits with cardboard and metal caliper was taken. This was not stated, but probably could be safely assumed. 2. The population of the difference in head measurements between cardboard and metal calipers is normally distributed. To see if this is true, look at the histogram, number of outliers, and the normal probability plot.
(If you wish, you can look at the normal probability plot first. If it doesn’t look linear, then you may want to look at the histogram and number of outliers at this point.) This histogram looks bell shaped. There are no outliers in the difference data set. The probability plot on the differences looks somewhat linear. So you can assume that the distribution of the difference in head measurements is normal. 4. Using the TI-83/84, put $x_{1}$ into L1 and $x_{2}$ into L2. Then go onto the name L3, and type L1-L2. The calculator will calculate the differences for you and put them in L3. Now go into STAT and move over to TESTS. Choose T-Test. The setup for the calculator is in Figure $7$. Once you press ENTER on Calculate you will see the result shown in Figure $8$. Using R: command is t.test(variable1, variable2, paired = TRUE, alternative = "less" or "greater"). For this example, the command would be t.test(cardboard, metal, paired = TRUE) Paired t-test data: cardboard and metal t = 3.1854, df = 17, p-value = 0.005415 alternative hypothesis: true difference in means is not equal to 0 95 percent confidence interval: 0.5440163 2.6782060 sample estimates: mean of the differences 1.611111 The t = 3.185 is the test statistic. The p-value is 0.0054147206. 5. Since the p-value < 0.05, reject $H_{o}$. 6. There is enough evidence to show that the mean head measurements using the cardboard calipers are not the same as when using the metal calipers. So it looks like the New Zealand Air Force shouldn’t use the cardboard calipers. Example $4$ confidence interval for paired samples using the formula A researcher wants to estimate the mean weight loss that people experience using a new program. She measures the weight of 6 randomly selected women before and after the weight loss program (see Example $1$). Find a 90% confidence interval for the mean weight loss using the new program. 1. State the random variables and the parameters in words. 2. State and check the assumptions for the confidence interval. 3. Find the sample statistic and confidence interval. 4. Statistical Interpretation 5. Real World Interpretation Solution 1. These were stated in Example $2$, but are reproduced here for reference. $x_{1}$ = weight of a woman after the weight loss program $x_{2}$ = weight of a woman before the weight loss program $\mu_{1}$ = mean weight of a woman after the weight loss program $\mu_{2}$ = mean weight of a woman before the weight loss program 2. The assumptions were stated and checked in Example $2$. 3. Sample Statistics: From Example $2$ $\begin{array}{l}{\overline{d}=-21.5} \ {s_{d}=3.15}\end{array}$ The confidence level is 90%, so C= 90% There are six pairs, so the degrees of freedom are df = n - 1 = 6 - 1 = 5 Now look in table A.2. Go down the first column to 5, then over to the column headed with 90%. $t_{c}=2.015$ $E=t_{c} \dfrac{s_{d}}{\sqrt{n}}=2.015 \dfrac{3.15}{\sqrt{6}} \approx 2.6$ $\overline{d}-E<\mu_{d}<\overline{d}+E$ $-21.5-2.6<\mu_{d}<-21.5+2.6$ $-24.1 \text { pounds }<\mu_{d}<-18.9 \text { pounds }$ 4. There is a 90% chance that $-24.1 \text { pounds }<\mu_{d}<-18.9 \text { pounds }$ contains the true mean difference in weight loss. 5. The mean weight loss is between 18.9 and 24.1 pounds. Note The negative signs tell you that the first mean is less than the second mean, and thus a weight loss in this case. Example $5$ confidence interval for paired samples using technology The New Zealand Air Force purchased a batch of flight helmets. They then found out that the helmets didn’t fit.
In order to make sure that they order the correct size helmets, they measured the head size of recruits. To save money, they wanted to use cardboard calipers, but were not sure if they would be accurate enough. So they took 18 recruits and measured their heads with the cardboard calipers and also with metal calipers. The data in centimeters (cm) is in Example $3$ ("NZ helmet size," 2013). Estimate the mean difference in measurements between the cardboard and metal calipers using a 95% confidence interval. 1. State the random variables and the parameters in words. 2. State and check the assumptions for the hypothesis test. 3. Find the sample statistic and confidence interval. 4. Statistical Interpretation 5. Real World Interpretation Solution 1. These were stated in Example $3$, but are reproduced here for reference. $x_{1}$ = head measurement of recruit using cardboard caliper $x_{2}$ = head measurement of recruit using metal caliper $\mu_{1}$ = mean head measurement of recruit using cardboard caliper $\mu_{2}$ = mean head measurement of recruit using metal caliper 2. The assumptions were stated and checked in Example $3$. 3. Using the TI-83/84, put $x_{1}$ into L1 and $x_{2}$ into L2. Then go onto the name L3, and type L1 - L2. The calculator will now calculate the differences for you and put them in L3. Now go into STAT and move over to TESTS. Then choose TInterval. The setup for the calculator is in Figure $9$. Once you press ENTER on Calculate you will see the result shown in Figure $10$. Using R: the command is t.test(variable1, variable2, paired = TRUE, conf.level = C), where C is in decimal form. For this example, the command would be t.test(cardboard, metal, paired = TRUE, conf.level=0.95) Paired t-test data: cardboard and metal t = 3.1854, df = 17, p-value = 0.005415 alternative hypothesis: true difference in means is not equal to 0 95 percent confidence interval: 0.5440163 2.6782060 sample estimates: mean of the differences 1.611111 So $0.54 \mathrm{cm}<\mu_{d}<2.68 \mathrm{cm}$ 4. There is a 95% chance that $0.54 \mathrm{cm}<\mu_{d}<2.68 \mathrm{cm}$ contains the true mean difference in head measurements between cardboard and metal calipers. 5. The mean difference in head measurements between the cardboard and metal calipers is between 0.54 and 2.68 cm. This means that the cardboard calipers measure on average the head of a recruit to be between 0.54 and 2.68 cm more in diameter than the metal calipers. That makes it seem that the cardboard calipers are not measuring the same as the metal calipers. (The positive values on the confidence interval imply that the first mean is higher than the second mean.) Examples 9.2.2 and 9.2.4 use the same data set, but one is conducting a hypothesis test and the other is conducting a confidence interval. Notice that the hypothesis test’s conclusion was to reject $H_{o}$ and say that there was a difference in the means, and the confidence interval does not contain the number 0. If the confidence interval did contain the number 0, then that would mean that the two means could be the same. Since the interval did not contain 0, then you could say that the means are different just as in the hypothesis test. This means that the hypothesis test and the confidence interval can produce the same interpretation. Do be careful though, you can run a hypothesis test with a particular significance level and a confidence interval with a confidence level that is not compatible with your significance level.
This will mean that the conclusion from the confidence interval would not be the same as with a hypothesis test. So if you want to estimate the mean difference, then conduct a confidence interval. If you want to show that the means are different, then conduct a hypothesis test. Homework Exercise $1$ In each problem show all steps of the hypothesis test or confidence interval. If some of the assumptions are not met, note that the results of the test or interval may not be correct and then continue the process of the hypothesis test or confidence interval. 1. The cholesterol level of patients who had heart attacks was measured two days after the heart attack and then again four days after the heart attack. The researchers want to see if the cholesterol level of patients who have heart attacks reduces as the time since their heart attack increases. The data is in Example $4$ ("Cholesterol levels after," 2013). Do the data show that the mean cholesterol level of patients that have had a heart attack reduces as the time increases since their heart attack? Test at the 1% level. Patient Cholesterol Level Day 2 Cholesterol Level Day 4 1 270 218 2 236 234 3 210 214 4 142 116 5 280 200 6 272 276 7 160 146 8 220 182 9 225 238 10 242 288 11 186 190 12 266 236 13 206 244 14 318 258 15 294 240 16 282 294 17 234 220 18 224 200 19 276 220 20 282 186 21 360 352 22 310 202 23 280 218 24 278 248 25 288 278 26 288 248 27 244 270 28 236 242 Table $4$: Cholesterol Levels in (mg/dL) of Heart Attack Patients 2. The cholesterol level of patients who had heart attacks was measured two days after the heart attack and then again four days after the heart attack. The researchers want to see if the cholesterol level of patients who have heart attacks reduces as the time since their heart attack increases. The data is in Example $4$ ("Cholesterol levels after," 2013). Calculate a 98% confidence interval for the mean difference in cholesterol levels from day two to day four. 3. All Fresh Seafood is a wholesale fish company based on the east coast of the U.S. Catalina Offshore Products is a wholesale fish company based on the west coast of the U.S. Example $5$ contains prices from both companies for specific fish types ("Seafood online," 2013) ("Buy sushi grade," 2013). Do the data provide enough evidence to show that a west coast fish wholesaler is more expensive than an east coast wholesaler? Test at the 5% level. Fish All Fresh Seafood Prices Catalina Offshore Product Prices Cod 19.99 17.99 Tilapi 6.00 13.99 Farmed Salmon 19.99 22.99 Organic Salmon 24.99 24.99 Grouper Fillet 29.99 19.99 Tuna 28.99 31.99 Swordfish 23.99 23.99 Sea Bass 32.99 23.99 Striped Bass 29.99 14.99 Table $5$: Wholesale Prices of Fish in Dollars 4. All Fresh Seafood is a wholesale fish company based on the east coast of the U.S. Catalina Offshore Products is a wholesale fish company based on the west coast of the U.S. Example $5$ contains prices from both companies for specific fish types ("Seafood online," 2013) ("Buy sushi grade," 2013). Find a 95% confidence interval for the mean difference in wholesale price between the east coast and west coast suppliers. 5. The British Department of Transportation studied to see if people avoid driving on Friday the 13th. They did a traffic count on a Friday and then again on a Friday the 13th at the same two locations ("Friday the 13th," 2013). The data for each location on the two different dates is in Example $6$. Do the data show that on average fewer people drive on Friday the 13th? Test at the 5% level. 
Dates 6th 13th 1990, July 139246 138548 1990, July 134012 132909 1991, September 137055 136018 1991, September 133732 131843 1991, December 123552 121641 1991, December 121139 118723 1992, March 128293 125532 1992, March 124631 120249 1992, November 124609 122770 1992, November 117584 117263 Table $6$: Traffic Count 6. The British Department of Transportation studied to see if people avoid driving on Friday the 13th. They did a traffic count on a Friday and then again on a Friday the 13th at the same two locations ("Friday the 13th," 2013). The data for each location on the two different dates is in Example $6$. Estimate the mean difference in traffic count between the 6th and the 13th using a 90% level. 7. To determine if Reiki is an effective method for treating pain, a pilot study was carried out where a certified second-degree Reiki therapist provided treatment on volunteers. Pain was measured using a visual analogue scale (VAS) immediately before and after the Reiki treatment (Olson & Hanson, 1997). The data is in Example $7$. Do the data show that Reiki treatment reduces pain? Test at the 5% level. VAS before VAS after 6 3 2 1 2 0 9 1 3 0 3 2 4 1 5 2 2 2 3 0 5 1 2 2 3 0 5 1 1 0 6 4 6 1 4 4 4 1 7 6 2 1 4 3 8 8 Table $7$: Pain Measures Before and After Reiki Treatment 8. To determine if Reiki is an effective method for treating pain, a pilot study was carried out where a certified second-degree Reiki therapist provided treatment on volunteers. Pain was measured using a visual analogue scale (VAS) immediately before and after the Reiki treatment (Olson & Hanson, 1997). The data is in Example $7$. Compute a 90% confidence level for the mean difference in VAS score from before and after Reiki treatment. 9. The female labor force participation rates (FLFPR) of women in randomly selected countries in 1990 and latest years of the 1990s are in Example $8$ (Lim, 2002). Do the data show that the mean female labor force participation rate in 1990 is different from that in the latest years of the 1990s using a 5% level of significance? Region and country FLFPR 25-54 1990 FLFPR 25-54 Latest years of 1990s Iran 22.6 12.5 Morocco 41.4 34.5 Qatar 42.3 46.5 Syrian Arab Republic 25.6 19.5 United Arab Emirates 36.4 39.7 Cape Verde 46.7 50.9 Ghana 89.8 90.0 Kenya 82.1 82.6 Lesotho 51.9 68.0 South Africa 54.7 61.7 Bangladesh 73.5 60.6 Malaysia 49.0 50.2 Mongolia 84.7 71.3 Myanmar 72.1 72.3 Argentina 36.8 54 Belize 28.8 42.5 Bolivia 27.3 69.8 Brazil 51.1 63.2 Colombia 57.4 72.7 Ecuador 33.5 64 Nicaragua 50.1 42.5 Uruguay 59.5 71.5 Albania 77.4 78.8 Uzbekistan 79.6 82.8 Table $8$: Female Labor Force Participation Rates 10. The female labor force participation rates of women in randomly selected countries in 1990 and latest years of the 1990s are in Example $8$ (Lim, 2002). Estimate the mean difference in the female labor force participation rate in 1990 to latest years of the 1990s using a 95% confidence level? 11. Example $9$ contains pulse rates collected from males, who are non-smokers but do drink alcohol ("Pulse rates before," 2013). The before pulse rate is before they exercised, and the after pulse rate was taken after the subject ran in place for one minute. Do the data indicate that the pulse rate before exercise is less than after exercise? Test at the 1% level. Pulse before Pulse after 76 88 56 110 64 126 50 90 49 83 68 136 68 125 88 150 80 146 78 168 59 92 60 104 65 82 76 150 145 155 84 140 78 141 85 131 78 132 Table $9$: Pulse Rate of Males Before and After Exercise 12. 
Example $9$ contains pulse rates collected from males, who are non-smokers but do drink alcohol ("Pulse rates before," 2013). The before pulse rate is before they exercised, and the after pulse rate was taken after the subject ran in place for one minute. Compute a 98% confidence interval for the mean difference in pulse rates from before and after exercise. Answer For all hypothesis tests, just the conclusion is given. For all confidence intervals, just the interval using technology is given. See solution for the entire answer. 1. Reject Ho 2. $5.39857 \mathrm{mg} / \mathrm{dL}<\mu_{d}<41.1729 \mathrm{mg} / \mathrm{dL}$ 3. Fail to reject Ho 4. $-\ 3.24216<\mu_{d}<\ 8.13327$ 5. Reject Ho 6. $1154.09<\mu_{d}<2517.51$ 7. Reject Ho 8. $1.499<\mu_{d}<3.001$ 9. Fail to reject Ho 10. $-10.9096 \%<\mu_{d}<0.2596 \%$ 11. Reject Ho 12. $-62.0438 \text { beats/min }<\mu_{d}<-37.1141 \text { beats/min }$
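Since every problem in this homework uses paired data, it may help to see the section's R commands and the by-hand difference calculations side by side. Here is a minimal sketch using the weight-loss data from Example $2$; the vector names after and before are just for illustration.

after  <- c(143, 151, 156, 161, 152, 154)   # weights after the program (Table 1)
before <- c(165, 172, 181, 185, 168, 175)   # weights before the program
d <- after - before                         # differences, d = x1 - x2
mean(d)                                     # d-bar = -21.5
sd(d)                                       # s_d, about 3.15
t.test(after, before, paired = TRUE, alternative = "less")   # paired t-test of H_A: mu_d < 0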
This section will look at how to analyze when two samples are collected that are independent. As with all other hypothesis tests and confidence intervals, the process is the same though the formulas and assumptions are different. The only difference with the independent t-test, as opposed to the other tests that have been done, is that there are actually two different formulas to use depending on if a particular assumption is met or not. Hypothesis Test for Independent t-Test (2-Sample t-Test) 1. State the random variables and the parameters in words. $x_{1}$ = random variable 1 $x_{2}$ = random variable 2 $\mu_{1}$ = mean of random variable 1 $\mu_{2}$ = mean of random variable 2 2. State the null and alternative hypotheses and the level of significance The normal hypotheses would be $\begin{array}{ll}{H_{o} : \mu_{1}=\mu_{2}} & {\text { or } \quad H_{o} : \mu_{1}-\mu_{2}=0} \ {H_{A} : \mu_{1}<\mu_{2}} & \quad\quad\: {H_{A} : \mu_{1}-\mu_{2}<0} \ {H_{A} : \mu_{1}>\mu_{2}} & \quad\quad\: {H_{A} : \mu_{1}-\mu_{2}>0} \ {H_{A} : \mu_{1} \neq \mu_{2}} & \quad\quad\: {H_{A} : \mu_{1}-\mu_{2} \neq 0}\end{array}$ Also, state your $\alpha$ level here. 3. State and check the assumptions for the hypothesis test 1. A random sample of size $n_{1}$ is taken from population 1. A random sample of size $n_{2}$ is taken from population 2. Note The samples do not need to be the same size, but the test is more robust if they are. 2. The two samples are independent. 3. Population 1 is normally distributed. Population 2 is normally distributed. Just as before, the t-test is fairly robust to the assumption if the sample size is large. This means that if this assumption isn’t met, but your sample sizes are quite large (over 30), then the results of the t-test are valid. 4. The population variances are unknown and not assumed to be equal. The old assumption is that the variances are equal. However, this assumption is no longer one that most statisticians use. This is because it isn’t really realistic to assume that the variances are equal. So we will just take the variances to be unknown and not assumed to be equal, and this assumption will not be checked. 4. Find the sample statistic, test statistic, and p-value Sample Statistic: Calculate $\overline{x}_{1}, \overline{x}_{2}, s_{1}, s_{2}, n_{1}, n_{2}$ Test Statistic: Since the assumption that $\sigma_{1}^{2}=\sigma_{2}^{2}$ isn’t being satisfied, then $t=\dfrac{\left(\overline{x}_{1}-\overline{x}_{2}\right)-\left(\mu_{1}-\mu_{2}\right)}{\sqrt{\dfrac{s_{1}^{2}}{n_{1}}+\dfrac{s_{2}^{2}}{n_{2}}}}$ Usually $\mu_{1}-\mu_{2}=0$, since $H_{o} : \mu_{1}-\mu_{2}=0$ Degrees of freedom: (the Welch–Satterthwaite equation) $d f=\dfrac{(A+B)^{2}}{\dfrac{A^{2}}{n_{1}-1}+\dfrac{B^{2}}{n_{2}-1}}$ where $A=\dfrac{s_{1}^{2}}{n_{1}} \text { and } B=\dfrac{s_{2}^{2}}{n_{2}}$ p-value: Using the TI-83/84: tcdf(lower limit, upper limit, df) Note If $H_{A} : \mu_{1}-\mu_{2}<0$, then lower limit is $-1 E 99$ and upper limit is your test statistic. If $H_{A} : \mu_{1}-\mu_{2}>0$, then lower limit is your test statistic and the upper limit is $1 E 99$. If $H_{A} : \mu_{1}-\mu_{2} \neq 0$, then find the p-value for $H_{A} : \mu_{1}-\mu_{2}<0$, and multiply by 2. Using R: pt(t, df) Note If $H_{A} : \mu_{1}-\mu_{2}<0$, then use pt(t, df). If $H_{A} : \mu_{1}-\mu_{2}>0$, then use 1 - pt(t, df). If $H_{A} : \mu_{1}-\mu_{2} \neq 0$, then find the p-value for $H_{A} : \mu_{1}-\mu_{2}<0$, and multiply by 2. 5.
Conclusion This is where you write reject $H_{o}$ or fail to reject $H_{o}$. The rule is: if the p-value < $\alpha$, then reject $H_{o}$. If the p-value $\geq \alpha$, then fail to reject $H_{o}$. 6. Interpretation This is where you interpret in real world terms the conclusion to the test. The conclusion for a hypothesis test is that you either have enough evidence to show $H_{A}$ is true, or you do not have enough evidence to show $H_{A}$ is true. Confidence Interval for the Difference in Means from Two Independent Samples (2 Samp T-Int) The confidence interval for the difference in means has the same random variables and means and the same assumptions as the hypothesis test for independent samples. If you have already completed the hypothesis test, then you do not need to state them again. If you haven’t completed the hypothesis test, then state the random variables and means and state and check the assumptions before completing the confidence interval step. 1. Find the sample statistic and confidence interval Sample Statistic: Calculate $\overline{x}_{1}, \overline{x}_{2}, s_{1}, s_{2}, n_{1}, n_{2}$ Confidence Interval: The confidence interval estimate of the difference $\mu_{1}-\mu_{2}$ is found as follows. Since the assumption that $\sigma_{1}^{2}=\sigma_{2}^{2}$ isn’t being satisfied, then $\left(\overline{x}_{1}-\overline{x}_{2}\right)-E<\mu_{1}-\mu_{2}<\left(\overline{x}_{1}-\overline{x}_{2}\right)+E$ where $E=t_{c} \sqrt{\dfrac{s_{1}^{2}}{n_{1}}+\dfrac{s_{2}^{2}}{n_{2}}}$ where $t_{c}$ is the critical value with degrees of freedom: Degrees of freedom: (the Welch–Satterthwaite equation) $d f=\dfrac{(A+B)^{2}}{\dfrac{A^{2}}{n_{1}-1}+\dfrac{B^{2}}{n_{2}-1}}$ where $A=\dfrac{s_{1}^{2}}{n_{1}} \text { and } B=\dfrac{s_{2}^{2}}{n_{2}}$ 2. Statistical Interpretation: In general this looks like, “there is a C% chance that $\left(\overline{x}_{1}-\overline{x}_{2}\right)-E<\mu_{1}-\mu_{2}<\left(\overline{x}_{1}-\overline{x}_{2}\right)+E$ contains the true mean difference.” 3. Real World Interpretation: This is where you state what interval contains the true difference in means, though often you state how much more (or less) the first mean is from the second mean. The critical value is a value from the Student’s t-distribution. Since a confidence interval is found by adding and subtracting a margin of error amount from the difference in sample means, and the interval has a probability of containing the true difference in means, then you can think of this as the statement $P\left(\left(\overline{x}_{1}-\overline{x}_{2}\right)-E<\mu_{1}-\mu_{2}<\left(\overline{x}_{1}-\overline{x}_{2}\right)+E\right)=C$. To find the critical value you use table A.2 in the Appendix. How to check the assumptions of two sample t-test and confidence interval: In order for the t-test or confidence interval to be valid, the assumptions of the test must be true. So whenever you run a t-test or confidence interval, you must make sure the assumptions are true. So you need to check them. Here is how you do this: 1. For the random sample assumption, describe how you took the two samples. Make sure your sampling technique is random for both samples. 2. For the independent assumption, describe how they are independent samples. 3. For the assumption about each population being normally distributed, remember the process of assessing normality from chapter 6. Make sure you assess each sample separately. 4. You do not need to check the equal variance assumption since it is not being assumed.
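Before the examples, it may help to see the Welch formulas above written as code. Here is a minimal R sketch (the helper name welch_t is just illustrative) that computes the test statistic and the Welch–Satterthwaite degrees of freedom from summary statistics; the numbers in the last line are the sample statistics that appear in Example $1$ below.

# Welch t statistic and Welch-Satterthwaite degrees of freedom from summary statistics
welch_t <- function(xbar1, xbar2, s1, s2, n1, n2) {
  A <- s1^2 / n1
  B <- s2^2 / n2
  t <- (xbar1 - xbar2) / sqrt(A + B)
  df <- (A + B)^2 / (A^2 / (n1 - 1) + B^2 / (n2 - 1))
  c(t = t, df = df)
}
welch_t(252.32, 193.13, 47.0642, 22.3000, 28, 30)   # t about 6.05, df about 37.9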
Example $1$ hypothesis test for two means The cholesterol level of patients who had heart attacks was measured two days after the heart attack. The researchers want to see if patients who have heart attacks have higher cholesterol levels over healthy people, so they also measured the cholesterol level of healthy adults who show no signs of heart disease. The data is in Table $1$ ("Cholesterol levels after," 2013). Do the data show that people who have had heart attacks have higher cholesterol levels over patients that have not had heart attacks? Test at the 1% level. Cholesterol Level of Heart Attack Patients Cholesterol Level of Healthy Individual 270 196 236 232 210 200 142 242 280 206 272 178 160 184 220 198 226 160 242 182 186 182 266 198 206 182 318 238 294 198 282 188 234 166 224 204 276 182 282 178 360 212 310 164 280 230 278 186 288 162 288 182 244 218 236 170 200 176 Table $1$: Cholesterol Levels in mg/dL 1. State the random variables and the parameters in words. 2. State the null and alternative hypotheses and the level of significance. 3. State and check the assumptions for the hypothesis test. 4. Find the sample statistic, test statistic, and p-value. 5. Conclusion 6. Interpretation Solution 1. $x_{1}$ = Cholesterol level of patients who had a heart attack $x_{2}$ = Cholesterol level of healthy individuals $\mu_{1}$ = mean cholesterol level of patients who had a heart attack $\mu_{2}$ = mean cholesterol level of healthy individuals 2. The normal hypotheses would be $\begin{array}{ll}{H_{o} : \mu_{1}=\mu_{2}} & {\text { or } \quad H_{o} : \mu_{1}-\mu_{2}=0} \ {H_{A} : \mu_{1}>\mu_{2}} & \quad\quad\:{H_{A} : \mu_{1}-\mu_{2}>0} \ {\alpha=0.01}\end{array}$ 3. 1. A random sample of 28 cholesterol levels of patients who had a heart attack is taken. A random sample of 30 cholesterol levels of healthy individuals is taken. The problem does not state if either sample was randomly selected. So this assumption may not be valid. 2. The two samples are independent. This is because either they were dealing with patients who had heart attacks or healthy individuals. 3. Population of all cholesterol levels of patients who had a heart attack is normally distributed. Population of all cholesterol levels of healthy individuals is normally distributed. Patients who had heart attacks: This looks somewhat bell shaped. There are no outliers This looks somewhat linear. So, the population of all cholesterol levels of patients who had heart attacks is probably somewhat normally distributed. Healthy individuals: This does not look bell shaped. There are no outliers. This doesn't look linear. So, the population of all cholesterol levels of healthy individuals is probably not normally distributed. This assumption is not valid for the second sample. Since the sample is fairly large, and the t-test is robust, it may not be an issue. However, just realize that the conclusions of the test may not be valid. 4. 
Sample Statistic: $\overline{x}_{1} \approx 252.32, \overline{x}_{2} \approx 193.13, s_{1} \approx 47.0642, s_{2} \approx 22.3000, n_{1}=28, n_{2}=30$ Test Statistic: $t=\dfrac{\left(\overline{x}_{1}-\overline{x}_{2}\right)-\left(\mu_{1}-\mu_{2}\right)}{\sqrt{\dfrac{s_{1}^{2}}{n_{1}}+\dfrac{s_{2}^{2}}{n_{2}}}}$ $=\dfrac{(252.32-193.13)-0}{\sqrt{\dfrac{47.0642^{2}}{28}+\dfrac{22.3000^{2}}{30}}}$ $\approx 6.051$ Degrees of freedom: (the Welch-Satterthwaite equation) $A=\dfrac{s_{1}^{2}}{n_{1}}=\dfrac{47.0642^{2}}{28} \approx 79.1085$ $B=\dfrac{s_{2}^{2}}{n_{2}}=\dfrac{22.3000^{2}}{30} \approx 16.5763$ $d f=\dfrac{(A+B)^{2}}{\dfrac{A^{2}}{n_{1}-1}+\dfrac{B^{2}}{n_{2}-1}}=\dfrac{(79.1085+16.5763)^{2}}{\dfrac{79.1085^{2}}{28-1}+\dfrac{16.5763^{2}}{30-1}} \approx 37.9493$ p-value: Using TI-83/84: $\operatorname{tcdf}(6.051,1 E 99,37.9493) \approx 2.44 \times 10^{-7}$ Using R: $1-\mathrm{pt}(6.051,37.9493) \approx 2.44 \times 10^{-7}$ Using Technology: Using the TI-83/84: Note The Pooled question on the calculator is for whether you are assuming the variances are equal. Since this assumption is not being made, then the answer to this question is no. Pooled means that you assume the variances are equal and can pool the sample variances together. Using R: command in general: t.test(variable1, variable2, alternative = "less" or "greater") For this example, the R command is: t.test(heartattack, healthy, alternative="greater") Welch Two Sample t-test data: heartattack and healthy t = 6.1452, df = 37.675, p-value = 1.86e-07 alternative hypothesis: true difference in means is greater than 0 95 percent confidence interval: 44.1124 Inf sample estimates: mean of x mean of y 253.9286 193.1333 The test statistic is t = 6.1452. The p-value is $1.86 \times 10^{-7}$ 5. Reject $H_{o}$ since the p-value < $\alpha$. 6. This is enough evidence to show that patients who have had heart attacks have higher cholesterol level on average from healthy individuals. (Though do realize that some of assumptions are not valid, so this interpretation may be invalid.) Example $2$ confidence interval for $\mu_{1}-\mu_{2}$ The cholesterol level of patients who had heart attacks was measured two days after the heart attack. The researchers want to see if patients who have heart attacks have higher cholesterol levels over healthy people, so they also measured the cholesterol level of healthy adults who show no signs of heart disease. The data is in Example $1$ ("Cholesterol levels after," 2013). Find a 99% confidence interval for the mean difference in cholesterol levels between heart attack patients and healthy individuals. 1. State the random variables and the parameters in words. 2. State and check the assumptions for the hypothesis test. 3. Find the sample statistic and confidence interval. 4. Statistical Interpretation 5. Real World Interpretation Solution 1. These were stated in Example $1$, but are reproduced here for reference. $x_{1}$ = Cholesterol level of patients who had a heart attack $x_{2}$ = Cholesterol level of healthy individuals $\mu_{1}$ = mean cholesterol level of patients who had a heart attack $\mu_{2}$ = mean cholesterol level of healthy individuals 2. The assumptions were stated and checked in Example $1$. 3. 
Sample Statistic: $\overline{x}_{1} \approx 252.32, \overline{x}_{2} \approx 193.13, s_{1} \approx 47.0642, s_{2} \approx 22.3000, n_{1}=28, n_{2}=30$ Confidence Interval: Degrees of freedom (the Welch–Satterthwaite equation): $A=\dfrac{s_{1}^{2}}{n_{1}}=\dfrac{47.0642^{2}}{28} \approx 79.1085$ $B=\dfrac{s_{2}^{2}}{n_{2}}=\dfrac{22.3000^{2}}{30} \approx 16.5763$ $d f=\dfrac{(A+B)^{2}}{\dfrac{A^{2}}{n_{1}-1}+\dfrac{B^{2}}{n_{2}-1}}=\dfrac{(79.1085+16.5763)^{2}}{\dfrac{79.1085^{2}}{28-1}+\dfrac{16.5763^{2}}{30-1}} \approx 37.9493$ Since this df is not in the table, round to the nearest whole number. $t_{c}=2.712$ $E=t_{c} \sqrt{\dfrac{s_{1}^{2}}{n_{1}}+\dfrac{s_{2}^{2}}{n_{2}}}=2.712 \sqrt{\dfrac{47.0642^{2}}{28}+\dfrac{22.3000^{2}}{30}} \approx 26.53$ $\left(\overline{x}_{1}-\overline{x}_{2}\right)-E<\mu_{1}-\mu_{2}<\left(\overline{x}_{1}-\overline{x}_{2}\right)+E$ $(252.32-193.13)-26.53<\mu_{1}-\mu_{2}<(252.32-193.13)+26.53$ $32.66 \mathrm{mg} / \mathrm{dL}<\mu_{1}-\mu_{2}<85.72 \mathrm{mg} / \mathrm{dL}$ Using Technology: Using TI-83/84: Note The Pooled question on the calculator is for whether you are assuming the variances are equal. Since this assumption is not being made, then the answer to this question is no. Pooled means that you assume the variances are equal and can pool the sample variances together. Using R: the command is t.test(variable1, variable2, conf.level=C), where C is in decimal form. For this example, the command is t.test(heartattack, healthy, conf.level=.99) Output: Welch Two Sample t-test data: heartattack and healthy t = 6.1452, df = 37.675, p-value = 3.721e-07 alternative hypothesis: true difference in means is not equal to 0 99 percent confidence interval: 33.95750 87.63298 sample estimates: mean of x mean of y 253.9286 193.1333 The confidence interval is $33.96<\mu_{1}-\mu_{2}<87.63$ 4. There is a 99% chance that $33.96<\mu_{1}-\mu_{2}<87.63$ contains the true difference in means. 5. The mean cholesterol level for patients who had heart attacks is anywhere from 32.66 mg/dL to 85.72 mg/dL more than the mean cholesterol level for healthy patients. (Though do realize that many of the assumptions are not valid, so this interpretation may be invalid.) If you do assume that the variances are equal, that is $\sigma_{1}^{2}=\sigma_{2}^{2}$, then the test statistic is: $t=\dfrac{\left(\overline{x}_{1}-\overline{x}_{2}\right)-\left(\mu_{1}-\mu_{2}\right)}{s_{p} \sqrt{\dfrac{1}{n_{1}}+\dfrac{1}{n_{2}}}}$ where $s_{p}=\sqrt{\dfrac{\left(n_{1}-1\right) s_{1}^{2}+\left(n_{2}-1\right) s_{2}^{2}}{\left(n_{1}-1\right)+\left(n_{2}-1\right)}}$ is the pooled standard deviation, and the degrees of freedom are df = $n_{1}+n_{2}-2$. The confidence interval, if you do assume that $\sigma_{1}^{2}=\sigma_{2}^{2}$, is $\left(\overline{x}_{1}-\overline{x}_{2}\right)-E<\mu_{1}-\mu_{2}<\left(\overline{x}_{1}-\overline{x}_{2}\right)+E$ where $E=t_{c} s_{p} \sqrt{\dfrac{1}{n_{1}}+\dfrac{1}{n_{2}}}$ and $s_{p}=\sqrt{\dfrac{\left(n_{1}-1\right) s_{1}^{2}+\left(n_{2}-1\right) s_{2}^{2}}{\left(n_{1}-1\right)+\left(n_{2}-1\right)}}$ Degrees of Freedom: df = $n_{1}+n_{2}-2$ $t_{c}$ is the critical value where C = 1 - $\alpha$ To show that the variances are equal, just show that the ratio of your sample variances is not unusual (probability is greater than 0.05). In other words, make sure the following is true: $P\left(F>s_{1}^{2} / s_{2}^{2}\right) \geq 0.05$ (or $P\left(F>s_{2}^{2} / s_{1}^{2}\right) \geq 0.05$ so that the larger variance is in the numerator).
This probability is from an F-distribution. To find the probability on the TI-83/84 calculator use $\operatorname{Fcdf}\left(s_{1}^{2} / s_{2}^{2}, 1 E 99, n_{1}-1, n_{2}-1\right)$. To find the probability on R, use $1-\operatorname{pf}\left(s_{1}^{2} / s_{2}^{2}, n_{1}-1, n_{2}-1\right)$. Note The F-distribution is very sensitive to the normal distribution. A better test for equal variances is Levene's test, though it is more complicated. It is best to do Levene’s test when using statistical software (such as SPSS or Minitab) to perform the two-sample independent t-test. Example $3$ hypothesis test for two means The amount of sodium in beef hotdogs was measured. In addition, the amount of sodium in poultry hotdogs was also measured ("SOCR 012708 id," 2013). The data is in Example $2$. Is there enough evidence to show that beef has less sodium on average than poultry hotdogs? Use a 5% level of significance. Sodium in Beef Hotdogs Sodium in Poultry Hotdogs 495 430 477 375 425 396 322 383 482 387 587 542 370 359 322 357 479 528 375 513 330 426 300 513 386 358 401 581 645 588 440 522 317 545 319 430 298 375 253 396 Table $2$: Hotdog Data 1. State the random variables and the parameters in words. 2. State the null and alternative hypotheses and the level of significance. 3. State and check the assumptions for the hypothesis test. 4. Find the sample statistic, test statistic, and p-value. 5. Conclusion 6. Interpretation Solution 1. $x_{1}$ = sodium level in beef hotdogs $x_{2}$ = sodium level in poultry hotdogs $\mu_{1}$ = mean sodium level in beef hotdogs $\mu_{2}$ = mean sodium level in poultry hotdogs 2. The normal hypotheses would be $\begin{array}{ll}{H_{o} : \mu_{1}=\mu_{2}} & {\text { or } \quad H_{o} : \mu_{1}-\mu_{2}=0} \ {H_{A} : \mu_{1}<\mu_{2}} & \quad\quad\: {H_{A} : \mu_{1}-\mu_{2}<0} \ {\alpha=0.05}\end{array}$ 3. 1. A random sample of 20 sodium levels in beef hotdogs is taken. A random sample of 20 sodium levels in poultry hotdogs. The problem does not state if either sample was randomly selected. So this assumption may not be valid. 2. The two samples are independent since these are different types of hotdogs. 3. Population of all sodium levels in beef hotdogs is normally distributed. Population of all sodium levels in poultry hotdogs is normally distributed. Beef Hotdogs: This looks somewhat bell shaped. There are no outliers. This looks somewhat linear. So, the population of all sodium levels in beef hotdogs may be normally distributed. Poultry Hotdogs: This does not look bell shaped. There are no outliers. This does not look linear. So, the population of all sodium levels in poultry hotdogs is probably not normally distributed. This assumption is not valid. Since the samples are fairly large, and the t-test is robust, it may not be a large issue. However, just realize that the conclusions of the test may not be valid. d. The population variances are equal, i.e. $\sigma_{1}^{2}=\sigma_{2}^{2}$. $\begin{array}{l}{s_{1} \approx 102.4347} \ {s_{2} \approx 81.1786} \ {\dfrac{s_{1}^{2}}{s_{2}^{2}}=\dfrac{102.4347^{2}}{81.1786^{2}} \approx 1.592}\end{array}$ Using TI-83/84: Fcdf $(1.592,1 E 99,19,19) \approx 0.1597 \geq 0.05$ Using R: 1 - pf $(1.592,19,19) \approx 0.1597 \geq 0.05$ So you can say that these variances are equal. 4. 
Find the sample statistic, test statistic, and p-value Sample Statistic: $\overline{x}_{1}=401.15, \overline{x}_{2}=450.2, s_{1} \approx 102.4347, s_{2} \approx 81.1786, n_{1}=20, n_{2}=20$ Test Statistic: The assumption $\sigma_{1}^{2}=\sigma_{2}^{2}$ has been met, so $s_{p}=\sqrt{\dfrac{\left(n_{1}-1\right) s_{1}^{2}+\left(n_{2}-1\right) s_{2}^{2}}{\left(n_{1}-1\right)+\left(n_{2}-1\right)}}$ $=\sqrt{\dfrac{102.4347^{2} * 19+81.1786^{2} * 19}{(20-1)+(20-1)}}$ $\approx 92.4198$ Though you should try to do the calculations in the problem so you don’t create round off error. $t=\dfrac{\left(\overline{x}_{1}-\overline{x}_{2}\right)-\left(\mu_{1}-\mu_{2}\right)}{s_{P} \sqrt{\dfrac{1}{n_{1}}+\dfrac{1}{n_{2}}}}$ $=\dfrac{(401.15-450.2)-0}{92.4198 \sqrt{\dfrac{1}{20}+\dfrac{1}{20}}}$ $\approx-1.678$ df = 20 + 20 - 2 = 38 p-value: Using TI-83/84: tcdf $(-1 E 99,-1.678,38) \approx 0.0508$ Using R: pt $(-1.678,38) \approx 0.0508$ Using technology to find the t and p-value: Using TI-83/84: Note The Pooled question on the calculator is for whether you are using the pooled standard deviation or not. In this example, the pooled standard deviation was used since you are assuming the variances are equal. That is why the answer to the question is Yes. Using R: the command is t.test(variable1, variable2, alternative="less" or "greater") For this example, the command is t.test(beef, poultry, alternative="less", equalvar=TRUE) Welch Two Sample t-test data: beef and poultry t = -1.6783, df = 36.115, p-value = 0.05096 alternative hypothesis: true difference in means is less than 0 95 percent confidence interval: -Inf 0.2875363 sample estimates: mean of x mean of y 401.15 450.20 The t = -1.6783 and the p-value = 0.05096. 5. Fail to reject $H_{o}$ since the p-value > $\alpha$. 6. This is not enough evidence to show that beef hotdogs have less sodium than poultry hotdogs. (Though do realize that many of assumptions are not valid, so this interpretation may be invalid.) Example $4$ confidence interval for $\mu_{1}-\mu_{2}$ The amount of sodium in beef hotdogs was measured. In addition, the amount of sodium in poultry hotdogs was also measured ("SOCR 012708 id," 2013). The data is in Example $2$. Find a 95% confidence interval for the mean difference in sodium levels between beef and poultry hotdogs. 1. State the random variables and the parameters in words. 2. State and check the assumptions for the hypothesis test. 3. Find the sample statistic and confidence interval. 4. Statistical Interpretation 5. Real World Interpretation Solution 1. These were stated in Example $1$, but are reproduced here for reference. $x_{1}$ = sodium level in beef hotdogs $x_{2}$ = sodium level in poultry hotdogs $\mu_{1}$ = mean sodium level in beef hotdogs $\mu_{2}$ = mean sodium level in poultry hotdogs 2. The assumptions were stated and checked in Example $3$. 3. Sample Statistic: $\overline{x}_{1}=401.15, \overline{x}_{2}=450.2, s_{1} \approx 102.4347, s_{2} \approx 81.1786, n_{1}=20, n_{2}=20$ Confidence Interval: The confidence interval estimate of the difference $\mu_{1}-\mu_{2}$ is The assumption $\sigma_{1}^{2}=\sigma_{2}^{2}$ has been met, so $s_{p}=\sqrt{\dfrac{\left(n_{1}-1\right) s_{1}^{2}+\left(n_{2}-1\right) s_{2}^{2}}{\left(n_{1}-1\right)+\left(n_{2}-1\right)}}$ $=\sqrt{\dfrac{102.4347^{2} * 19+81.1786^{2} * 19}{(20-1)+(20-1)}}$ $\approx 92.4198$ Though you should try to do the calculations in the formula for E so you don’t create round off error. 
df = $=n_{1}+n_{2}-2=20+20-2=38$ $t_{c} = 2.024$ $E=t_{c} s_{p} \sqrt{\dfrac{1}{n_{1}}+\dfrac{1}{n_{2}}}$ $=2.024(92.4198) \sqrt{\dfrac{1}{20}+\dfrac{1}{20}}$ $\approx 59.15$ $\left(\overline{x}_{1}-\overline{x}_{2}\right)-E<\mu_{1}-\mu_{2}<\left(\overline{x}_{1}-\overline{x}_{2}\right)+E$ $(401.15-450.2)-59.15<\mu_{1}-\mu_{2}<(401.15-450.2)+59.15$ $-108.20 \mathrm{g}<\mu_{1}-\mu_{2}<10.10 \mathrm{g}$ Using technology: Using the TI-83/84: Note The Pooled question on the calculator is for whether you are using the pooled standard deviation or not. In this example, the pooled standard deviation was used since you are assuming the variances are equal. That is why the answer to the question is Yes. Using R: the command is t.test(variable1, variable2, equalvar=TRUE, conf.level=C), where C is in decimal form. For this example, the command is t.test(beef, poultry, conf.level=.95, equalvar=TRUE) Welch Two Sample t-test data: beef and poultry t = -1.6783, df = 36.115, p-value = 0.1019 alternative hypothesis: true difference in means is not equal to 0 95 percent confidence interval: -108.31592 10.21592 sample estimates: mean of x mean of y 401.15 450.20 The confidence interval is $-108.32<\mu_{1}-\mu_{2}<10.22$. 4. There is a 95% chance that $-108.20 \mathrm{g}<\mu_{1}-\mu_{2}<10.10 \mathrm{g}$ contains the true difference in means. 5. The mean sodium level of beef hotdogs is anywhere from 108.20 g less than the mean sodium level of poultry hotdogs to 10.10 g more. (The negative sign on the lower limit implies that the first mean is less than the second mean. The positive sign on the upper limit implies that the first mean is greater than the second mean.) Realize that many of assumptions are not valid in this example, so the interpretation may be invalid. Homework Exercise $1$ In each problem show all steps of the hypothesis test or confidence interval. If some of the assumptions are not met, note that the results of the test or interval may not be correct and then continue the process of the hypothesis test or confidence interval. Unless directed by your instructor, do not assume the variances are equal (except in problems 11 through 16). 1. The income of males in each state of the United States, including the District of Columbia and Puerto Rico, are given in Example $3$, and the income of females is given in table #9.3.4 ("Median income of," 2013). Is there enough evidence to show that the mean income of males is more than of females? Test at the 1% level. $42,951$52,379 $42,544$37,488 $49,281$50,987 $60,705$50,411 $66,760$40,951 $43,902$45,494 $41,528$50,746 $45,183$43,624 $43,993$41,612 $46,313$43,944 $56,708$60,264 $50,053$50,580 $40,202$43,146 $41,635$42,182 $41,803$53,033 $60,568$41,037 $50,388$41,950 $44,660$46,176 $41,420$45,976 $47,956$22,529 $48,842$41,464 $40,285$41,309 $43,160$47,573 $44,057$52,805 $53,046$42,125 $46,214$51,630 Table $3$: Data of Income for Males $31,862$40,550 $36,048$30,752 $41,817$40,236 $47,476$40,500 $60,332$33,823 $35,438$37,242 $31,238$39,150 $34,023$33,745 $33,269$32,684 $31,844$34,599 $48,748$46,185 $36,931$40,416 $29,548$33,865 $31,067$33,424 $35,484$41,021 $47,155$32,316 $42,113$33,459 $32,462$35,746 $31,274$36,027 $37,089$22,117 $41,412$31,330 $31,329$33,184 $35,301$32,843 $38,177$40,969 $40,993$29,688 $35,890$34,381 Table $4$: Data of Income for Females 2. 
The income of males in each state of the United States, including the District of Columbia and Puerto Rico, are given in Example $3$, and the income of females is given in Example $4$ ("Median income of," 2013). Compute a 99% confidence interval for the difference in incomes between males and females in the U.S. 3. A study was conducted that measured the total brain volume (TBV) (in $m m^{3}$) of patients that had schizophrenia and patients that are considered normal. Example $5$ contains the TBV of the normal patients and Example $6$ contains the TBV of schizophrenia patients ("SOCR data oct2009," 2013). Is there enough evidence to show that the patients with schizophrenia have less TBV on average than a patient that is considered normal? Test at the 10% level. 1663407 1583940 1299470 1535137 1431890 1578698 1453510 1650348 1288971 1366346 1326402 1503005 1474790 1317156 1441045 1463498 1650207 1523045 1441636 1432033 1420416 1480171 1360810 1410213 1574808 1502702 1203344 1319737 1688990 1292641 1512571 1635918 Table $5$: Total Brain Volume (in $\mathrm{mm}^{3}$) of Normal Patients 1331777 1487886 1066075 1297327 1499983 1861991 1368378 1476891 1443775 1337827 1658258 1588132 1690182 1569413 1177002 1387893 1483763 1688950 1563593 1317885 1420249 1363859 1238979 1286638 1325525 1588573 1476254 1648209 1354054 1354649 1636119 Table $6$: Total Brain Volume (in $\mathrm{mm}^{3}$) of Schizophrenia Patients 4. A study was conducted that measured the total brain volume (TBV) (in $m m^{3}$) of patients that had schizophrenia and patients that are considered normal. Example $5$ contains the TBV of the normal patients and Example $6$ contains the TBV of schizophrenia patients ("SOCR data oct2009," 2013). Compute a 90% confidence interval for the difference in TBV of normal patients and patients with Schizophrenia. 5. The length of New Zealand (NZ) rivers that travel to the Pacific Ocean are given in Example $7$ and the lengths of NZ rivers that travel to the Tasman Sea are given in Example $8$ ("Length of NZ," 2013). Do the data provide enough evidence to show on average that the rivers that travel to the Pacific Ocean are longer than the rivers that travel to the Tasman Sea? Use a 5% level of significance. 209 48 169 138 64 97 161 95 145 90 121 80 56 64 209 64 72 288 322 Table $7$: Lengths (in km) of NZ Rivers that Flow into the Pacific Ocean 76 64 68 64 37 32 32 51 56 40 64 56 80 121 177 56 80 35 72 72 108 48 Table $8$: Lengths (in km) of NZ Rivers that Flow into the Tasman Sea 6. The length of New Zealand (NZ) rivers that travel to the Pacific Ocean are given in Example $7$ and the lengths of NZ rivers that travel to the Tasman Sea are given in Example $8$ ("Length of NZ," 2013). Estimate the difference in mean lengths of rivers between rivers in NZ that travel to the Pacific Ocean and ones that travel to the Tasman Sea. Use a 95% confidence level. 7. The number of cell phones per 100 residents in countries in Europe is given in Example $9$ for the year 2010. The number of cell phones per 100 residents in countries of the Americas is given in Example $10$ also for the year 2010 ("Population reference bureau," 2013). Is there enough evidence to show that the mean number of cell phones in countries of Europe is more than in countries of the Americas? Test at the 1% level. 
100 76 100 130 75 84 112 84 138 133 118 134 126 188 129 93 64 128 124 122 109 121 127 152 96 63 99 95 151 147 123 95 67 67 118 125 110 115 140 115 141 77 98 102 102 112 118 118 54 23 121 126 47 Table $9$: Number of Cell Phones per 100 Residents in Europe 158 117 106 159 53 50 78 66 88 92 42 3 150 72 86 113 50 58 70 109 37 32 85 101 75 69 55 115 95 73 86 157 100 119 81 113 87 105 96 Table $10$: Number of Cell Phones per 100 Residents in the America 8. The number of cell phones per 100 residents in countries in Europe is given in Example $9$ for the year 2010. The number of cell phones per 100 residents in countries of the Americas is given in Example $10$ also for the year 2010 ("Population reference bureau," 2013). Find the 98% confidence interval for the difference in mean number of cell phones per 100 residents in Europe and the Americas. 9. A vitamin K shot is given to infants soon after birth. Nurses at Northbay Healthcare were involved in a study to see if how they handle the infants could reduce the pain the infants feel ("SOCR data nips," 2013). One of the measurements taken was how long, in seconds, the infant cried after being given the shot. A random sample was taken from the group that was given the shot using conventional methods (Example $11$), and a random sample was taken from the group that was given the shot where the mother held the infant prior to and during the shot (Example $12$). Is there enough evidence to show that infants cried less on average when they are held by their mothers than if held using conventional methods? Test at the 5% level. 63 0 2 46 33 33 29 23 11 12 48 15 33 14 51 37 24 70 63 0 73 39 54 52 39 34 30 55 58 18 Table $11$: Crying Time of Infants Given Shots Using Conventional Methods 0 32 20 23 14 19 60 59 64 64 72 50 44 14 10 58 19 41 17 5 36 73 19 46 9 43 73 27 25 18 Table $12$: Crying Time of Infants Given Shots Using New Methods 10. A vitamin K shot is given to infants soon after birth. Nurses at Northbay Healthcare were involved in a study to see if how they handle the infants could reduce the pain the infants feel ("SOCR data nips," 2013). One of the measurements taken was how long, in seconds, the infant cried after being given the shot. A random sample was taken from the group that was given the shot using conventional methods (Example $11$), and a random sample was taken from the group that was given the shot where the mother held the infant prior to and during the shot (Example $12$). Calculate a 95% confidence interval for the mean difference in mean crying time after being given a vitamin K shot between infants held using conventional methods and infants held by their mothers. 11. Redo problem 1 testing for the assumption of equal variances and then use the formula that utilizes the assumption of equal variances (follow the procedure in Example $3$). 12. Redo problem 2 testing for the assumption of equal variances and then use the formula that utilizes the assumption of equal variances (follow the procedure in Example $3$). 13. Redo problem 7 testing for the assumption of equal variances and then use the formula that utilizes the assumption of equal variances (follow the procedure in Example $3$). 14. Redo problem 8 testing for the assumption of equal variances and then use the formula that utilizes the assumption of equal variances (follow the procedure in Example $3$). 15. 
Redo problem 9 testing for the assumption of equal variances and then use the formula that utilizes the assumption of equal variances (follow the procedure in Example $3$). 16. Redo problem 10 testing for the assumption of equal variances and then use the formula that utilizes the assumption of equal variances (follow the procedure in Example $3$). Answer For all hypothesis tests, just the conclusion is given. For all confidence intervals, just the interval using technology is given. See solution for the entire answer. 1. Reject Ho 2. $\ 65443.80<\mu_{1}-\mu_{2}<\ 13340.80$ 3. Fail to reject Ho 4. $-51564.6 \mathrm{mm}^{3}<\mu_{1}-\mu_{2}<75656.6 \mathrm{mm}^{3}$ 5. Reject Ho 6. $23.2818 \mathrm{km}<\mu_{1}-\mu_{2}<103.67 \mathrm{km}$ 7. Reject Ho 8. $4.3641<\mu_{1}-\mu_{2}<37.5276$ 9. Fail to reject Ho 10. $-10.9726 \mathrm{s}<\mu_{1}-\mu_{2}<11.3059 \mathrm{s}$ 11. Reject Ho 12. $\ 6544.98<\mu_{1}-\mu_{2}<\ 13339.60$ 13. Reject Ho 14. $4.8267<\mu_{1}-\mu_{2}<37.0649$ 15. Fail to reject Ho 16. $-10.9713 \mathrm{s}<\mu_{1}-\mu_{2}<11.3047 \mathrm{s}$
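For problems 11 through 16, the workflow is to first check the equal variance assumption using the F ratio described above, and then use the pooled test or confidence interval. Here is a minimal R sketch of that workflow using the hotdog data from Example $3$, so the output can be compared with the results shown there. One caution on spelling: in base R's t.test() the pooled option is requested with the argument var.equal = TRUE.
# Sodium data from Example 3 (beef hotdogs, then poultry hotdogs)
beef    <- c(495, 477, 425, 322, 482, 587, 370, 322, 479, 375,
             330, 300, 386, 401, 645, 440, 317, 319, 298, 253)
poultry <- c(430, 375, 396, 383, 387, 542, 359, 357, 528, 513,
             426, 513, 358, 581, 588, 522, 545, 430, 375, 396)
# Step 1: check the equal variance assumption (larger variance on top here)
ratio <- var(beef) / var(poultry)
1 - pf(ratio, length(beef) - 1, length(poultry) - 1)   # compare to 0.05
# Step 2: pooled hypothesis test and pooled confidence interval
t.test(beef, poultry, alternative = "less", var.equal = TRUE)
t.test(beef, poultry, conf.level = 0.95, var.equal = TRUE)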
One of the most important concept that you need to understand is deciding which analysis you should conduct for a particular situation. To help you to figure out the analysis to conduct, there are a series of questions you should ask yourself. 1. Does the problem deal with mean or proportion? Sometimes the problem states explicitly the words mean or proportion, but other times you have to figure it out based on the information you are given. If you counted number of individuals that responded in the affirmative to a question, then you are dealing with proportion. If you measured something, then you are dealing with mean. 2. Does the problem have one or two samples? So look to see if one group was measured or if two groups were measured. If you have the data sets, then it is usually easy to figure out if there is one or two samples, then there is either one data set or two data sets. If you don’t have the data, then you need to decide if the problem describes collecting data from one group or from two groups. 3. If you have two samples, then you need to determine if the samples are independent or dependent. If the individuals are different for both samples, then most likely the samples are independent. If you can’t tell, then determine if a data value from the first sample influences the data value in the second sample. In other words, can you pair data values together so you can find the difference, and that difference has meaning. If the answer is yes, then the samples are paired. Otherwise, the samples are independent. 4. Does the situation involve a hypothesis test or a confidence interval? If the problem talks about "do the data show", "is there evidence of", "test to see", then you are doing a hypothesis test. If the problem talks about "find the value", "estimate the" or "find the interval", then you are doing a confidence interval. So if you have a situation that has two samples, independent samples, involving the mean, and is a hypothesis test, then you have a two-sample independent t-test. Now you look up the assumptions and the formula or technology process for doing this test. Every hypothesis test involves the same six steps, and you just have to use the correct assumptions and calculations. Every confidence interval has the same five steps, and again you just need to use the correct assumptions and calculations. So this is why it is so important to figure out what analysis you should conduct. Data Sources: AP exam scores. (2013, November 20). Retrieved from wiki.stat.ucla.edu/socr/index...08_APExamScore s Buy sushi grade fish online. (2013, November 20). Retrieved from http://www.catalinaop.com/ Center for Disease Control and Prevention, Prevalence of Autism Spectrum Disorders - Autism and Developmental Disabilities Monitoring Network. (2008). Autism and developmental disabilities monitoring network-2012. Retrieved from website: www.cdc.gov/ncbddd/autism/doc...nityReport.pdf Cholesterol levels after heart attack. (2013, September 25). Retrieved from http://www.statsci.org/data/general/cholest.html Flanagan, R., Rooney, C., & Griffiths, C. (2005). Fatal poisoning in childhood, england & wales 1968-2000. Forensic Science International, 148:121-129, Retrieved from http://www.cdc.gov/nchs/data/ice/fat...ning_child.pdf Friday the 13th datafile. (2013, November 25). Retrieved from lib.stat.cmu.edu/DASL/Datafil...aythe13th.html Gettler, L. T., McDade, T. W., Feranil, A. B., & Kuzawa, C. W. (2011). Longitudinal evidence that fatherhood decreases testosterone in human males. 
The Proceedings of the National Academy of Sciences, PNAS 2011, doi: 10.1073/pnas.1105403108 Length of NZ rivers. (2013, September 25). Retrieved from http://www.statsci.org/data/oz/nzrivers.html Lim, L. L. United Nations, International Labour Office. (2002). Female labour-force participation. Retrieved from website: www.un.org/esa/population/pub...ty/RevisedLIMp aper.PDF Median income of males. (2013, October 9). Retrieved from http://www.prb.org/DataFinder/Topic/...s.aspx?ind=137 Olson, K., & Hanson, J. (1997). Using reiki to manage pain: a preliminary report. Cancer Prev Control, 1(2), 108-13. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/9765732 Population reference bureau. (2013, October 8). Retrieved from http://www.prb.org/DataFinder/Topic/...gs.aspx?ind=25 Seafood online. (2013, November 20). Retrieved from http://www.allfreshseafood.com/ SOCR 012708 id data hotdogs. (2013, November 13). Retrieved from http://wiki.stat.ucla.edu/socr/index...D_Data_HotDogs SOCR data nips infantvitK shotdata. (2013, November 16). Retrieved from http://wiki.stat.ucla.edu/socr/index...tVitK_ShotData SOCR data Oct2009 id ni. (2013, November 16). Retrieved from http://wiki.stat.ucla.edu/socr/index..._Oct2009_ID_NI Statistics brain. (2013, November 30). Retrieved from http://www.statisticbrain.com/infidelity-statistics/ Student t-distribution. (2013, November 25). Retrieved from lib.stat.cmu.edu/DASL/Stories/student.html
The previous chapter looked at comparing populations to see if there is a difference between the two. That involved two random variables that are similar measures. This chapter will look at two random variables that are not similar measures, and see if there is a relationship between the two variables. To do this, you look at regression, which finds the linear relationship, and correlation, which measures the strength of a linear relationship. Note There are many other types of relationships besides linear that can be found for the data. This book will only explore linear, but realize that there are other relationships that can be used to describe data. 10: Regression and Correlation When comparing two different variables, two questions come to mind: “Is there a relationship between two variables?” and “How strong is that relationship?” These questions can be answered using regression and correlation. Regression answers whether there is a relationship (again this book will explore linear only) and correlation answers how strong the linear relationship is. To introduce both of these concepts, it is easier to look at a set of data. Example $1$ if there is a relationship Is there a relationship between the alcohol content and the number of calories in 12-ounce beer? To determine if there is one a random sample was taken of beer’s alcohol content and calories ("Calories in beer," 2011), and the data is in Example $1$. Brand Brewery Alcohol Content Calories in 12 oz Big Sky Scape Goat Pale Ale Big Sky Brewing 4.70% 163 Sierra Nevada Harvest Ale Sierra Nevada 6.70% 215 Steel Reserve MillerCoors 8.10% 222 O'Doul's Anheuser Busch 0.40% 70 Coors Light MillerCoors 4.15% 104 Genesee Cream Ale High Falls Brewing 5.10% 162 Sierra Nevada Summerfest Beer Sierra Nevada 5.00% 158 Michelob Beer Anheuser Busch 5.00% 155 Flying Dog Doggie Style Flying Dog Brewery 4.70% 158 Big Sky I.P.A. Big Sky Brewing 6.20% 195 Table $1$: Alcohol and Calorie Content in Beer Solution To aid in figuring out if there is a relationship, it helps to draw a scatter plot of the data. It is helpful to state the random variables, and since in an algebra class the variables are represented as x and y, those labels will be used here. It helps to state which variable is x and which is y. State random variables x = alcohol content in the beer y = calories in 12 ounce beer This scatter plot looks fairly linear. However, notice that there is one beer in the list that is actually considered a non-alcoholic beer. That value is probably an outlier since it is a non-alcoholic beer. The rest of the analysis will not include O’Doul’s. You cannot just remove data points, but in this case it makes more sense to, since all the other beers have a fairly large alcohol content. To find the equation for the linear relationship, the process of regression is used to find the line that best fits the data (sometimes called the best fitting line). The process is to draw the line through the data and then find the distances from a point to the line, which are called the residuals. The regression line is the line that makes the square of the residuals as small as possible, so the regression line is also sometimes called the least squares line. The regression line and the residuals are displayed in Figure $2$. 
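A scatter plot like the one described here can be produced in R. The following is a minimal sketch; the vector names alcohol and calories are the same names used with the R commands later in this section, and the data are the values from Table $1$ with the O'Doul's observation removed, as discussed above.
# Beer data from Table 1, with the O'Doul's outlier removed
alcohol  <- c(4.70, 6.70, 8.10, 4.15, 5.10, 5.00, 5.00, 4.70, 6.20)
calories <- c(163, 215, 222, 104, 162, 158, 155, 158, 195)
# Scatter plot of calories versus alcohol content
plot(alcohol, calories, main = "Calories vs. Alcohol Content in Beer",
     xlab = "Alcohol Content (%)", ylab = "Calories in 12 oz")
These same two vectors can be reused with the lm() and plot() commands that appear in Example $2$.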
To find the regression equation (also known as the best fitting line or least squares line): Given a collection of paired sample data, the regression equation is $\hat{y}=a+b x$ where the slope is $b=\dfrac{S S_{x y}}{S S_{x}}$ and the y-intercept is $a=\overline{y}-b \overline{x}$ Definition $1$ The residuals are the difference between the actual values and the estimated values. residual $=y-\hat{y}$ Definition $2$ SS stands for sum of squares. So you are summing up squares. With the subscript xy, you aren’t really summing squares, but you can think of it that way in a weird sense. $\begin{array}{l}{S S_{x y}=\sum(x-\overline{x})(y-\overline{y})} \ {S S_{x}=\sum(x-\overline{x})^{2}} \ {S S_{y}=\sum(y-\overline{y})^{2}}\end{array}$ Note The easiest way to find the regression equation is to use technology. The independent variable, also called the explanatory variable or predictor variable, is the x-value in the equation. The independent variable is the one that you use to predict what the other variable is. The dependent variable depends on what independent value you pick. It also responds to the explanatory variable and is sometimes called the response variable. In the alcohol content and calorie example, it makes slightly more sense to say that you would use the alcohol content of a beer to predict the number of calories in the beer. Definition $3$ The population equation looks like: $\begin{array}{l}{y=\beta_{o}+\beta_{1} x} \ {\beta_{o}=y \text { -intercept }} \ {\beta_{1}=\text { slope }}\end{array}$ $\hat{y}$ is used to predict y. Assumptions of the regression line: 1. The set $(x, y)$ of ordered pairs is a random sample from the population of all such possible $(x, y)$ pairs. 2. For each fixed value of x, the y-values have a normal distribution. All of the y distributions have the same variance, and for a given x-value, the distribution of y-values has a mean that lies on the least squares line. You also assume that for a fixed y, each x has its own normal distribution. This is difficult to figure out, so you can use the following to determine if you have a normal distribution. 1. Look to see if the scatter plot has a linear pattern. 2. Examine the residuals to see if there is randomness in the residuals. If there is a pattern to the residuals, then there is an issue in the data. Example $2$ find the equation of the regression line 1. Is there a positive relationship between the alcohol content and the number of calories in 12-ounce beer? To determine if there is a positive linear relationship, a random sample was taken of beer’s alcohol content and calories for several different beers ("Calories in beer," 2011), and the data are in Table $2$. 2. Use the regression equation to find the number of calories when the alcohol content is 6.50%. 3. Use the regression equation to find the number of calories when the alcohol content is 2.00%. 4. Find the residuals and then plot the residuals versus the x-values. Brand Brewery Alcohol Content Calories in 12 oz Big Sky Scape Goat Pale Ale Big Sky Brewing 4.70% 163 Sierra Nevada Harvest Ale Sierra Nevada 6.70% 215 Steel Reserve MillerCoors 8.10% 222 Coors Light MillerCoors 4.15% 104 Genesee Cream Ale High Falls Brewing 5.10% 162 Sierra Nevada Summerfest Beer Sierra Nevada 5.00% 158 Michelob Beer Anheuser Busch 5.00% 155 Flying Dog Doggie Style Flying Dog Brewery 4.70% 158 Big Sky I.P.A. Big Sky Brewing 6.20% 195 Table $2$: Alcohol and Caloric Content in Beer without Outlier Solution a.
State random variables x = alcohol content in the beer y = calories in 12 ounce beer Assumptions check: 1. A random sample was taken as stated in the problem. 2. The distribution for each calorie value is normally distributed for every value of alcohol content in the beer. 1. From Example $1$, the scatter plot looks fairly linear. 2. The residual versus the x-values plot looks fairly random. (See Figure $5$.) It appears that the distribution for calories is a normal distribution. To find the regression equation on the TI-83/84 calculator, put the x’s in L1 and the y’s in L2. Then go to STAT, over to TESTS, and choose LinRegTTest. The setup is in Figure $3$. The reason that >0 was chosen is because the question was asked if there was a positive relationship. If you are asked if there is a negative relationship, then pick <0. If you are just asked if there is a relationship, then pick $\neq 0$. Right now the choice will not make a different, but it will be important later. From this you can see that $\hat{y}=25.0+26.3 x$ To find the regression equation using R, the command is lm(dependent variable ~ independent variable), where ~ is the tilde symbol located on the upper left of most keyboards. So for this example, the command would be lm(calories ~ alcohol), and the output would be Call: lm(formula = calories ~ alcohol) Coefficients: (Intercept) alcohol 25.03 26.32 From this you can see that the y-intercept is 25.03 and the slope is 26.32. So the regression equation is $\hat{y}=25.0+26.3 x$. Remember, this is an estimate for the true regression. A different random sample would produce a different estimate. b. $\begin{array}{l}{x_{o}=6.50} \ {\hat{y}=25.0+26.3(6.50)=196 \text { calories }}\end{array}$ If you are drinking a beer that is 6.50% alcohol content, then it is probably close to 196 calories. Notice, the mean number of calories is 170 calories. This value of 196 seems like a better estimate than the mean when looking at the original data. The regression equation is a better estimate than just the mean. c. $\begin{array}{l}{x_{o}=2.00} \ {\hat{y}=25.0+26.3(2.00)=78 \text { calories }}\end{array}$ If you are drinking a beer that is 2.00% alcohol content, then it has probably close to 78 calories. This doesn’t seem like a very good estimate. This estimate is what is called extrapolation. It is not a good idea to predict values that are far outside the range of the original data. This is because you can never be sure that the regression equation is valid for data outside the original data. d. To find the residuals, find $\hat{y}$ for each x-value. Then subtract each $\hat{y}$ from the given y value to find the residuals. Realize that these are sample residuals since they are calculated from sample values. It is best to do this in a spreadsheet. x y $\hat{y}=25.0+26.3 x$ $y-\hat{y}$ 4.70 163 148.61 14.390 6.70 215 201.21 13.790 8.10 222 238.03 -16.030 4.15 104 134.145 -30.145 5.10 162 159.13 2.870 5.00 158 156.5 1.500 5.00 155 156.5 -1.500 4.70 158 148.61 9.390 6.20 195 188.06 6.940 Table $3$: Residuals for Beer Calories Notice the residuals add up to close to 0. They don’t add up to exactly 0 in this example because of rounding error. Normally the residuals add up to 0. You can use R to get the residuals. The command is lm.out = lm(dependent variable ~ independent variable) – this defines the linear model with a name so you can use it later. Then residual(lm.out) – produces the residuals. 
For this example, the command would be lm(calories~alcohol) Call: lm(formula = calories ~ alcohol) Coefficients: (Intercept) alcohol 25.03 26.32 > residuals(lm.out) $\begin{array}{ccccc}{1} & {2} & {3} & {4} & {5} & {6} & {7} & {8} & {9} \ {14.271307} & {13.634092} & {-16.211959} & {-30.253458} & {2.743864} & {1.375725} & {-1.624275} & {9.271307} & {6.793396}\end{array}$ So the first residual is 14.271307 and it belongs to the first x value. The residual 13.634092 belongs to the second x value, and so forth. You can then graph the residuals versus the independent variable using the plot command. For this example, the command would be plot(alcohol, residuals(lm.out), main="Residuals for Beer Calories versus Alcohol Content", xlab="Alcohol Content", ylab="Residuals"). Sometimes it is useful to see the x-axis on the graph, so after creating the plot, type the command abline(0,0). The graph of the residuals versus the x-values is in Figure $5$. They appear to be somewhat random. Notice, that the 6.50% value falls into the range of the original x-values. The processes of predicting values using an x within the range of original x-values is called interpolating. The 2.00% value is outside the range of original x-values. Using an x-value that is outside the range of the original x-values is called extrapolating. When predicting values using interpolation, you can usually feel pretty confident that that value will be close to the true value. When you extrapolate, you are not really sure that the predicted value is close to the true value. This is because when you interpolate, you know the equation that predicts, but when you extrapolate, you are not really sure that your relationship is still valid. The relationship could in fact change for different x-values. An example of this is when you use regression to come up with an equation to predict the growth of a city, like Flagstaff, AZ. Based on analysis it was determined that the population of Flagstaff would be well over 50,000 by 1995. However, when a census was undertaken in 1995, the population was less than 50,000. This is because they extrapolated and the growth factor they were using had obviously changed from the early 1990’s. Growth factors can change for many reasons, such as employment growth, employment stagnation, disease, articles saying great place to live, etc. Realize that when you extrapolate, your predicted value may not be anywhere close to the actual value that you observe. What does the slope mean in the context of this problem? $m=\dfrac{\Delta y}{\Delta x}=\dfrac{\Delta \text { calories }}{\Delta \text { alcohol content }}=\dfrac{26.3 \text { calories }}{1 \%}$ The calories increase 26.3 calories for every 1% increase in alcohol content. The y-intercept in many cases is meaningless. In this case, it means that if a drink has 0 alcohol content, then it would have 25.0 calories. This may be reasonable, but remember this value is an extrapolation so it may be wrong. Consider the residuals again. According to the data, a beer with 6.7% alcohol has 215 calories. The predicted value is 201 calories. Residual = actual - predicted =215 - 201 =14 This deviation means that the actual value was 14 above the predicted value. That isn’t that far off. Some of the actual values differ by a large amount from the predicted value. This is due to variability in the dependent variable. The larger the residuals the less the model explains the variability in the dependent variable. 
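As an alternative to substituting values into $\hat{y}=25.0+26.3 x$ by hand, the fitted model object in R can make the predictions. This is a minimal sketch using the lm.out object and the alcohol and calories vectors defined earlier; the values 6.50 and 2.00 are the alcohol contents from parts b and c of Example $2$.
# Fit the line and store the model object (same as the lm.out defined above)
lm.out <- lm(calories ~ alcohol)
# Predicted calories at 6.50% and 2.00% alcohol content
predict(lm.out, newdata = data.frame(alcohol = c(6.50, 2.00)))
Keep in mind that predict() will happily return a value for any alcohol content you supply, so the warning about extrapolation above still applies.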
There needs to be a way to calculate how well the model explains the variability in the dependent variable. This will be explored in the next section. The following example demonstrates the process to go through when using the formulas for finding the regression equation, though it is better to use technology. This is because if the linear model doesn’t fit the data well, then you could try some of the other models that are available through technology. Example $3$ calculating the regression equation with the formula Is there a relationship between the alcohol content and the number of calories in 12-ounce beer? To determine if there is one a random sample was taken of beer’s alcohol content and calories ("Calories in beer," 2011), and the data are in Example $2$. Find the regression equation from the formula. Solution State random variables x = alcohol content in the beer y = calories in 12 ounce beer Alcohol Content Calories $x-\overline{x}$ $y-\overline{y}$ $(x-\overline{x})^{2}$ $(y-\overline{y})^{2}$ $(x-\overline{x})(y-\overline{y})$ 4.70 163 -0.8167 -7.2222 0.6669 52.1065 5.8981 6.70 215 1.1833 44.7778 1.4003 2005.0494 52.9870 8.10 222 2.5833 51.7778 6.6736 2680.9383 133.7595 4.15 104 -1.3667 -66.2222 1.8678 4385.3827 90.5037 5.10 162 -0.4167 -8.2222 0.1736 67.6049 3.4259 5.00 158 -0.5167 -12.2222 0.2669 149.3827 6.3148 5.00 155 -0.5167 -15.2222 0.2669 231.7160 7.8648 4.70 158 -0.8167 -12.2222 0.6669 149.3827 9.9815 6.20 195 0.6833 24.7778 0.4669 613.9383 16.9315 5.516667 = $\overline{x}$ 170.2222 = $\overline{y}$ 12.45 = $S S_{x}$ 10335.5556 = $S S_{y}$ 327.6667 = $S S_{xy}$ Table $4$: Calculations for Regression Equation slope: $b=\dfrac{S S_{x y}}{S S_{x}}=\dfrac{327.6667}{12.45} \approx 26.3$ y-intercept: $a=\overline{y}-b \overline{x}=170.222-26.3(5.516667) \approx 25.0$ Regression equation: $\hat{y}=25.0+26.3 x$ Homework Exercise $1$ For each problem, state the random variables. Also, look to see if there are any outliers that need to be removed. Do the regression analysis with and without the suspected outlier points to determine if their removal affects the regression. The data sets in this section are used in the homework for sections 10.2 and 10.3 also. 1. When an anthropologist finds skeletal remains, they need to figure out the height of the person. The height of a person (in cm) and the length of their metacarpal bone 1 (in cm) were collected and are in Example $5$ ("Prediction of height," 2013). Create a scatter plot and find a regression equation between the height of a person and the length of their metacarpal. Then use the regression equation to find the height of a person for a metacarpal length of 44 cm and for a metacarpal length of 55 cm. Which height that you calculated do you think is closer to the true height of the person? Why? Length of Metacarpal (cm) Height of Person (cm) 45 171 51 178 39 157 41 163 48 172 49 183 46 173 43 175 47 173 Table $5$: Data of Metacarpal versus Height 2. Example $6$ contains the value of the house and the amount of rental income in a year that the house brings in ("Capital and rental," 2013). Create a scatter plot and find a regression equation between house value and rental income. Then use the regression equation to find the rental income a house worth $230,000 and for a house worth$400,000. Which rental income that you calculated do you think is closer to the true rental income? Why? 
Value Rental Value Rental Value Rental Value Rental 81000 6656 77000 4576 75000 7280 67500 6864 95000 7904 94000 8736 90000 6240 85000 7072 121000 12064 115000 7904 110000 7072 104000 7904 135000 8320 130000 9776 126000 6240 125000 7904 145000 8320 140000 9568 140000 9152 135000 7488 165000 13312 165000 8528 155000 7488 148000 8320 178000 11856 174000 10400 170000 9568 170000 12688 200000 12272 200000 10608 194000 11232 190000 8320 214000 8528 208000 10400 200000 10400 200000 8320 240000 10192 240000 12064 240000 11648 225000 12480 289000 11648 270000 12896 262000 10192 244500 11232 325000 12480 310000 12480 303000 12272 300000 12480 Table $6$: Data of House Value versus Rental 3. The World Bank collects information on the life expectancy of a person in each country ("Life expectancy at," 2013) and the fertility rate per woman in the country ("Fertility rate," 2013). The data for 24 randomly selected countries for the year 2011 are in Example $7$. Create a scatter plot of the data and find a linear regression equation between fertility rate and life expectancy. Then use the regression equation to find the life expectancy for a country that has a fertility rate of 2.7 and for a country with fertility rate of 8.1. Which life expectancy that you calculated do you think is closer to the true life expectancy? Why? Fertility Rate Life Expectancy 1.7 77.2 5.8 55.4 2.2 69.9 2.1 76.4 1.8 75.0 2.0 78.2 2.6 73.0 2.8 70.8 1.4 82.6 2.6 68.9 1.5 81.0 6.9 54.2 2.4 67.1 1.5 73.3 2.5 74.2 1.4 80.7 2.9 72.1 2.1 78.3 4.7 62.9 6.8 54.4 5.2 55.9 4.2 66.0 1.5 76.0 3.9 72.3 Table $7$: Data of Fertility Rates versus Life Expectancy 4. The World Bank collected data on the percentage of GDP that a country spends on health expenditures ("Health expenditure," 2013) and also the percentage of women receiving prenatal care ("Pregnant woman receiving," 2013). The data for the countries where this information are available for the year 2011 is in Example $8$. Create a scatter plot of the data and find a regression equation between percentage spent on health expenditure and the percentage of women receiving prenatal care. Then use the regression equation to find the percent of women receiving prenatal care for a country that spends 5.0% of GDP on health expenditure and for a country that spends 12.0% of GDP. Which prenatal care percentage that you calculated do you think is closer to the true percentage? Why? Health Expenditure (% of GDP) Prenatal Care (%) 9.6 47.9 3.7 54.6 5.2 93.7 5.2 84.7 10.0 100.0 4.7 42.5 4.8 96.4 6.0 77.1 5.4 58.3 4.8 95.4 4.1 78.0 6.0 93.3 9.5 93.3 6.8 93.7 6.1 89.8 Table $8$: Data of Health Expenditure versus Prenatal Care 5. The height and weight of baseball players are in Example $9$ ("MLB heightsweights," 2013). Create a scatter plot and find a regression equation between height and weight of baseball players. Then use the regression equation to find the weight of a baseball player that is 75 inches tall and for a baseball player that is 68 inches tall. Which weight that you calculated do you think is closer to the true weight? Why? Height (inches) Weight (pounds) 76 212 76 224 72 180 74 210 75 215 71 200 77 235 78 235 77 194 76 185 72 180 72 170 75 220 74 228 73 210 72 180 70 185 73 190 71 186 74 200 74 200 75 210 79 240 72 208 75 180 Table $9$: Heights and Weights of Baseball Players 6. Different species have different body weights and brain weights are in Example $10$. ("Brain2bodyweight," 2013). Create a scatter plot and find a regression equation between body weights and brain weights. 
Then use the regression equation to find the brain weight for a species that has a body weight of 62 kg and for a species that has a body weight of 180,000 kg. Which brain weight that you calculated do you think is closer to the true brain weight? Why? Species Body Weight (kg) Brain Weight (kg) Newborn Human 3.20 0.37 Adult Human 73.00 1.35 Pithecanthropus Man 70.00 0.93 Squirrel 0.80 0.01 Hamster 0.15 0.00 Chimpanzee 50.00 0.42 Rabbit 1.40 0.01 Dog (Beagle) 10.00 0.07 Cat 4.50 0.03 Rat 0.40 0.00 Bottle-Nosed Dolphin 400.00 1.50 Beaver 24.00 0.04 Gorilla 320.00 0.50 Tiger 170.00 0.26 Owl 1.50 0.00 Camel 550.00 0.76 Elephant 4600.00 6.00 Lion 187.00 0.24 Sheep 120.00 0.14 Walrus 800.00 0.93 Horse 450.00 0.50 Cow 700.00 0.44 Giraffe 950.00 0.53 Green Lizard 0.20 0.00 Sperm Whale 35000.00 7.80 Turtle 3.00 0.00 Alligator 270.00 0.01 Table $10$: Body Weights and Brain Weights of Species 7. A random sample of beef hotdogs was taken and the amount of sodium (in mg) and calories were measured. ("Data hotdogs," 2013) The data are in Example $11$. Create a scatter plot and find a regression equation between amount of calories and amount of sodium. Then use the regression equation to find the amount of sodium a beef hotdog has if it is 170 calories and if it is 120 calories. Which sodium level that you calculated do you think is closer to the true sodium level? Why? Calories Sodium 186 495 181 477 176 425 149 322 184 482 190 587 158 370 139 322 175 479 148 375 152 330 111 300 141 386 153 401 190 645 157 440 131 317 149 319 135 298 132 253 Table $11$: Calories and Sodium Levels in Beef Hotdogs 8. Per capita income in 1960 dollars for European countries and the percent of the labor force that works in agriculture in 1960 are in Example $12$ ("OECD economic development," 2013). Create a scatter plot and find a regression equation between percent of labor force in agriculture and per capita income. Then use the regression equation to find the per capita income in a country that has 21 percent of labor in agriculture and in a country that has 2 percent of labor in agriculture. Which per capita income that you calculated do you think is closer to the true income? Why? Country Percent in Agriculture Per Capita Income Sweden 14 1644 Switzerland 11 1361 Luxembourg 15 1242 U. Kingdom 4 1105 Denmark 18 1049 W. Germany 15 1035 France 20 1013 Belgium 6 1005 Norway 20 977 Iceland 25 839 Netherlands 11 810 Austria 23 681 Ireland 36 529 Italy 27 504 Greece 56 324 Spain 42 290 Portugal 44 238 Turkey 79 177 Table $12$: Percent of Labor in Agriculture and Per Capita Income for European Countries 9. Cigarette smoking and cancer have been linked. The number of deaths per one hundred thousand from bladder cancer and the number of cigarettes sold per capita in 1960 are in Example $13$ ("Smoking and cancer," 2013). Create a scatter plot and find a regression equation between cigarette smoking and deaths of bladder cancer. Then use the regression equation to find the number of deaths from bladder cancer when the cigarette sales were 20 per capita and when the cigarette sales were 6 per capita. Which number of deaths that you calculated do you think is closer to the true number? Why? 
Cigarette Sales (per Capita) Bladder Cancer Deaths (per 100 thousand) Cigarette Sales (per Capita) Bladder Cancer Deaths (per 100 Thousand) 18.20 2.90 42.40 6.54 25.82 3.52 28.64 5.98 18.24 2.99 21.16 2.90 28.60 4.46 29.14 5.30 31.10 5.11 19.96 2.89 33.60 4.78 26.38 4.47 40.46 5.60 23.44 2.93 28.27 4.46 23.78 4.89 20.10 3.08 29.18 4.99 27.91 4.75 18.06 3.25 26.18 4.09 20.94 3.64 22.12 4.23 20.08 2.94 21.84 2.91 22.57 3.21 23.44 2.86 14.00 3.31 21.58 4.65 25.89 4.63 28.92 4.79 21.17 4.04 25.91 5.21 21.25 5.14 26.92 4.69 22.86 4.78 24.96 5.27 28.04 3.20 22.06 3.72 30.34 3.46 16.08 3.06 23.75 3.95 27.56 4.04 23.32 3.72 Table $13$: Number of Cigarettes and Number of Bladder Cancer Deaths in 1960 10. The weight of a car can influence the mileage that the car can obtain. A random sample of cars’ weights and mileage was collected and are in Example $14$ ("Passenger car mileage," 2013). Create a scatter plot and find a regression equation between weight of cars and mileage. Then use the regression equation to find the mileage on a car that weighs 3800 pounds and on a car that weighs 2000 pounds. Which mileage that you calculated do you think is closer to the true mileage? Why? Weight (100 pounds) Mileage (mpg) 22.5 53.3 22.5 41.1 22.5 38.9 25.0 40.9 27.5 46.9 27.5 36.3 30.0 32.2 30.0 32.2 30.0 31.5 30.0 31.4 30.0 31.4 35.0 32.6 35.0 31.3 35.0 31.3 35.0 28.0 35.0 28.0 35.0 28.0 40.0 23.6 40.0 23.6 40.0 23.4 40.0 23.1 45.0 19.5 45.0 17.2 45.0 17.0 55.0 13.2 Table $14$: Weights and Mileages of Cars Answer For regression, only the equation is given. See solutions for the entire answer. 1. $\hat{y}=1.719 x+93.709$ 3. $\hat{y}=-4.706 x+84.873$ 5. $\hat{y}=5.859 x-230.942$ 7. $\hat{y}=4.0133 x-228.3313$ 9. $\hat{y}=0.12182 x+1.08608$
A correlation exists between two variables when the values of one variable are somehow associated with the values of the other variable. When you see a pattern in the data you say there is a correlation in the data. Though this book is only dealing with linear patterns, patterns can be exponential, logarithmic, or periodic. To see this pattern, you can draw a scatter plot of the data. Remember to read graphs from left to right, the same as you read words. If the graph goes up the correlation is positive and if the graph goes down the correlation is negative. The words “ weak”, “moderate”, and “strong” are used to describe the strength of the relationship between the two variables. The linear correlation coefficient is a number that describes the strength of the linear relationship between the two variables. It is also called the Pearson correlation coefficient after Karl Pearson who developed it. The symbol for the sample linear correlation coefficient is r. The symbol for the population correlation coefficient is $\rho$ (Greek letter rho). The formula for r is $r=\dfrac{S S_{x y}}{\sqrt{S S_{x} S S_{y}}}$ Where $\begin{array}{l}{S S_{x}=\sum(x-\overline{x})^{2}} \ {S S_{y}=\sum(y-\overline{y})^{2}} \ {S S_{x y}=\sum(x-\overline{x})(y-\overline{y})}\end{array}$ Assumptions of linear correlation are the same as the assumptions for the regression line: 1. The set (x, y) of ordered pairs is a random sample from the population of all such possible (x, y) pairs. 2. For each fixed value of x, the y -values have a normal distribution. All of the y -distributions have the same variance, and for a given x-value, the distribution of y-values has a mean that lies on the least squares line. You also assume that for a fixed y, each x has its own normal distribution. This is difficult to figure out, so you can use the following to determine if you have a normal distribution. 1. Look to see if the scatter plot has a linear pattern. 2. Examine the residuals to see if there is randomness in the residuals. If there is a pattern to the residuals, then there is an issue in the data. Note Interpretation of the correlation coefficient r is always between -1 and 1. r = -1 means there is a perfect negative linear correlation and r = 1 means there is a perfect positive correlation. The closer r is to 1 or -1, the stronger the correlation. The closer r is to 0, the weaker the correlation. CAREFUL: r = 0 does not mean there is no correlation. It just means there is no linear correlation. There might be a very strong curved pattern. r How strong is the positive relationship between the alcohol content and the number of calories in 12-ounce beer? To determine if there is a positive linear correlation, a random sample was taken of beer’s alcohol content and calories for several different beers ("Calories in beer," 2011), and the data are in Table $1$. Find the correlation coefficient and interpret that value. Brand Brewery Alcohol Content Calories in 12 oz Big Sky Scape Goat Pale Ale Big Sky Brewing 4.70% 163 Sierra Nevada Harvest Ale Sierra Nevada 6.70% 215 Steel Reserve MillerCoors 8.10% 222 Coors Light MillerCoors 4.15% 104 Genesee Cream Ale High Falls Brewing 5.10% 162 Sierra Nevada Summerfest Beer Sierra Nevada 5.00% 158 Michelob Beer Anheuser Busch 5.00% 155 Flying Dog Doggie Style Flying Dog Brewery 4.70% 158 Big Sky I.P.A. 
Big Sky Brewing 6.20% 195 Table $1$: Alcohol and Calorie Content in Beer without Outlier Solution State random variables x = alcohol content in the beer y = calories in 12 ounce beer Assumptions check: From Example $2$, the assumptions have been met. To compute the correlation coefficient using the TI-83/84 calculator, use the LinRegTTest in the STAT menu. The setup is in Figure $2$. The reason that >0 was chosen is because the question was asked if there was a positive correlation. If you are asked if there is a negative correlation, then pick <0. If you are just asked if there is a correlation, then pick $\neq 0$. Right now the choice will not make a different, but it will be important later. To compute the correlation coefficient in R, the command is cor(independent variable, dependent variable). So for this example the command would be cor(alcohol, calories). The output is [1] 0.9134414 The correlation coefficient is r = 0.913. This is close to 1, so it looks like there is a strong, positive correlation. Causation One common mistake people make is to assume that because there is a correlation, then one variable causes the other. This is usually not the case. That would be like saying the amount of alcohol in the beer causes it to have a certain number of calories. However, fermentation of sugars is what causes the alcohol content. The more sugars you have, the more alcohol can be made, and the more sugar, the higher the calories. It is actually the amount of sugar that causes both. Do not confuse the idea of correlation with the concept of causation. Just because two variables are correlated does not mean one causes the other to happen. Example $2$ correlation versus Causation 1. A study showed a strong linear correlation between per capita beer consumption and teacher’s salaries. Does giving a teacher a raise cause people to buy more beer? Does buying more beer cause teachers to get a raise? 2. A study shows that there is a correlation between people who have had a root canal and those that have cancer. Does that mean having a root canal causes cancer? Solution a. There is probably some other factor causing both of them to increase at the same time. Think about this: In a town where people have little extra money, they won’t have money for beer and they won’t give teachers raises. In another town where people have more extra money to spend it will be easier for them to buy more beer and they would be more willing to give teachers raises. b. Just because there is positive correlation doesn’t mean that one caused the other. It turns out that there is a positive correlation between eating carrots and cancer, but that doesn’t mean that eating carrots causes cancer. In other words, there are lots of relationships you can find between two variables, but that doesn’t mean that one caused the other. Remember a correlation only means a pattern exists. It does not mean that one variable causes the other variable to change. Explained Variation As stated before, there is some variability in the dependent variable values, such as calories. Some of the variation in calories is due to alcohol content and some is due to other factors. How much of the variation in the calories is due to alcohol content? When considering this question, you want to look at how much of the variation in calories is explained by alcohol content and how much is explained by other variables. Realize that some of the changes in calories have to do with other ingredients. 
You can have two beers at the same alcohol content, but beer one has higher calories because of the other ingredients. Some variability is explained by the model and some variability is not explained. Together, both of these give the total variability. This is $\begin{array}{ccccc} {\text{(total variation)}}&{=}&{\text{(explained variation)}}&{+}&{\text{(unexplained variation)}}\ {\sum(y-\overline{y})^{2}}&{=}& {\sum(\hat{y}-\overline{y})^{2}}&{+}&{\sum(y-\hat{y})^{2}} \end{array}$ Note The proportion of the variation that is explained by the model is $r^{2}=\dfrac{\text { explained variation }}{\text { total variation }}$ This is known as the coefficient of determination. To find the coefficient of determination, you square the correlation coefficient. In addition, $r^{2}$ is part of the calculator results. Example $3$ finding the coefficient of determination Find the coefficient of variation in calories that is explained by the linear relationship between alcohol content and calories and interpret the value. Solution From the calculator results, $r^{2} = 0.8344$ Using R, you can do (cor(independent variable, dependent variable))^2. So that would be (cor(alcohol, calories))^2, and the output would be [1] 0.8343751 Or you can just use a calculator and square the correlation value. Thus, 83.44% of the variation in calories is explained to the linear relationship between alcohol content and calories. The other 16.56% of the variation is due to other factors. A really good coefficient of determination has a very small, unexplained part. and $r^{2}$ How strong is the relationship between the alcohol content and the number of calories in 12-ounce beer? To determine if there is a positive linear correlation, a random sample was taken of beer’s alcohol content and calories for several different beers ("Calories in beer," 2011), and the data are in Example $1$. Find the correlation coefficient and the coefficient of determination using the formula. Solution From Example $2$, $S S_{x}=12.45, S S_{y}=10335.5556, S S_{x y}=327.6667$ Correlation coefficient: $r=\dfrac{S S_{x y}}{\sqrt{S S_{x} S S_{y}}}=\dfrac{327.6667}{\sqrt{12.45 * 10335.5556}} \approx 0.913$ Coefficient of determination: $r^{2}=(r)^{2}=(0.913)^{2} \approx 0.834$ Now that you have a correlation coefficient, how can you tell if it is significant or not? This will be answered in the next section. Homework Exercise $1$ For each problem, state the random variables. Also, look to see if there are any outliers that need to be removed. Do the correlation analysis with and without the suspected outlier points to determine if their removal affects the correlation. The data sets in this section are in section 10.1 and will be used in section 10.3. 1. When an anthropologist finds skeletal remains, they need to figure out the height of the person. The height of a person (in cm) and the length of their metacarpal bone 1 (in cm) were collected and are in Example $5$ ("Prediction of height," 2013). Find the correlation coefficient and coefficient of determination and then interpret both. 2. Example $6$ contains the value of the house and the amount of rental income in a year that the house brings in ("Capital and rental," 2013). Find the correlation coefficient and coefficient of determination and then interpret both. 3. The World Bank collects information on the life expectancy of a person in each country ("Life expectancy at," 2013) and the fertility rate per woman in the country ("Fertility rate," 2013). 
The data for 24 randomly selected countries for the year 2011 are in Example $7$. Find the correlation coefficient and coefficient of determination and then interpret both. 4. The World Bank collected data on the percentage of GDP that a country spends on health expenditures ("Health expenditure," 2013) and also the percentage of women receiving prenatal care ("Pregnant woman receiving," 2013). The data for the countries where this information is available for the year 2011 are in Example $8$. Find the correlation coefficient and coefficient of determination and then interpret both. 5. The height and weight of baseball players are in Example $9$ ("MLB heightsweights," 2013). Find the correlation coefficient and coefficient of determination and then interpret both. 6. Different species have different body weights and brain weights are in Example $10$. ("Brain2bodyweight," 2013). Find the correlation coefficient and coefficient of determination and then interpret both. 7. A random sample of beef hotdogs was taken and the amount of sodium (in mg) and calories were measured. ("Data hotdogs," 2013) The data are in Example $11$. Find the correlation coefficient and coefficient of determination and then interpret both. 8. Per capita income in 1960 dollars for European countries and the percent of the labor force that works in agriculture in 1960 are in Example $12$ ("OECD economic development," 2013). Find the correlation coefficient and coefficient of determination and then interpret both. 9. Cigarette smoking and cancer have been linked. The number of deaths per one hundred thousand from bladder cancer and the number of cigarettes sold per capita in 1960 are in Example $13$ ("Smoking and cancer," 2013). Find the correlation coefficient and coefficient of determination and then interpret both. 10. The weight of a car can influence the mileage that the car can obtain. A random sample of cars weights and mileage was collected and are in Example $14$ ("Passenger car mileage," 2013). Find the correlation coefficient and coefficient of determination and then interpret both. 11. There is a negative correlation between police expenditure and crime rate. Does this mean that spending more money on police causes the crime rate to decrease? Explain your answer. 12. There is a positive correlation between tobacco sales and alcohol sales. Does that mean that using tobacco causes a person to also drink alcohol? Explain your answer. 13. There is a positive correlation between the average temperature in a location and the morality rate from breast cancer. Does that mean that higher temperatures cause more women to die of breast cancer? Explain your answer. 14. There is a positive correlation between the length of time a tableware company polishes a dish and the price of the dish. Does that mean that the time a plate is polished determines the price of the dish? Explain your answer. Answer Only the correlation coefficient and coefficient of determination are given. See solutions for the entire answer. 1. r = 0.9578, $r^{2}$ = 0.7357 3. r = -0.9313, $r^{2}$ = 0.8674 5. r = 0.6605, $r^{2}$ = 0.4362 7. r = 0.8871, $r^{2}$ = 0.7869 9. r = 0.7036, $r^{2}$ = 0.4951 11. No, see solutions. 13. No, see solutions.
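If you would rather let R do the arithmetic in the formula for r instead of working it out by hand, the following sketch computes the sums of squares directly and then checks the result against the built-in cor() command. It assumes the beer data from Table $1$ have been typed into vectors named alcohol and calories, the same names used in the examples of this section.

alcohol = c(4.70, 6.70, 8.10, 4.15, 5.10, 5.00, 5.00, 4.70, 6.20)   # alcohol content (%)
calories = c(163, 215, 222, 104, 162, 158, 155, 158, 195)   # calories in 12 oz
SSx = sum((alcohol - mean(alcohol))^2)   # about 12.45
SSy = sum((calories - mean(calories))^2)   # about 10335.56
SSxy = sum((alcohol - mean(alcohol))*(calories - mean(calories)))   # about 327.67
r = SSxy/sqrt(SSx*SSy)   # correlation coefficient, about 0.913
r^2   # coefficient of determination, about 0.834
cor(alcohol, calories)   # the built-in command gives the same value of r
The sums of squares printed here match the values used in the worked example above, so the formula and the cor() command agree.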
How do you really say you have a correlation? Can you test to see if there really is a correlation? Of course, the answer is yes. The hypothesis test for correlation is as follows: Hypothesis Test for Correlation: 1. State the random variables in words. x = independent variable y = dependent variable 2. State the null and alternative hypotheses and the level of significance $\begin{array}{l}{H_{o} : \rho=0 \text { (There is no correlation) }} \ {H_{A} : \rho \neq 0 \text { (There is a correlation) }} \ {\text { or }} \ {H_{A} : \rho<0 \text { (There is a negative correlation) }} \ {\text { or }} \ {H_{A} : \rho>0 \text { (There is a postive correlation) }}\end{array}$ Also, state your $\alpha$ level here. 3. State and check the assumptions for the hypothesis test The assumptions for the hypothesis test are the same assumptions for regression and correlation. 4. Find the test statistic and p-value $t=\dfrac{r}{\sqrt{\dfrac{1-r^{2}}{n-2}}}$ with degrees of freedom = df = n - 2 p-value: Using the TI-83/84: tcdf(lower limit, upper limit, df) Note If $H_{A} : \rho<0$, then lower limit is -1E99 and upper limit is your test statistic. If $H_{A} : \rho>0$, then lower limit is your test statistic and the upper limit is 1E99. If $H_{A} : \rho \neq 0$, then find the p-value for $H_{A} : \rho<0$, and multiply by 2. Using R: pt(t, df) Note If $H_{A} : \rho<0$, then use pt(t, df), If $H_{A} : \rho>0$, then use $1-\mathrm{pt}(t, d f)$. If $H_{A} : \rho \neq 0$, then find the p-value for $H_{A} : \rho<0$, and multiply by 2. 5. Conclusion This is where you write reject $H_{o}$ or fail to reject $H_{o}$. The rule is: if the p-value < $\alpha$, then reject $H_{o}$. If the p-value $\geq \alpha$, then fail to reject $H_{o}$. 6. Interpretation This is where you interpret in real world terms the conclusion to the test. The conclusion for a hypothesis test is that you either have enough evidence to show $H_{A}$ is true, or you do not have enough evidence to show $H_{A}$ is true. Note The TI-83/84 calculator results give you the test statistic and the p-value. In R, the command for getting the test statistic and p-value is cor.test(independent variable, dependent variable, alternative = "less" or "greater"). Use less for $H_{A} : \rho<0$, use greater for $H_{A} : \rho>0$, and leave off this command for $H_{A} : \rho \neq 0$. Example $1$ Testing the claim of a linear correlation Is there a positive correlation between beer’s alcohol content and calories? To determine if there is a positive linear correlation, a random sample was taken of beer’s alcohol content and calories for several different beers ("Calories in beer," 2011), and the data is in Example $1$. Test at the 5% level. Solution 1. State the random variables in words. x = alcohol content in the beer y = calories in 12 ounce beer 2. State the null and alternative hypotheses and the level of significance. Since you are asked if there is a positive correlation, $\rho> 0$. $\begin{array}{l}{H_{o} : \rho=0} \ {H_{A} : \rho>0} \ {\alpha=0.05}\end{array}$ 3. State and check the assumptions for the hypothesis test. The assumptions for the hypothesis test were already checked in Example $2$. 4. Find the test statistic and p-value. The results from the TI-83/84 calculator are in Figure $1$. 
Figure $1$: Results for Linear Regression Test on TI-83/84 Test statistic: t $\approx$ 5.938 and p-value: $p \approx 2.884 \times 10^{-4}$ The results from R are cor.test(alcohol, calories, alternative = "greater") Pearson's product-moment correlation data: alcohol and calories t = 5.9384, df = 7, p-value = 0.0002884 alternative hypothesis: true correlation is greater than 0 95 percent confidence interval: 0.7046161 1.0000000 sample estimates: cor 0.9134414 Test statistic: t $\approx$ 5.9384 and p-value: $p \approx 0.0002884$ 5. Conclusion Reject $H_{o}$ since the p-value is less than 0.05. 6. Interpretation There is enough evidence to show that there is a positive correlation between alcohol content and number of calories in a 12-ounce bottle of beer. Prediction Interval Using the regression equation you can predict the number of calories from the alcohol content. However, you only find one value. The problem is that beers vary a bit in calories even if they have the same alcohol content. It would be nice to have a range instead of a single value. The range is called a prediction interval. To find this, you need to figure out how much error is in the estimate from the regression equation. This is known as the standard error of the estimate. Definition Standard Error of the Estimate This is the sum of squares of the residuals $s_{e}=\sqrt{\dfrac{\sum(y-\hat{y})^{2}}{n-2}}$ This formula is hard to work with, so there is an easier formula. You can also find the value from technology, such as the calculator. $s_{e}=\sqrt{\dfrac{S S_{y}-b^{*} S S_{x y}}{n-2}}$ Example $2$ finding the standard error of the estimate Find the standard error of the estimate for the beer data. To determine if there is a positive linear correlation, a random sample was taken of beer’s alcohol content and calories for several different beers ("Calories in beer," 2011), and the data are in Example $1$. Solution x = alcohol content in the beer y = calories in 12 ounce beer Using the TI-83/84, the results are in Figure $2$. The s in the results is the standard error of the estimate. So $s_{e} \approx 15.64$. To find the standard error of the estimate in R, the commands are lm.out = lm(dependent variable ~ independent variable) – this defines the linear model with a name so you can use it later. Then summary(lm.out) – this will produce most of the information you need for a regression and correlation analysis. In fact, the only thing R doesn’t produce with this command is the correlation coefficient. Otherwise, you can use the command to find the regression equation, coefficient of determination, test statistic, p-value for a two-tailed test, and standard error of the estimate. The results from R are lm.out=lm(calories~alcohol) summary(lm.out) Call: lm(formula = calories ~ alcohol) Residuals: $\begin{array} {ccccc} {\text{Min}} & {\text{1Q}} & {\text{Median}} & {\text{3Q}} & {\text{Max}} \{-30.253}&{-1.624}&{2.744}&{9.271}&{14.271} \end{array}$ Coefficients: $\begin{array}{ccccc} {}&{\text{Estimate Std.}}&{\text{Error}}&{\text{t value}}&{\text{Pr(>|t|)}} \ {\text{(Intercept)}}&{25.031}&{24.999}&{1.001}&{0.350038}\{\text{alcohol}}&{26.319}&{4.432}&{5.938}&{0.000577}\end{array}$ --- Signif. 
codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 15.64 on 7 degrees of freedom Multiple R-squared: 0.8344, Adjusted R-squared: 0.8107 F-statistic: 35.26 on 1 and 7 DF, p-value: 0.0005768 From this output, you can find the y-intercept is 25.031, the slope is 26.319, the test statistic is t = 5.938, the p-value for the two-tailed test is 0.000577. If you want the p-value for a one-tailed test, divide this number by 2. The standard error of the estimate is the residual standard error and is 15.64. There is some information in this output that you do not need. If you want to know how to calculate the standard error of the estimate from the formula, refer to Example $3$. Example $3$ finding the standard error of the estimate from the formula Find the standard error of the estimate for the beer data using the formula. To determine if there is a positive linear correlation, a random sample was taken of beer’s alcohol content and calories for several different beers ("Calories in beer," 2011), and the data are in Example $1$. Solution x = alcohol content in the beer y = calories in 12 ounce beer From Example $3$ : $\begin{array}{l}{S S_{y}=\sum(y-\overline{y})^{2}=10335.56} \ {S S_{x y}=\sum(x-\overline{x})(y-\overline{y})=327.6666} \ {n=9} \ {b=26.3}\end{array}$ The standard error of the estimate is \begin{aligned} s_{e} &=\sqrt{\dfrac{S S_{y}-b^{*} S S_{x y}}{n-2}} \ &=\sqrt{\dfrac{10335.56-26.3(327.6666)}{9-2}} \ &=15.67 \end{aligned} Prediction Interval for an Individual y Given the fixed value $x_{0}$, the prediction interval for an individual y is $\hat{y}-E<y<\hat{y}+E$ where $\begin{array}{l}{\hat{y}=a+b x} \ {E=t_{c} s_{e} \sqrt{1+\dfrac{1}{n}+\dfrac{\left(x_{o}-\overline{x}\right)^{2}}{S S_{x}}}} \ {d f=n-2}\end{array}$ Note To find $S S_{x}=\sum(x-\overline{x})^{2}$ remember, the standard derivation formula from chapter 3 $s_{x}=\sqrt{\dfrac{\sum(x-\overline{x})^{2}}{n-1}}$ So, $s_{x}=\sqrt{\dfrac{S S_{x}}{n-1}}$ Now solve for $S S_{x}$ $S S_{x}=s_{x}^{2}(n-1)$ You can get the standard deviation from technology. R will produce the prediction interval for you. The commands are (Note you probably already did the lm.out command. You do not need to do it again.) lm.out = lm(dependent variable ~ independent variable) – calculates the linear model predict(lm.out, newdata=list(independent variable = value), interval="prediction", level=C) – will compute a prediction interval for the independent variable set to a particular value (put that value in place of the word value), at a particular C level (given as a decimal) Example $4$ find the prediction interval Find a 95% prediction interval for the number of calories when the alcohol content is 6.5% using a random sample taken of beer’s alcohol content and calories ("Calories in beer," 2011). The data are in Example $1$. Solution x = alcohol content in the beer y = calories in 12 ounce beer Computing the prediction interval using the TI-83/84 calculator: From Example $2$ $\begin{array}{l}{\hat{y}=25.0+26.3 x} \ {x_{o}=6.50} \ {\hat{y}=25.0+26.3(6.50)=196 \text { calories }}\end{array}$ From Example #10.3.2 $s_{e} \approx 15.64$ $\begin{array}{l}{\overline{x}=5.517} \ {s_{x}=1.247497495} \ {n=9}\end{array}$ Now you can find \begin{aligned} S S_{x} &=s_{x}^{2}(n-1) \ &=(1.247497495)^{2}(9-1) \ &=12.45 \ d f &=n-2=9-2=7 \end{aligned} Now look in table A.2. Go down the first column to 7, then over to the column headed with 95%. 
$t_{c}=2.365$ \begin{aligned} E &=t_{c} s_{e} \sqrt{1+\dfrac{1}{n}+\dfrac{\left(x_{o}-\overline{x}\right)^{2}}{S S_{x}}} \ &=2.365(15.64) \sqrt{1+\dfrac{1}{9}+\dfrac{(6.50-5.517)^{2}}{12.45}} \ &=40.3 \end{aligned} Prediction interval is $\begin{array}{l}{\hat{y}-E<y<\hat{y}+E} \ {196-40.3<y<196+40.3} \ {155.7<y<236.3}\end{array}$ Computing the prediction interval using R: predict(lm.out, newdata=list(alcohol=6.5), interval = "prediction", level=0.95) $\begin{array}{ccc}{}&{\text { fit }} & {\text { lwr }} & {\text { upr }} \ {1} & {196.1022} & {155.7847}&{236 .4196}\end{array}$ fit = $\hat{\mathcal{Y}}$ when x = 6.5%. lwr = lower limit of prediction interval. upr = upper limit of prediction interval. So the prediction interval is $155.8<y<236.4$. Statistical interpretation: There is a 95% chance that the interval $155.8<y<236.4$ contains the true value for the calories when the alcohol content is 6.5%. Real world interpretation: If a beer has an alcohol content of 6.50% then it has between 156 and 236 calories. Example $5$ Doing a correlation and regression analysis using the ti-83/84 Example $1$ contains randomly selected high temperatures at various cities on a single day and the elevation of the city. Elevation (in feet) 7000 4000 6000 3000 7000 4500 5000 Temperature (°F) 50 60 48 70 55 55 60 Table $1$: Temperatures and Elevation of Cities on a Given Day 1. State the random variables. 2. Find a regression equation for elevation and high temperature on a given day. 3. Find the residuals and create a residual plot. 4. Use the regression equation to estimate the high temperature on that day at an elevation of 5500 ft. 5. Use the regression equation to estimate the high temperature on that day at an elevation of 8000 ft. 6. Between the answers to parts d and e, which estimate is probably more accurate and why? 7. Find the correlation coefficient and coefficient of determination and interpret both. 8. Is there enough evidence to show a negative correlation between elevation and high temperature? Test at the 5% level. 9. Find the standard error of the estimate. 10. Using a 95% prediction interval, find a range for high temperature for an elevation of 6500 feet. Solution a. x = elevation y = high temperature b. 1. A random sample was taken as stated in the problem. 2. The distribution for each high temperature value is normally distributed for every value of elevation. 1. Look at the scatter plot of high temperature versus elevation. Figure $4$: Scatter Plot of Temperature Versus Elevation The scatter plot looks fairly linear. 2. There are no points that appear to be outliers. 3. The residual plot for temperature versus elevation appears to be fairly random. (See Figure $7$.) It appears that the high temperature is normally distributed. All calculations computed using the TI-83/84 calculator. $\hat{y}=77.4-0.0039 x$ c. x y $\hat{\mathcal{Y}}$ $y-\hat{y}$ 7000 50 50.1 -0.1 4000 60 61.8 -1.8 6000 48 54.0 -6.0 3000 70 65.7 4.3 7000 55 50.1 4.9 4500 55 59.85 -4.85 5000 60 57.9 2.1 Table $2$: Residuals for Elevation vs. Temperature Data The residuals appear to be fairly random. d. $\begin{array}{l}{x_{o}=5500} \ {\hat{y}=77.4-0.0039(5500)=55.95^{\circ} F}\end{array}$ e. $\begin{array}{l}{x_{o}=8000} \ {\hat{y}=77.4-0.0039(8000)=46.2^{\circ} F}\end{array}$ f. Part d is more accurate, since it is interpolation and part e is extrapolation. g. From Figure $6$, the correlation coefficient is r $\approx$ -0.814, which is moderate to strong negative correlation. 
From Figure $6$, the coefficient of determination is $r^{2} \approx 0.663$, which means that 66.3% of the variability in high temperature is explained by the linear model. The other 33.7% is explained by other variables such as local weather conditions. h. 1. State the random variables in words. x = elevation y = high temperature 2. State the null and alternative hypotheses and the level of significance $\begin{array}{l}{H_{o} : \rho=0} \ {H_{A} : \rho<0} \ {\alpha=0.05}\end{array}$ 3. State and check the assumptions for the hypothesis test The assumptions for the hypothesis test were already checked part b. 4. Find the test statistic and p-value From Figure $6$, Test statistic: $t \approx-3.139$ p-value: $p \approx 0.0129$ 5. Conclusion Reject $H_{o}$ since the p-value is less than 0.05. 6. Interpretation There is enough evidence to show that there is a negative correlation between elevation and high temperatures. i. From Figure $6$, $s_{e} \approx 4.677$ j. $\hat{y}=77.4-0.0039(6500) \approx 52.1^{\circ} F$ $\begin{array}{l}{\overline{x}=5214.29} \ {s_{x}=1523.624} \ {n=7}\end{array}$ Now you can find \begin{aligned} S S_{x} &=s_{x}^{2}(n-1) \ &=(1523.623501)^{2}(7-1) \ &=13928571.43 \ d f &=n-2=7-2=5 \end{aligned} Now look in table A.2. Go down the first column to 5, then over to the column headed with 95%. $t_{c}=2.571$ So \begin{aligned} E &=t_{c} s_{e} \sqrt{1+\dfrac{1}{n}+\dfrac{\left(x_{o}-\overline{x}\right)^{2}}{S S_{x}}} \ &=2.571(4.677) \sqrt{1+\dfrac{1}{7}+\dfrac{(6500-5214.29)^{2}}{13928571.43}} \ &=13.5 \end{aligned} Prediction interval is $\begin{array}{l}{\hat{y}-E<y<\hat{y}+E} \ {52.1-13.5<y<52.1+13.5} \ {38.6<y<65.6}\end{array}$ Statistical interpretation: There is a 95% chance that the interval $38.6<y<65.6$ contains the true value for the temperature at an elevation of 6500 feet. Real world interpretation: A city of 6500 feet will have a high temperature between 38.6°F and 65.6°F. Though this interval is fairly wide, at least the interval tells you that the temperature isn’t that warm. Example $6$ doing a correlation and regression analysis using r Example $1$ contains randomly selected high temperatures at various cities on a single day and the elevation of the city. 1. State the random variables. 2. Find a regression equation for elevation and high temperature on a given day. 3. Find the residuals and create a residual plot. 4. Use the regression equation to estimate the high temperature on that day at an elevation of 5500 ft. 5. Use the regression equation to estimate the high temperature on that day at an elevation of 8000 ft. 6. Between the answers to parts d and e, which estimate is probably more accurate and why? 7. Find the correlation coefficient and coefficient of determination and interpret both. 8. Is there enough evidence to show a negative correlation between elevation and high temperature? Test at the 5% level. 9. Find the standard error of the estimate. 10. Using a 95% prediction interval, find a range for high temperature for an elevation of 6500 feet. Solution a. x = elevation y = high temperature b. 1. A random sample was taken as stated in the problem. 2. The distribution for each high temperature value is normally distributed for every value of elevation. 1. Look at the scatter plot of high temperature versus elevation. 
R command: plot(elevation, temperature, main="Scatter Plot for Temperature vs Elevation", xlab="Elevation (feet)", ylab="Temperature (degrees F)", ylim=c(0,80)) Figure $9$: Scatter Plot of Temperature Versus Elevation The scatter plot looks fairly linear. 2. The residual plot for temperature versus elevation appears to be fairly random. (See Figure 10.3.10.) It appears that the high temperature is normally distributed. Using R: Commands: lm.out=lm(temperature ~ elevation) summary(lm.out) Output: Call: lm(formula = temperature ~ elevation) Residuals: $\begin{array} {ccccccc} {1}&{2}&{3}&{4}&{5}&{6}&{7}\{ 0.1667}&{-1.6333}&{-5.7667}&{4 .4333}&{5 .1667}&{-4.6667}&{ 2.3000} \end{array}$ Coefficients: $\begin{array}{ccccc} {}&{\text{Estimate Std.}}&{\text{Error}}&{\text{t value}}&{\text{Pr(>|t|)}} \ {\text{(Intercept)}}&{77.36667}&{6.769182}&{11.429}&{8.98e-05 ***}\{\text{elevation}}&{-0.003933}&{0.001253}&{-3.139}&{0.0257*}\end{array}$ --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 4.677 on 5 degrees of freedom Multiple R-squared: 0.6633, Adjusted R-squared: 0.596 F-statistic: 9.852 on 1 and 5 DF, p-value: 0.0257 From the output you can see the slope = -0.0039 and the y-intercept = 77.4. So the regression equation is: $\hat{y}=77.4-0.0039 x$ c. R command: (notice these are also in the summary(lm.out) output, but if you have too many data points, then R only gives a numerical summary of the residuals.) residuals(lm.out) $\begin{array} {CCCCCCC} {1}&{2}&{3}&{4}&{5}&{6}&{7} \ {0.1666667}&{-1.63333333}&{-5.766667}&{4 .43333333}&{5 .1666667}&{-4.66666667}&{2.3000000} \end{array}$ So for the first x of 7000, the residual is approximately 0.1667. This means if you find the $\hat{y}$ for when x is 7000 and then subtract this answer from the y value of 50 that was measured, you would obtain 0.1667. Similar process is computed for the other residual values. To plot the residuals, the R command is plot(elevation, residuals(lm.out), main="Residual Plot for Temperautre vs Elevation", xlab="Elevation (feet)", ylab="Residuals") abline(0,0) The residuals appear to be fairly random. d. $\begin{array}{l}{x_{o}=5500} \ {\hat{y}=77.4-0.0039(5500)=55.95^{\circ} F}\end{array}$ e. $\begin{array}{l}{x_{o}=8000} \ {\hat{y}=77.4-0.0039(8000)=46.2^{\circ} F}\end{array}$ f. Part d is more accurate, since it is interpolation and part e is extrapolation. g. The R command for the correlation coefficient is cor(elevation, temperature) [1] -0.8144564 So, $r \approx-0.814$, which is moderate to strong negative correlation. From summary(lm.out), the coefficient of determination is the Multiple R-squared. So $r^{2} \approx 0.663$, which means that 66.3% of the variability in high temperature is explained by the linear model. The other 33.7% is explained by other variables such as local weather conditions. h. 1. State the random variables in words. x = elevation y = high temperature 2. . State the null and alternative hypotheses and the level of significance $\begin{array}{l}{H_{o} : \rho=0} \ {H_{A} : \rho<0} \ {\alpha=0.05}\end{array}$ 3. State and check the assumptions for the hypothesis test. The assumptions for the hypothesis test were already checked part b. 4. 
Find the test statistic and p-value The R command is cor.test(elevation, temperature, alternative = "less") Pearson's product-moment correlation data: elevation and temperature t = -3.1387, df = 5, p-value = 0.01285 alternative hypothesis: true correlation is less than 0 95 percent confidence interval: -1.0000000 -0.3074247 sample estimates: cor -0.8144564 Test statistic: $t \approx-3.1387$ and p-value: $p \approx 0.01285$ 5. Conclusion Reject $H_{o}$ since the p-value is less than 0.05. 6. Interpretation There is enough evidence to show that there is a negative correlation between elevation and high temperatures. i. From summary(lm.out), Residual standard error: 4.677. So, $s_{e} \approx 4.677$ j. R command is predict(lm.out, newdata=list(elevation = 6500), interval = "prediction", level=0.95) $\begin{array}{cccc}{}&{\text { fit }}&{ \text { lwr} } &{ \text { upr }} \ {1}&{51.8}&{38 .29672}&{65 .30328}\end{array}$ So when x = 6500 feet, $\hat{y}=51.8^{\circ} F \text { and } 38.29672<y<65.30328$. Statistical interpretation: There is a 95% chance that the interval $38.3<y<65.3$ contains the true value for the temperature at an elevation of 6500 feet. Real world interpretation: A city of 6500 feet will have a high temperature between 38.3°F and 65.3°F. Though this interval is fairly wide, at least the interval tells you that the temperature isn’t that warm. Homework Exercise $1$ For each problem, state the random variables. The data sets in this section are in the homework for section 10.1 and were also used in section 10.2. If you removed any data points as outliers in the other sections, remove them in this sections homework too. 1. When an anthropologist finds skeletal remains, they need to figure out the height of the person. The height of a person (in cm) and the length of their metacarpal bone one (in cm) were collected and are in Example $5$ ("Prediction of height," 2013). 1. Test at the 1% level for a positive correlation between length of metacarpal bone one and height of a person. 2. Find the standard error of the estimate. 3. Compute a 99% prediction interval for height of a person with a metacarpal length of 44 cm. 2. Example $6$ contains the value of the house and the amount of rental income in a year that the house brings in ("Capital and rental," 2013). 1. Test at the 5% level for a positive correlation between house value and rental amount. 2. Find the standard error of the estimate. 3. Compute a 95% prediction interval for the rental income on a house worth \$230,000. 3. The World Bank collects information on the life expectancy of a person in each country ("Life expectancy at," 2013) and the fertility rate per woman in the country ("Fertility rate," 2013). The data for 24 randomly selected countries for the year 2011 are in Example $7$. 1. Test at the 1% level for a negative correlation between fertility rate and life expectancy. 2. Find the standard error of the estimate. 3. Compute a 99% prediction interval for the life expectancy for a country that has a fertility rate of 2.7. 4. The World Bank collected data on the percentage of GDP that a country spends on health expenditures ("Health expenditure," 2013) and also the percentage of women receiving prenatal care ("Pregnant woman receiving," 2013). The data for the countries where this information is available for the year 2011 are in Example $8$. 1. Test at the 5% level for a correlation between percentage spent on health expenditure and the percentage of women receiving prenatal care. 2. 
Find the standard error of the estimate. 3. Compute a 95% prediction interval for the percentage of woman receiving prenatal care for a country that spends 5.0 % of GDP on health expenditure. 5. The height and weight of baseball players are in Example $9$ ("MLB heightsweights," 2013). 1. Test at the 5% level for a positive correlation between height and weight of baseball players. 2. Find the standard error of the estimate. 3. Compute a 95% prediction interval for the weight of a baseball player that is 75 inches tall. 6. Different species have different body weights and brain weights are in Example $10$. ("Brain2bodyweight," 2013). 1. Test at the 1% level for a positive correlation between body weights and brain weights. 2. Find the standard error of the estimate. 3. Compute a 99% prediction interval for the brain weight for a species that has a body weight of 62 kg. 7. A random sample of beef hotdogs was taken and the amount of sodium (in mg) and calories were measured. ("Data hotdogs," 2013) The data are in Example $11$. 1. Test at the 5% level for a correlation between amount of calories and amount of sodium. 2. Find the standard error of the estimate. 3. Compute a 95% prediction interval for the amount of sodium a beef hotdog has if it is 170 calories. 8. Per capita income in 1960 dollars for European countries and the percent of the labor force that works in agriculture in 1960 are in Example $12$ ("OECD economic development," 2013). 1. Test at the 5% level for a negative correlation between percent of labor force in agriculture and per capita income. 2. Find the standard error of the estimate. 3. Compute a 90% prediction interval for the per capita income in a country that has 21 percent of labor in agriculture. 9. Cigarette smoking and cancer have been linked. The number of deaths per one hundred thousand from bladder cancer and the number of cigarettes sold per capita in 1960 are in Example $13$ ("Smoking and cancer," 2013). 1. Test at the 1% level for a positive correlation between cigarette smoking and deaths of bladder cancer. 2. Find the standard error of the estimate. 3. Compute a 99% prediction interval for the number of deaths from bladder cancer when the cigarette sales were 20 per capita. 10. The weight of a car can influence the mileage that the car can obtain. A random sample of cars weights and mileage was collected and are in Example $14$ ("Passenger car mileage," 2013). 1. Test at the 5% level for a negative correlation between the weight of cars and mileage. 2. Find the standard error of the estimate. 3. Compute a 95% prediction interval for the mileage on a car that weighs 3800 pounds. Answer For hypothesis test just the conclusion is given. See solutions for entire answer. 1. a. Reject Ho, b. $s_{e} \approx 4.559$, c. $151.3161 \mathrm{cm}<y<187.3859 \mathrm{cm}$ 3. a. Reject Ho, b. $s_{e} \approx 3.204$, c. $62.945 \text { years }<y<81.391 \text{years}$ 5. a. Reject Ho, b. $s_{e} \approx 15.33$, c. $176.02 \text { inches }<y<240.92 \text{inches}$ 7. a. Reject Ho, b. $s_{e} \approx 48.58$, c. $348.46 \mathrm{mg}<y<559.38 \mathrm{mg}$ 9. a. Reject Ho, b. $s_{e} \approx 0.6838$, c. $1.613 \text { hundred thousand }<y<5.432 \text{ hundred thousand}$ Data Source: Brain2bodyweight. (2013, November 16). Retrieved from http://wiki.stat.ucla.edu/socr/index...ain2BodyWeight Calories in beer, beer alcohol, beer carbohydrates. (2011, October 25). Retrieved from www.beer100.com/beercalories.htm Capital and rental values of Auckland properties. (2013, September 26). 
Retrieved from http://www.statsci.org/data/oz/rentcap.html Data hotdogs. (2013, November 16). Retrieved from http://wiki.stat.ucla.edu/socr/index...D_Data_HotDogs Fertility rate. (2013, October 14). Retrieved from http://data.worldbank.org/indicator/SP.DYN.TFRT.IN Health expenditure. (2013, October 14). Retrieved from http://data.worldbank.org/indicator/SH.XPD.TOTL.ZS Life expectancy at birth. (2013, October 14). Retrieved from http://data.worldbank.org/indicator/SP.DYN.LE00.IN MLB heightsweights. (2013, November 16). Retrieved from http://wiki.stat.ucla.edu/socr/index...HeightsWeights OECD economic development. (2013, December 04). Retrieved from lib.stat.cmu.edu/DASL/Datafiles/oecdat.html Passenger car mileage. (2013, December 04). Retrieved from lib.stat.cmu.edu/DASL/Datafiles/carmpgdat.html Prediction of height from metacarpal bone length. (2013, September 26). Retrieved from http://www.statsci.org/data/general/stature.html Pregnant woman receiving prenatal care. (2013, October 14). Retrieved from http://data.worldbank.org/indicator/SH.STA.ANVC.ZS Smoking and cancer. (2013, December 04). Retrieved from lib.stat.cmu.edu/DASL/Datafil...cancerdat.html
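As one more recap of the prediction interval formula, the margin of error E from Example $4$ can be computed in R rather than with table A.2, using qt() for the critical value and the residual standard error pulled from the fitted model. This is a minimal sketch that assumes the alcohol and calories vectors from section 10.2 are already entered; the names se, tc, and x0 are just labels for this sketch.

lm.out = lm(calories ~ alcohol)   # linear model for the beer data
n = length(alcohol)   # n = 9
se = summary(lm.out)$sigma   # residual standard error, about 15.64
SSx = var(alcohol)*(n - 1)   # SSx = s_x^2 (n - 1), about 12.45
tc = qt(0.975, df = n - 2)   # 95% critical value with 7 degrees of freedom, about 2.365
x0 = 6.5   # alcohol content of interest
yhat = predict(lm.out, newdata=list(alcohol=x0))   # about 196 calories
E = tc*se*sqrt(1 + 1/n + (x0 - mean(alcohol))^2/SSx)   # margin of error, about 40.3
c(yhat - E, yhat + E)   # roughly 155.8 to 236.4 calories
predict(lm.out, newdata=list(alcohol=x0), interval="prediction", level=0.95)   # built-in check
Both approaches give the same interval as the by-hand work in Example $4$, which is a useful way to catch arithmetic mistakes.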
This chapter presents material on three more hypothesis tests. One is used to determine significant relationship between two qualitative variables, the second is used to determine if the sample data has a particular distribution, and the last is used to determine significant relationships between means of 3 or more samples. • 11.1: Chi-Square Test for Independence • 11.2: Chi-Square Goodness of Fit • 11.3: Analysis of Variance (ANOVA) There are times where you want to compare three or more population means. One idea is to just test different combinations of two means. The problem with that is that your chance for a type I error increases. Instead you need a process for analyzing all of them at the same time. This process is known as analysis of variance (ANOVA). The test statistic for the ANOVA is fairly complicated, you will want to use technology to find the test statistic and p-value. 11: Chi-Square and ANOVA Tests Remember, qualitative data is where you collect data on individuals that are categories or names. Then you would count how many of the individuals had particular qualities. An example is that there is a theory that there is a relationship between breastfeeding and autism. To determine if there is a relationship, researchers could collect the time period that a mother breastfed her child and if that child was diagnosed with autism. Then you would have a table containing this information. Now you want to know if each cell is independent of each other cell. Remember, independence says that one event does not affect another event. Here it means that having autism is independent of being breastfed. What you really want is to see if they are not independent. In other words, does one affect the other? If you were to do a hypothesis test, this is your alternative hypothesis and the null hypothesis is that they are independent. There is a hypothesis test for this and it is called the Chi-Square Test for Independence. Technically it should be called the Chi-Square Test for Dependence, but for historical reasons it is known as the test for independence. Just as with previous hypothesis tests, all the steps are the same except for the assumptions and the test statistic. Hypothesis Test for Chi-Square Test 1. State the null and alternative hypotheses and the level of significance $H_{o}$: the two variables are independent (this means that the one variable is not affected by the other) $H_{A}$: the two variables are dependent (this means that the one variable is affected by the other) Also, state your $\alpha$ level here. 2. State and check the assumptions for the hypothesis test 1. A random sample is taken. 2. Expected frequencies for each cell are greater than or equal to 5 (The expected frequencies, E, will be calculated later, and this assumption means $E \geq 5$). 3. Find the test statistic and p-value Finding the test statistic involves several steps. First the data is collected and counted, and then it is organized into a table (in a table each entry is called a cell). These values are known as the observed frequencies, which the symbol for an observed frequency is O. Each table is made up of rows and columns. Then each row is totaled to give a row total and each column is totaled to give a column total. The null hypothesis is that the variables are independent. 
Using the multiplication rule for independent events you can calculate the probability of being one value of the first variable, A, and one value of the second variable, B (the probability of a particular cell $P(A \text { and } B) )$. Remember in a hypothesis test, you assume that $H_{o}$ is true, the two variables are assumed to be independent. \begin{align*} P(A \text { and } B) &=P(A) \cdot P(B) \text { if } A \text { and } B are independent \[4pt] &=\dfrac{\text { number of ways } A \text { can happen }}{\text { total number of individuals }} \cdot \dfrac{\text { number of ways } B \text { can happen }}{\text { total number of individuals }} \[4pt] &= \dfrac{\text { row total }}{n} * \dfrac{\text { column total }}{n} \end{align*} Now you want to find out how many individuals you expect to be in a certain cell. To find the expected frequencies, you just need to multiply the probability of that cell times the total number of individuals. Do not round the expected frequencies. Expected frequency $(\operatorname{cell} A \text { and } B)=E(A \text { and } B)$ $\begin{array}{l}{=n\left(\dfrac{\text { row total }}{n} \cdot \dfrac{\text { column total }}{n}\right)} \ {=\dfrac{\text { row total -column total }}{n}}\end{array}$ If the variables are independent the expected frequencies and the observed frequencies should be the same. The test statistic here will involve looking at the difference between the expected frequency and the observed frequency for each cell. Then you want to find the “total difference” of all of these differences. The larger the total, the smaller the chances that you could find that test statistic given that the assumption of independence is true. That means that the assumption of independence is not true. How do you find the test statistic? First find the differences between the observed and expected frequencies. Because some of these differences will be positive and some will be negative, you need to square these differences. These squares could be large just because the frequencies are large, you need to divide by the expected frequencies to scale them. Then finally add up all of these fractional values. This is the test statistic. Test Statistic: The symbol for Chi-Square is $\chi^{2}$ $\chi^{2}=\sum \dfrac{(O-E)^{2}}{E}$ where O is the observed frequency and E is the expected frequency Distribution of Chi-Square $\chi^{2}$ has different curves depending on the degrees of freedom. It is skewed to the right for small degrees of freedom and gets more symmetric as the degrees of freedom increases (see Figure $1$). Since the test statistic involves squaring the differences, the test statistics are all positive. A chi-squared test for independence is always right tailed. p-value: Using the TI-83/84: $\chi \text { cdf (lower limit, } 1 \mathrm{E} 99, d f )$ Using R: $1-\text { pchisq }\left(x^{2}, d f\right)$ Where the degrees of freedom is $d f=(\# \text { of rows }-1) *(\# \text { of columns }-1)$ 4. Conclusion This is where you write reject $H_{o}$ or fail to reject $H_{o}$. The rule is: if the p-value < $\alpha$, then reject $H_{o}$. If the p-value $\geq \alpha$, then fail to reject $H_{o}$. 5. Interpretation This is where you interpret in real world terms the conclusion to the test. The conclusion for a hypothesis test is that you either have enough evidence to show $H_{A}$ is true, or you do not have enough evidence to show $H_{A}$ is true. Example $1$ hypothesis test with chi-square test using formula Is there a relationship between autism and breastfeeding? 
To determine if there is, a researcher asked mothers of autistic and non-autistic children to say what time period they breastfed their children. The data is in table #11.1.1 (Schultz, Klonoff-Cohen, Wingard, Askhoomoff, Macera, Ji & Bacher, 2006). Do the data provide enough evidence to show that that breastfeeding and autism are independent? Test at the1% level. Autis261m Breast Feeding Timelines Row Total None Less than 2 months 2 to 6 months More than 6 months Yes 241 198 164 215 818 No 20 25 27 44 116 Column Total 261 223 191 259 934 Table $1$: Autism Versus Breastfeeding Solution 1. State the null and alternative hypotheses and the level of significance $H_{o}$: Breastfeeding and autism are independent $H_{A}$: Breastfeeding and autism are dependent $\alpha$ = 0.01 2. State and check the assumptions for the hypothesis test 1. A random sample of breastfeeding time frames and autism incidence was taken. 2. Expected frequencies for each cell are greater than or equal to 5 (ie. $E \geq 5$). See step 3. All expected frequencies are more than 5. 3. Find the test statistic and p-value Test statistic: First find the expected frequencies for each cell $E(\text { Autism and no breastfeeding })=\dfrac{818^{*} 261}{934} \approx 228.585$ $E(\text { Autism and }<2 \text { months })=\dfrac{818^{*} 223}{934} \approx 195.304$ $E(\text { Autism and } 2 \text { to } 6 \text { months })=\dfrac{818^{*} 191}{934} \approx 167.278$ $E(\text { Autism and more than } 6 \text { months })=\dfrac{818 * 259}{934} \approx 226.833$ Others are done similarly. It is easier to do the calculations for the test statistic with a table, the others are in table #11.1.2 along with the calculation for the test statistic. (Note: the column of O-E should add to 0 or close to 0.) O E O-E $(O-E)^{2}$ $(O-E)^{2} / E$ 241 228.585 12.415 154.132225 0.674288448 198 195.304 2.696 7.268416 0.03721591 164 167.278 -3.278 10.745284 0.064236086 215 226.833 -11.833 140.019889 0.617281828 20 32.4154 -12.4154 154.1421572 4.755213792 25 27.6959 -2.6959 7.26787681 0.262417066 27 23.7216 3.2784 10.74790656 0.453085229 44 32.167 11.833 140.019889 4.352904809 Total 0.0001 11.2166432 = $\chi^{2}$ Table $2$: Calculations for Chi-Square Test Statistic The test statistic formula is $\chi^{2}=\sum \dfrac{(O-E)^{2}}{E}$, which is the total of the last column in Example $2$. p-value: $d f=(2-1)^{*}(4-1)=3$ Using TI-83/84: $\chi \operatorname{cdf}(11.2166432,1 \mathrm{E} 99,3) \approx 0.01061$ Using R: $1-\text{pchisq}(11.2166432,3) \approx 0.01061566$ 4. Conclusion Fail to reject $H_{o}$ since the p-value is more than 0.01. 5. Interpretation There is not enough evidence to show that breastfeeding and autism are dependent. This means that you cannot say that the whether a child is breastfed or not will indicate if that the child will be diagnosed with autism. Example $2$ hypothesis test with chi-square test using technology Is there a relationship between autism and breastfeeding? To determine if there is, a researcher asked mothers of autistic and non-autistic children to say what time period they breastfed their children. The data is in Example $1$ (Schultz, Klonoff-Cohen, Wingard, Askhoomoff, Macera, Ji & Bacher, 2006). Do the data provide enough evidence to show that that breastfeeding and autism are independent? Test at the1% level. Solution 1. State the null and alternative hypotheses and the level of significance $H_{o}$: Breastfeeding and autism are independent $H_{A}$: Breastfeeding and autism are dependent $\alpha$ = 0.01 2. 
State and check the assumptions for the hypothesis test 1. A random sample of breastfeeding time frames and autism incidence was taken. 2. Expected frequencies for each cell are greater than or equal to 5 (ie. $E \geq 5$). See step 3. All expected frequencies are more than 5. 3. Find the test statistic and p-value Test statistic: To use the TI-83/84 calculator to compute the test statistic, you must first put the data into the calculator. However, this process is different than for other hypothesis tests. You need to put the data in as a matrix instead of in the list. Go into the MATRX menu then move over to EDIT and choose 1:[A]. This will allow you to type the table into the calculator. Figure $2$ shows what you will see on your calculator when you choose 1:[A] from the EDIT menu. The table has 2 rows and 4 columns (don’t include the row total column and the column total row in your count). You need to tell the calculator that you have a 2 by 4. The 1 X1 (you might have another size in your matrix, but it doesn’t matter because you will change it) on the calculator is the size of the matrix. So type 2 ENTER and 4 ENTER and the calculator will make a matrix of the correct size. See Figure $3$. Now type the table in by pressing ENTER after each cell value. Figure $4$ contains the complete table typed in. Once you have the data in, press QUIT. To run the test on the calculator, go into STAT, then move over to TEST and choose $\chi^{2}$-Test from the list. The setup for the test is in Figure $5$. Once you press ENTER on Calculate you will see the results in Figure $6$. The test statistic is $\chi^{2} \approx 11.2167$ and the p-value is $p \approx 0.01061$. Notice that the calculator calculates the expected values for you and places them in matrix B. To eview the expected values, go into MATRX and choose 2:[B]. Figure $7$ shows the output. Press the right arrows to see the entire matrix. To compute the test statistic and p-value with R, row1 = c(data from row 1 separated by commas) row2 = c(data from row 2 separated by commas) keep going until you have all of your rows typed in. data.table = rbind(row1, row2, …) – makes the data into a table. You can call it what ever you want. It does not have to be data.table. data.table – use if you want to look at the table chisq.test(data.table) – calculates the chi-squared test for independence chisq.test(data.table)$expected – let’s you see the expected values For this example, the commands would be row1 = c(241, 198, 164, 215) row2 = c(20, 25, 27, 44) data.table = rbind(row1, row2) data.table Output: [,1] [,2] [,3] [,4] row1 241 198 164 215 row2 20 25 27 44 chisq.test(data.table) Output: Pearson's Chi-squared test data: data.table X-squared = 11.217, df = 3, p-value = 0.01061 chisq.test(data.table)$expected Output: [,1] [,2] [,3] [,4] row1 228.58458 195.30407 167.27837 226.83298 row2 32.41542 27.69593 23.72163 32.16702 The test statistic is $\chi^{2} \approx 11.217$ and the p-value is $p \approx 0.01061$. 4. Conclusion Fail to reject $H_{o}$ since the p-value is more than 0.01. 5. Interpretation There is not enough evidence to show that breastfeeding and autism are dependent. This means that you cannot say that the whether a child is breastfed or not will indicate if that the child will be diagnosed with autism. Example $3$ hypothesis test with chi-square test using formula The World Health Organization (WHO) keeps track of how many incidents of leprosy there are in the world. 
Using the WHO regions and the World Banks income groups, one can ask if an income level and a WHO region are dependent on each other in terms of predicting where the disease is. Data on leprosy cases in different countries was collected for the year 2011 and a summary is presented in Table $3$ ("Leprosy: Number of," 2013). Is there evidence to show that income level and WHO region are independent when dealing with the disease of leprosy? Test at the 5% level. WHO Region World Bank Income Group Row Total High Income Upper Middle Income Lower Middle Income Low Income Americas 174 36028 615 0 36817 Eastern Mediterranean 54 6 1883 604 2547 Europe 10 0 0 0 10 Western Pacific 26 216 3689 1155 5086 Africa 0 39 1986 15928 17953 South-East Asia 0 0 149896 10236 160132 Column Total 264 36289 158069 27923 222545 Table $3$: Number of Leprosy Cases Solution 1. State the null and alternative hypotheses and the level of significance $H_{o}$: WHO region and Income Level when dealing with the disease of leprosy are independent $H_{A}$: WHO region and Income Level when dealing with the disease of leprosy are dependent $\alpha$ = 0.05 2. State and check the assumptions for the hypothesis test 1. A random sample of incidence of leprosy was taken from different countries and the income level and WHO region was taken. 2. Expected frequencies for each cell are greater than or equal to 5 (ie. $E \geq 5$). See step 3. There are actually 4 expected frequencies that are less than 5, and the results of the test may not be valid. If you look at the expected frequencies you will notice that they are all in Europe. This is because Europe didn’t have many cases in 2011. 3. Find the test statistic and p-value Test statistic: First find the expected frequencies for each cell. $E(\text { Americas and High Income })=\dfrac{36817 * 264}{222545} \approx 43.675$ $E(\text { Americas and Upper Middle Income })=\dfrac{36817 * 36289}{222545} \approx 6003.514$ $E (\text { Americas and Lower Middle Income) }=\dfrac{36817 * 158069}{222545} \approx 26150.335$ $E(\text { Americas and Lower Income })=\dfrac{36817 * 27923}{222545} \approx 4619.475$ Others are done similarly. It is easier to do the calculations for the test statistic with a table, and the others are in Example $4$ along with the calculation for the test statistic. 
O E O-E $(O-E)^{2}$ $(O-E)^{2} / E$ 174 43.675 130.325 16984.564 388.8838719 54 3.021 50.979 2598.813 860.1218328 10 0.012 9.988 99.763 8409.746711 26 6.033 19.967 398.665 66.07628214 0 21.297 -21.297 453.572 21.29722977 0 189.961 -189.961 36085.143 189.9608978 36028 6003.514 30024.486 901469735.315 150157.0038 6 415.323 -409.323 167545.414 403.4097962 0 1.631 -1.631 2.659 1.6306365 216 829.342 -613.342 376188.071 453.5983897 39 2927.482 -2888.482 8343326.585 2850.001268 0 26111.708 -26111.708 681821316.065 26111.70841 615 26150.335 -25535.335 652053349.724 24934.7988 1883 1809.080 73.290 5464.144 3.020398811 0 7.103 -7.103 50.450 7.1027882 3689 3612.478 76.522 5855.604 1.620938405 1986 12751.636 -10765.636 115898911.071 9088.944681 149896 113738.368 36157.632 1307374351.380 11494.57632 0 4619.475 -4619.475 21339550.402 4619.475122 604 319.575 284.425 80897.421 253.1404187 0 1.255 -1.255 1.574 1.25471253 1155 638.147 516.853 267137.238 418.6140882 15928 2252.585 13675.415 187016964.340 83023.25138 10236 20091.963 -9855.963 97140000.472 4834.769106 Total 0.000 328594.008 = $\chi^{2}$ Table $4$: Calculations for Chi-Square Test Statistic The test statistic formula is $\chi^{2}=\sum \dfrac{(O-E)^{2}}{E}$, which is the total of the last column in Example $2$. p-value: $d f=(6-1) *(4-1)=15$ Using the TI-83/84: $\chi \operatorname{cdf}(328594.008,1 \mathrm{E} 99,15) \approx 0$ Using R: $1-\text { pchisq }(328594.008,15) \approx 0$ 4. Conclusion Reject $H_{o}$ since the p-value is less than 0.05. 5. Interpretation There is enough evidence to show that WHO region and income level are dependent when dealing with the disease of leprosy. WHO can decide how to focus their efforts based on region and income level. Do remember though that the results may not be valid due to the expected frequencies not all be more than 5. Example $4$ hypothesis test with chi-square test using technology The World Health Organization (WHO) keeps track of how many incidents of leprosy there are in the world. Using the WHO regions and the World Banks income groups, one can ask if an income level and a WHO region are dependent on each other in terms of predicting where the disease is. Data on leprosy cases in different countries was collected for the year 2011 and a summary is presented in Table $3$ ("Leprosy: Number of," 2013). Is there evidence to show that income level and WHO region are independent when dealing with the disease of leprosy? Test at the 5% level. Solution 1. State the null and alternative hypotheses and the level of significance $H_{o}$: WHO region and Income Level when dealing with the disease of leprosy are independent $H_{A}$: WHO region and Income Level when dealing with the disease of leprosy are dependent $\alpha$ = 0.05 2. State and check the assumptions for the hypothesis test 1. A random sample of incidence of leprosy was taken from different countries and the income level and WHO region was taken. 2. Expected frequencies for each cell are greater than or equal to 5 (ie. $E \geq 5$). See step 3. There are actually 4 expected frequencies that are less than 5, and the results of the test may not be valid. If you look at the expected frequencies you will notice that they are all in Europe. This is because Europe didn’t have many cases in 2011. 3. Find the test statistic and p-value Test statistic: Using the TI-83/84. See Example $2$ for the process of doing the test on the calculator. Remember, you need to put the data in as a matrix instead of in the list. 
$\chi^{2} \approx 328594.0079$

Press the right arrow to look at the other expected frequencies.

p-value: $p-\text {value} \approx 0$

Using R:

row1=c(174, 36028, 615, 0)
row2=c(54, 6, 1883, 604)
row3=c(10, 0, 0, 0)
row4=c(26, 216, 3689, 1155)
row5=c(0, 39, 1986, 15928)
row6=c(0, 0, 149896, 10236)
data.table=rbind(row1, row2, row3, row4, row5, row6) – combines the rows into the table that chisq.test needs
chisq.test(data.table)

Pearson's Chi-squared test
data: data.table
X-squared = 328590, df = 15, p-value < 2.2e-16

Warning message:
In chisq.test(data.table) : Chi-squared approximation may be incorrect

chisq.test(data.table)$expected

Expected frequencies (rows in the same order as the data table; columns in the order High Income, Upper Middle Income, Lower Middle Income, Low Income):
row1 (Americas): 43.675, 6003.514, 26150.34, 4619.475
row2 (Eastern Mediterranean): 3.021, 415.323, 1809.080, 319.575
row3 (Europe): 0.012, 1.631, 7.103, 1.255
row4 (Western Pacific): 6.033, 829.342, 3612.478, 638.147
row5 (Africa): 21.297, 2927.482, 12751.64, 2252.585
row6 (South-East Asia): 189.961, 26111.708, 113738.4, 20091.963

Warning message:
In chisq.test(data.table) : Chi-squared approximation may be incorrect

$\chi^{2}=328590$ and p-value < $2.2 \times 10^{-16}$

4. Conclusion

Reject $H_{o}$ since the p-value is less than 0.05.

5. Interpretation

There is enough evidence to show that WHO region and income level are dependent when dealing with the disease of leprosy. WHO can decide how to focus their efforts based on region and income level. Do remember though that the results may not be valid because not all of the expected frequencies are at least 5.

Homework

Exercise $1$

In each problem show all steps of the hypothesis test. If some of the assumptions are not met, note that the results of the test may not be correct and then continue the process of the hypothesis test.

1. The number of people who survived the Titanic based on class and sex is in Table $5$ ("Encyclopedia Titanica," 2013). Is there enough evidence to show that the class and the sex of a person who survived the Titanic are independent? Test at the 5% level.

Class: Female, Male, Total
1st: 134, 59, 193
2nd: 94, 25, 119
3rd: 80, 58, 138
Total: 308, 142, 450
Table $5$: Surviving the Titanic

2. Researchers watched groups of dolphins off the coast of Ireland in 1998 to determine what activities the dolphins partake in at certain times of the day ("Activities of dolphin," 2013). The numbers in Table $6$ represent the number of groups of dolphins that were partaking in an activity at certain times of day. Is there enough evidence to show that the activity and the time period are independent for dolphins? Test at the 1% level.

Activity: Morning, Noon, Afternoon, Evening, Row Total
Travel: 6, 6, 14, 13, 39
Feed: 28, 4, 0, 56, 88
Social: 38, 5, 9, 10, 62
Column Total: 72, 15, 23, 79, 189
Table $6$: Dolphin Activity

3. Is there a relationship between autism and what an infant is fed? To determine if there is, a researcher asked mothers of autistic and non-autistic children to say what they fed their infant. The data is in Table $7$ (Schultz, Klonoff-Cohen, Wingard, Askhoomoff, Macera, Ji & Bacher, 2006). Do the data provide enough evidence to show that what an infant is fed and autism are independent? Test at the 1% level.

Autism: Breast feeding, Formula with DHA/ARA, Formula without DHA/ARA, Row Total
Yes: 12, 39, 65, 116
No: 6, 22, 10, 38
Column Total: 18, 61, 75, 164
Table $7$: Autism Versus Breastfeeding

4. A person's educational attainment and age group were collected by the U.S. Census Bureau in 1984 to see if age group and educational attainment are related.
The counts in thousands are in Example $8$ ("Education by age," 2013). Do the data show that educational attainment and age are independent? Test at the 5% level. Education Age Group Row Total 25-34 35-44 45-54 55-64 >64 Did not complete HS 5416 5030 5777 7606 13746 37575 Completed HS 16431 1855 9435 8795 7558 44074 College 1-3 years 8555 5576 3124 2524 2503 22282 College 4 or more years 9771 7596 3904 3109 2483 26863 Column Total 40173 20057 22240 22034 26290 130794 Table $8$: Educational Attainment and Age Group 5. Students at multiple grade schools were asked what their personal goal (get good grades, be popular, be good at sports) was and how important good grades were to them (1 very important and 4 least important). The data is in Example $9$ ("Popular kids datafile," 2013). Do the data provide enough evidence to show that goal attainment and importance of grades are independent? Test at the 5% level. Goal Grades Importance Rating Row Total 1 2 3 4 Grades 70 66 55 56 247 Popular 14 33 45 49 141 Sports 10 24 33 23 90 Column Total 94 123 133 128 478 Table $9$: Personal Goal and Importance of Grades 6. Students at multiple grade schools were asked what their personal goal (get good grades, be popular, be good at sports) was and how important being good at sports were to them (1 very important and 4 least important). The data is in Example $10$ ("Popular kids datafile," 2013). Do the data provide enough evidence to show that goal attainment and importance of sports are independent? Test at the 5% level. Goal Sports Importance Rating Row Total 1 2 3 4 Grades 83 81 55 28 247 Popular 32 49 43 17 141 Sports 50 24 14 2 90 Column Total 165 154 112 47 478 Table $10$: Personal Goal and Importance of Sports 7. Students at multiple grade schools were asked what their personal goal (get good grades, be popular, be good at sports) was and how important having good looks were to them (1 very important and 4 least important). The data is in Example $11$ ("Popular kids datafile," 2013). Do the data provide enough evidence to show that goal attainment and importance of looks are independent? Test at the 5% level. Goal Looks Importance Rating Row Total 1 2 3 4 Grades 80 66 66 35 247 Popular 81 30 18 12 141 Sports 24 30 17 19 90 Column Total 185 126 101 66 478 Table $11$: Personal Goal and Importance of Looks 8. Students at multiple grade schools were asked what their personal goal (get good grades, be popular, be good at sports) was and how important having money were to them (1 very important and 4 least important). The data is in Example $12$ ("Popular kids datafile," 2013). Do the data provide enough evidence to show that goal attainment and importance of money are independent? Test at the 5% level. Goal Money Importance Rating Row Total 1 2 3 4 Grades 14 34 71 128 247 Popular 14 29 35 63 141 Sports 6 12 26 46 90 Column Total 34 75 132 237 478 Table $12$: Personal Goal and Importance of Money Answer For all hypothesis tests, just the conclusion is given. See solutions for the entire answer. 1. Reject Ho 3. Reject Ho 5. Reject Ho 7. Reject Ho
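Several of the homework problems above call for the same chi-square test for independence shown in the examples. As a rough template, here is a minimal R sketch following the pattern of the technology example; the counts shown are the Titanic counts from problem 1, and the object names (such as observed.table) are simply choices made for illustration.

row1=c(134, 59) # 1st class: female, male
row2=c(94, 25) # 2nd class
row3=c(80, 58) # 3rd class
observed.table=rbind(row1, row2, row3) # combine the rows into one table
chisq.test(observed.table) # gives the test statistic, df, and p-value
chisq.test(observed.table)$expected # check that every expected frequency is at least 5

The same pattern works for any of the tables in this exercise set; only the rows change.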
In probability, you calculated probabilities using both experimental and theoretical methods. There are times when it is important to determine how well the experimental values match the theoretical values. An example of this is if you wish to verify if a die is fair. To determine if observed values fit the expected values, you want to see if the difference between the observed values and the expected values is large enough to say that the test statistic is unlikely to happen if you assume that the observed values fit the expected values. The test statistic in this case is also the chi-square. The process is the same as for the chi-square test for independence.

Hypothesis Test for Goodness of Fit

1. State the null and alternative hypotheses and the level of significance

$H_{o}$: The data are consistent with a specific distribution
$H_{A}$: The data are not consistent with a specific distribution
Also, state your $\alpha$ level here.

2. State and check the assumptions for the hypothesis test

1. A random sample is taken.
2. Expected frequencies for each cell are greater than or equal to 5 (the expected frequencies, E, will be calculated later, and this assumption means $E \geq 5$).

3. Find the test statistic and p-value

Finding the test statistic involves several steps. First the data is collected and counted, and then it is organized into a table (in a table each entry is called a cell). These values are known as the observed frequencies, and the symbol for an observed frequency is O. The table is made up of k entries. The total number of observed frequencies is n. The expected frequencies are calculated by multiplying the probability of each entry, p, times n.

$\text{Expected frequency (entry } i)=E=n * p$

Test Statistic:

$\chi^{2}=\sum \dfrac{(O-E)^{2}}{E}$

where O is the observed frequency and E is the expected frequency. Again, the test statistic involves squaring the differences, so the test statistics are all positive. Thus a chi-square test for goodness of fit is always right tailed.

p-value:

Using the TI-83/84: $\chi^{2} \text{cdf}(\text{lower limit}, 1\mathrm{E}99, df)$

Using R: $1-\text{pchisq}\left(\chi^{2}, df\right)$

where the degrees of freedom are df = k - 1

4. Conclusion

This is where you write reject $H_{o}$ or fail to reject $H_{o}$. The rule is: if the p-value < $\alpha$, then reject $H_{o}$. If the p-value $\geq \alpha$, then fail to reject $H_{o}$.

5. Interpretation

This is where you interpret in real world terms the conclusion to the test. The conclusion for a hypothesis test is that you either have enough evidence to show $H_{A}$ is true, or you do not have enough evidence to show $H_{A}$ is true.

Example $1$ goodness of fit test using the formula

Suppose you have a die that you are curious if it is fair or not. If it is fair then the proportion for each value should be the same. You need to find the observed frequencies, and to accomplish this you roll the die 500 times and count how often each side comes up. The data is in Table $1$. Do the data show that the die is fair? Test at the 5% level.

Die values: 1, 2, 3, 4, 5, 6, Total
Observed Frequency: 78, 87, 87, 76, 85, 87, 500
Table $1$: Observed Frequencies of Die

Solution

1. State the null and alternative hypotheses and the level of significance

$H_{o}$: The observed frequencies are consistent with the distribution for a fair die (the die is fair)
$H_{A}$: The observed frequencies are not consistent with the distribution for a fair die (the die is not fair)
$\alpha$ = 0.05

2.
State and check the assumptions for the hypothesis test 1. A random sample is taken since each throw of a die is a random event. 2. Expected frequencies for each cell are greater than or equal to 5. See step 3. 3. Find the test statistic and p-value First you need to find the probability of rolling each side of the die. The sample space for rolling a die is {1, 2, 3, 4, 5, 6}. Since you are assuming that the die is fair, then $P(1)=P(2)=P(3)=P(4)=P(5)=P(6)=\dfrac{1}{6}$. Now you can find the expected frequency for each side of the die. Since all the probabilities are the same, then each expected frequency is the same. $\text{Expected Frequency} =E=n^{*} p=500 * \dfrac{1}{6} \approx 83.33$ Test Statistic: It is easier to calculate the test statistic using a table. O E O-E $(O-E)^{2}$ $\dfrac{(O-E)^{2}}{E}$ 78 83.33 -5.22 28.4089 0.340920437 87 83.33 3.67 13.4689 0.161633265 87 83.33 3.67 13.4689 0.161633265 76 83.33 -7.33 53.7289 0.644772591 85 83.33 1.67 2.7889 0.033468139 87 83.33 3.67 13.4689 0.161633265 Total 0.02 $\chi^{2} \approx 1.504060962$ Table $2$: Calculation of the Chi-Square Test Statistic The test statistic is $\chi^{2} \approx 1.504060962$ The degrees of freedom are df = k - 1 = 6 - 1 = 5 Using TI-83/84: $p-\text {value}=\chi^{2} \operatorname{cdf}(1.50406096,1 E 99,5) \approx 0.913$ Using R: $p-\text {value}=1-\text { pchisq }(1.50406096,5) \approx 0.9126007$ 4. Conclusion Fail to reject $H_{o}$ since the p-value is greater than 0.05. 5. Interpretation There is not enough evidence to show that the die is not consistent with the distribution for a fair die. There is not enough evidence to show that the die is not fair. Example $2$ goodness of fit test using technology Suppose you have a die that you are curious if it is fair or not. If it is fair then the proportion for each value should be the same. You need to find the observed frequencies and to accomplish this you roll the die 500 times and count how often each side comes up. The data is in Example $1$. Do the data show that the die is fair? Test at the 5% level. Solution 1. State the null and alternative hypotheses and the level of significance $H_{o}$: The observed frequencies are consistent with the distribution for fair die (the die is fair) $H_{A}$: The observed frequencies are not consistent with the distribution for fair die (the die is not fair) $\alpha$ = 0.05 2. State and check the assumptions for the hypothesis test 1. A random sample is taken since each throw of a die is a random event. 2. Expected frequencies for each cell are greater than or equal to 5. See step 3. 3. Find the test statistic and p-value Using the TI-83/84 calculator: Using the TI-83: To use the TI-83 calculator to compute the test statistic, you must first put the data into the calculator. Type the observed frequencies into L1 and the expected frequencies into L2. Then you will need to go to L3, arrow up onto the name, and type in $(L 1-L 2)^{\wedge} 2 / L 2$. Now you use 1-Var Stats L3 to find the total. See Figure $1$ for the initial setup, Figure 11.2.2 for the results of that calculation, and Figure $3$ for the result of the 1-Var Stats L3. The total is the chi-square value, $\chi^{2}=\sum x \approx 1.50406$. The p-value is found using $p-\text {value}=\chi^{2} \operatorname{cdf}(1.50406096,1 E 99,5) \approx 0.913$, where the degrees of freedom is df = k - 1 = 6 - 1 = 5. 
Using the TI-84: To run the test on the TI-84, type the observed frequencies into L1 and the expected frequencies into L2, then go into STAT, move over to TEST and choose $\chi^{2}$ GOF-Test from the list. The setup for the test is in Figure $4$. Once you press ENTER on Calculate you will see the results in Figure $5$. The test statistic is $\chi^{2} \approx 1.504060962$ The p-value $\approx 0.913$ The CNTRB represent the $\dfrac{(O-E)^{2}}{E}$ for each die value. You can see the values by pressing the right arrow. Using R: Type in the observed frequencies. Call it something like observed. observed<- c(type in data with commas in between) Type in the probabilities that you are comparing to the observed frequencies. Call it something like null.probs. null.probs <- c(type in probabilities with commas in between) chisq.test(observed, p=null.probs) – the command for the hypothesis test For this example (Note since you are looking to see if the die is fair, then the probability of each side of a fair die coming up is 1/6.) observed<-c(78, 87, 87, 76, 85, 87) null.probs<-c(1/6, 1/6, 1/6, 1/6, 1/6, 1/6) chisq.test(observed, p=null.probs) Output: Chi-squared test for given probabilities data: observed X-squared = 1.504, df = 5, p-value = 0.9126 The test statistic is $\chi^{2}=1.504$ and the p-value = 0.9126. 4. Conclusion Fail to reject $H_{o}$ since the p-value is greater than 0.05. 5. Interpretation There is not enough evidence to show that the die is not consistent with the distribution for a fair die. There is not enough evidence to show that the die is not fair. Homework Exercise $1$ In each problem show all steps of the hypothesis test. If some of the assumptions are not met, note that the results of the test may not be correct and then continue the process of the hypothesis test. 1. According to the M&M candy company, the expected proportion can be found in Example $3$. In addition, the table contains the number of M&M’s of each color that were found in a case of candy (Madison, 2013). At the 5% level, do the observed frequencies support the claim of M&M? Blue Brown Green Orange Red Yellow Total Observed Frequencies 481 371 483 544 372 369 2620 Expected Proportion 0.24 0.13 0.16 0.20 0.13 0.14 Table $3$: M&M Observed and Proportions 2. Eyeglassomatic manufactures eyeglasses for different retailers. They test to see how many defective lenses they made the time period of January 1 to March 31. Example $4$ gives the defect and the number of defects. Do the data support the notion that each defect type occurs in the same proportion? Test at the 10% level. Defect type Number of defects Scratch 5865 Right shaped - small 4613 Flaked 1992 Wrong axis 1838 Chamfer wrong 1596 Crazing, cracks 1546 Wrong shape 1485 Wrong PD 1398 Spots and bubbles 1371 Wrong height 1130 Right shape - big 1105 Lost in lab 976 Spots/bubble - intern 976 Table $4$: Number of Defective Lenses 3. On occasion, medical studies need to model the proportion of the population that has a disease and compare that to observed frequencies of the disease actually occurring. Suppose the end-stage renal failure in south-west Wales was collected for different age groups. Do the data in Example $5$ show that the observed frequencies are in agreement with proportion of people in each age group (Boyle, Flowerdew & Williams, 1997)? Test at the 1% level. Age Group 16-29 30-44 45-59 60-75 75+ Total Observed Frequency 32 66 132 218 91 539 Expected Proportion 0.23 0.25 0.22 0.21 0.09 Table $5$: Renal Failure Frequencies 4. 
In Africa in 2011, the number of deaths of a female from cardiovascular disease for different age groups are in Example $6$ ("Global health observatory," 2013). In addition, the proportion of deaths of females from all causes for the same age groups are also in Example $6$. Do the data show that the death from cardiovascular disease are in the same proportion as all deaths for the different age groups? Test at the 5% level. Age 5-14 15-29 30-49 50-69 Total Cardiovascular Frequency 9 16 56 433 513 All Cause Proportion 0.10 0.12 0.26 0.52 Table $6$: Deaths of Females for Different Age Groups 5. In Australia in 1995, there was a question of whether indigenous people are more likely to die in prison than non-indigenous people. To figure out, the data in Example $7$ was collected. ("Aboriginal deaths in," 2013). Do the data show that indigenous people die in the same proportion as non-indigenous people? Test at the 1% level. Prisoner Dies Prisoner Did Not Die Total Indigenous Prisoner Frequency 17 2890 2907 Frequency of Non-Indigenous Prisoner 42 14459 14501 Table $7$: Death of Prisoners 6. A project conducted by the Australian Federal Office of Road Safety asked people many questions about their cars. One question was the reason that a person chooses a given car, and that data is in Example $8$ ("Car preferences," 2013). Safety Reliability Cost Performance Comfort Looks 84 62 46 34 47 27 Table $8$: Reason for Choosing a Car Answer For all hypothesis tests, just the conclusion is given. See solutions for the entire answer. 1. Reject Ho 3. Reject Ho 5. Reject Ho
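The R command shown earlier, chisq.test(observed, p=null.probs), also covers the homework problems where the claimed proportions are not all equal. As a minimal sketch, using the M&M counts and claimed proportions from problem 1 (the variable names are arbitrary):

observed=c(481, 371, 483, 544, 372, 369) # blue, brown, green, orange, red, yellow
null.probs=c(0.24, 0.13, 0.16, 0.20, 0.13, 0.14) # the company's claimed proportions
sum(null.probs) # the claimed proportions must add to 1 before running the test
chisq.test(observed, p=null.probs)

For a problem like number 2, where every category is claimed to be equally likely, the probabilities can be written as rep(1/13, 13), since there are 13 defect types.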
There are times where you want to compare three or more population means. One idea is to just test different combinations of two means. The problem with that is that your chance for a type I error increases. Instead you need a process for analyzing all of them at the same time. This process is known as analysis of variance (ANOVA). The test statistic for the ANOVA is fairly complicated, you will want to use technology to find the test statistic and p-value. The test statistic is distributed as an F-distribution, which is skewed right and depends on degrees of freedom. Since you will use technology to find these, the distribution and the test statistic will not be presented. Remember, all hypothesis tests are the same process. Note that to obtain a statistically significant result there need only be a difference between any two of the k means. Before conducting the hypothesis test, it is helpful to look at the means and standard deviations for each data set. If the sample means with consideration of the sample standard deviations are different, it may mean that some of the population means are different. However, do realize that if they are different, it doesn’t provide enough evidence to show the population means are different. Calculating the sample statistics just gives you an idea that conducting the hypothesis test is a good idea. Hypothesis test using ANOVA to compare k means 1. State the random variables and the parameters in words $\begin{array}{l}{x_{1}=\text { random variable } 1} \ {x_{2}=\text { random variable } 2} \ {\vdots} \ {x_{k}=\text { random variable } k} \ {\mu_{1}=\text { mean of random variable } 2} \ {\begin{array}{l}{\mu_{2}=\text { mean of random variable } 2} \ {\vdots} \ {\mu_{k}=\text { mean of random variable } k}\end{array}}\end{array}$ 2. State the null and alternative hypotheses and the level of significance $H_{o} : \mu_{1}=\mu_{2}=\mu_{3}=\cdots=\mu_{k}$ $H_{A}$ : at least two of the means are not equal Also, state your $\alpha$ level here. 3. State and check the assumptions for the hypothesis test 1. A random sample of size $n_{i}$ is taken from each population. 2. All the samples are independent of each other. 3. Each population is normally distributed. The ANOVA test is fairly robust to the assumption especially if the sample sizes are fairly close to each other. Unless the populations are really not normally distributed and the sample sizes are close to each other, then this is a loose assumption. 4. The population variances are all equal. If the sample sizes are close to each other, then this is a loose assumption. 4. . Find the test statistic and p-value The test statistic is $F=\dfrac{M S_{B}}{M S_{W}}$, where $M S_{B}=\dfrac{S S_{B}}{d f_{B}}$ is the mean square between the groups (or factors), and $M S_{W}=\dfrac{S S_{W}}{d f_{W}}$ is the mean square within the groups. The degrees of freedom between the groups is $d f_{B}=k-1$ and the degrees of freedom within the groups is $d f_{W}=n_{1}+n_{2}+\cdots+n_{k}-k$. To find all of the values, use technology such as the TI-83/84 calculator or R. The test statistic, F, is distributed as an F-distribution, where both degrees of freedom are needed in this distribution. The p-value is also calculated by the calculator or R. 5. Conclusion This is where you write reject $H_{o}$ or fail to reject $H_{o}$. The rule is: if the p-value < $\alpha$, then reject $H_{o}$. If the p-value $\geq \alpha$, then fail to reject $H_{o}$. 6. 
Interpretation This is where you interpret in real world terms the conclusion to the test. The conclusion for a hypothesis test is that you either have enough evidence to show $H_{A}$ is true, or you do not have enough evidence to show $H_{A}$ is true. If you do in fact reject $H_{o}$, then you know that at least two of the means are different. The next question you might ask is which are different? You can look at the sample means, but realize that these only give a preliminary result. To actually determine which means are different, you need to conduct other tests. Some of these tests are the range test, multiple comparison tests, Duncan test, Student-Newman-Keuls test, Tukey test, Scheffé test, Dunnett test, least significant different test, and the Bonferroni test. There is no consensus on which test to use. These tests are available in statistical computer packages such as Minitab and SPSS. Example $1$ hypothesis test involving several means Cancer is a terrible disease. Surviving may depend on the type of cancer the person has. To see if the mean survival time for several types of cancer are different, data was collected on the survival time in days of patients with one of these cancer in advanced stage. The data is in Example $1$ ("Cancer survival story," 2013). (Please realize that this data is from 1978. There have been many advances in cancer treatment, so do not use this data as an indication of survival rates from these cancers.) Do the data indicate that at least two of the mean survival time for these types of cancer are not all equal? Test at the 1% level. Stomach Bronchus Colon Ovary Breast 124 81 248 1234 1235 42 461 377 89 24 25 20 189 201 1581 45 450 1843 356 1166 412 246 180 2970 40 51 166 537 456 727 1112 63 519   3808 46 64 455   791 103 155 406   1804 876 859 365   3460 146 151 942   719 340 166 776 396 37 372 223 163 138 101 72 20 245 283 Table $1$: Survival Times in Days of Five Cancer Types Solution 1. State the random variables and the parameters in words $\begin{array}{l}{x_{1}=\text { survival time from stomach cancer }} \ {x_{2}=\text { survival time from bronchus cancer }} \ {x_{3}=\text { survival time from colon cancer }} \ {x_{4}=\text { survival time from ovarian cancer }} \ {x_{5}=\text { survival time from breast cancer }} \ {\mu_{1}=\text { mean survival time from breast cancer }} \ {\mu_{1}=\text { mean survival time from bronchus cancer }} \ {\mu_{3}=\text { mean survival time from colon cancer }} \ {\mu_{4} = \text{mean survival time from ovarian cancer}}\{\mu_{5} = \text{mean survival time from breast cancer}}\end{array}$ Now before conducting the hypothesis test, look at the means and standard deviations. $\begin{array}{ll}{\overline{x}_{1}= 286}&{s_{1}\approx 346.31}\{\overline{x}_{2} \approx 211.59} & {s_{2} \approx 209.86} \ {\overline{x}_{3} \approx 457.41} & {s_{3} \approx 427.17} \ {\overline{x}_{4} \approx 884.33} & {s_{4} \approx 1098.58} \ {\overline{x}_{5} \approx 1395.91} & {s_{5} \approx 1238.97}\end{array}$ There appears to be a difference between at least two of the means, but realize that the standard deviations are very different. The difference you see may not be significant. Notice the sample sizes are not the same. The sample sizes are $n_{1}=13, n_{2}=17, n_{3}=17, n_{4}=6, n_{5}=11$ 2. State the null and alternative hypotheses and the level of significance $H_{o} : \mu_{1}=\mu_{2}=\mu_{3}=\mu_{4}=\mu_{5}$ $H_{A}$ : at least two of the means are not equal $\alpha$ = 0.01 3. 
State and check the assumptions for the hypothesis test

1. A random sample of 13 survival times from stomach cancer was taken. A random sample of 17 survival times from bronchus cancer was taken. A random sample of 17 survival times from colon cancer was taken. A random sample of 6 survival times from ovarian cancer was taken. A random sample of 11 survival times from breast cancer was taken. These statements may not be true. This information was not shared as to whether the samples were random or not, but it may be safe to assume that they were.
2. Since the individuals have different cancers, the samples are independent.
3. Population of all survival times from stomach cancer is normally distributed. Population of all survival times from bronchus cancer is normally distributed. Population of all survival times from colon cancer is normally distributed. Population of all survival times from ovarian cancer is normally distributed. Population of all survival times from breast cancer is normally distributed. Looking at the histograms, box plots and normal quantile plots for each sample, it appears that none of the populations are normally distributed. The sample sizes are somewhat different for the problem. This assumption may not be true.
4. The population variances are all equal. The sample standard deviations are approximately 346.3, 209.9, 427.2, 1098.6, and 1239.0 respectively. This assumption does not appear to be met, since the sample standard deviations are very different. The sample sizes are somewhat different for the problem. This assumption may not be true.

4. Find the test statistic and p-value

To find the test statistic and p-value using the TI-83/84, type each data set into L1 through L5. Then go into STAT and over to TESTS and choose ANOVA(. Then type in L1,L2,L3,L4,L5 and press ENTER. You will get the results of the ANOVA test.

The test statistic is $F \approx 6.433$ and the $p-\text{value} \approx 2.29 \times 10^{-4}$.

Just so you know, the Factor information is between the groups and the Error is within the groups. So

$MS_{B} \approx 2883940.13$, $SS_{B} \approx 11535760.5$, and $df_{B}=4$, and
$MS_{W} \approx 448273.635$, $SS_{W} \approx 26448144$, and $df_{W}=59$.

To find the test statistic and p-value on R:

The commands would be:

variable=c(type in all data values with commas in between) – this is the response variable

factor=c(rep("factor 1", number of data values for factor 1), rep("factor 2", number of data values for factor 2), etc) – this separates the data into the different factors that the measurements were based on.

data_name = data.frame(variable, factor) – this puts the data into one variable.
data_name is the name you give this variable aov(variable ~ factor, data = data name) – runs the ANOVA analysis For this example, the commands would be: time=c(124, 42, 25, 45, 412, 51, 1112, 46, 103, 876, 146, 340, 396, 81, 461, 20, 450, 246, 166, 63, 64, 155, 859, 151, 166, 37, 223, 138, 72, 245, 248, 377, 189, 1843, 180, 537, 519, 455, 406, 365, 942, 776, 372, 163, 101, 20, 283, 1234, 89, 201, 356, 2970, 456, 1235, 24, 1581, 1166, 40, 727, 3808, 791, 1804, 3460, 719) factor=c(rep("Stomach", 13), rep("Bronchus", 17), rep("Colon", 17), rep("Ovary", 6), rep("Breast", 11)) survival=data.frame(time, factor) results=aov(time~factor, data=survival) summary(results) $\begin{array}{cccccc}{}&{\text{Df}}&{\text{Sum Sq}}&{\text{Mean Sq}}&{\text{F value}}&{\text{Pr(>F)}}\{\text{factor}}&{4}&{11535761}&{2883940}&{6.4333}&{0.000229***}\{\text{Residuals}}&{59}&{26448144}&{448274} \end{array}$ --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 The test statistic is F = 6.433 and the p-value = 0.000229. 5. Conclusion Reject $H_{o}$ since the p-value is less than 0.01. 6. Interpretation There is evidence to show that at least two of the mean survival times from different cancers are not equal. By examination of the means, it appears that the mean survival time for breast cancer is different from the mean survival times for both stomach and bronchus cancers. It may also be different for the mean survival time for colon cancer. The others may not be different enough to actually say for sure. Homework Exercise $1$ In each problem show all steps of the hypothesis test. If some of the assumptions are not met, note that the results of the test may not be correct and then continue the process of the hypothesis test. 1. Cuckoo birds are in the habit of laying their eggs in other birds’ nest. The other birds adopt and hatch the eggs. The lengths (in cm) of cuckoo birds’ eggs in the other species nests were measured and are in Example $2$ ("Cuckoo eggs in," 2013). Do the data show that the mean length of cuckoo bird’s eggs is not all the same when put into different nests? Test at the 5% level. Meadow Pipit Tree Pipit Hedge Sparrow Robin Pied Wagtail Wren 19.65 22.25 21.05 20.85 21.05 21.05 19.85 20.05 22.25 21.85 21.65 21.85 21.85 20.05 20.65 22.25 22.05 22.05 22.05 21.85 20.25 20.85 22.25 22.45 22.85 22.05 21.85 20.85 21.65 22.65 22.65 23.05 22.05 22.05 20.85 21.65 22.65 23.25 23.05 22.25 22.45 20.85 21.65 22.85 23.25 23.05 22.45 22.65 21.05 21.85 22.85 23.25 23.05 22.45 23.05 21.05 21.85 22.85 23.45 23.45 22.65 23.05 21.05 21.85 22.85 23.45 23.85 23.05 23.25 21.25 22.05 23.05 23.65 23.85 23.05 23.45 21.45 22.05 23.25 23.85 23.85 23.05 24.05 22.05 22.05 23.25 24.05 24.05 23.05 24.05 22.05 22.05 23.45 24.05 25.05 23.05 24.05 22.05 22.05 23.65 24.05   23.25 24.85 22.25 22.05 23.85   23.85 22.05 24.25 22.05 24.45 22.05 22.25 22.05 22.25 22.25 22.25 22.25 22.25 22.25 Table $2$: Lengths of Cuckoo Bird Eggs in Different Species Nests 2. Levi-Strauss Co manufactures clothing. The quality control department measures weekly values of different suppliers for the percentage difference of waste between the layout on the computer and the actual waste when the clothing is made (called run-up). The data is in Example $3$, and there are some negative values because sometimes the supplier is able to layout the pattern better than the computer ("Waste run up," 2013). Do the data show that there is a difference between some of the suppliers? Test at the 1% level. 
Plant 1 Plant 2 Plant 3 Plant 4 Plant 5 1.2 16.4 12.1 11.5 24 10.1 -6 9.7 10.2 -3.7 -2 -11.6 7.4 3.8 8.2 1.5 -1.3 -2.1 8.3 9.2 -3 4 10.1 6.6 -9.3 -0.7 17 4.7 10.2 8 3.2 3.8 4.6 8.8 15.8 2.7 4.3 3.9 2.7 22.3 -3.2 10.4 3.6 5.1 3.1 -1.7 4.2 9.6 11.2 16.8 2.4 8.5 9.8 5.9 11.3 0.3 6.3 6.5 13 12.3 3.5 9 5.7 6.8 16.9 -0.8 7.1 5.1 14.5 19.4 4.3 3.4 5.2 2.8 19.7 -0.8 7.3 13 3 -3.9 7.1 42.7 7.6 0.9 3.4 1.4 70.2 1.5 0.7 3 8.5 2.4 6 1.3 2.9 Table $3$: Run-ups for Different Plants Making Levi Strauss Clothing 3. Several magazines were grouped into three categories based on what level of education of their readers the magazines are geared towards: high, medium, or low level. Then random samples of the magazines were selected to determine the number of three-plus-syllable words were in the advertising copy, and the data is in Example $4$ ("Magazine ads readability," 2013). Is there enough evidence to show that the mean number of three-plus-syllable words in advertising copy is different for at least two of the education levels? Test at the 5% level. High Education Medium Education Low Education 34 13 7 21 22 7 37 25 7 31 3 7 10 5 7 24 2 7 39 9 8 10 3 8 17 0 8 18 4 8 32 29 8 17 26 8 3 5 9 10 5 9 6 24 9 5 15 9 6 3 9 6 8 9 Table $4$: Number of Three Plus Syllable Words in Advertising Copy 4. A study was undertaken to see how accurate food labeling for calories on food that is considered reduced calorie. The group measured the amount of calories for each item of food and then found the percent difference between measured and labeled food, $\dfrac{(\text { measured - labeled })}{\text { labeled }} * 100 \%$. The group also looked at food that was nationally advertised, regionally distributed, or locally prepared. The data is in Example $5$ ("Calories datafile," 2013). Do the data indicate that at least two of the mean percent differences between the three groups are different? Test at the 10% level. National Advertised Regionally Advertised Locally Prepared 2 41 15 -28 46 60 -6 2 250 8 25 145 6 39 6 -1 16.5 8- 1- 17 95 13 28 3 15 -3 -4 14 -4 34 -18 42 10 5 3 -7 3 -0.5 -10 6 Table $5$: Percent Differences Between Measured and Labeled Food 5. The amount of sodium (in mg) in different types of hotdogs is in Example $6$ ("Hot dogs story," 2013). Is there sufficient evidence to show that the mean amount of sodium in the types of hotdogs are not all equal? Test at the 5% level. Beef Meat Poultry 495 458 430 477 506 375 425 473 396 322 545 383 482 496 387 587 360 542 370 387 359 322 386 357 479 507 528 375 393 513 330 405 426 300 372 513 386 144 358 401 511 581 645 405 588 440 428 522 317 339 545 319 298 253 Table $6$: Amount of Sodium (in mg) in Beef, Meat, and Poultry Hotdogs Answer For all hypothesis tests, just the conclusion is given. See solutions for the entire answer. 1. Reject Ho 3. Reject Ho 5. Fail to reject Ho Data Source: Aboriginal deaths in custody. (2013, September 26). Retrieved from http://www.statsci.org/data/oz/custody.html Activities of dolphin groups. (2013, September 26). Retrieved from http://www.statsci.org/data/general/dolpacti.html Boyle, P., Flowerdew, R., & Williams, A. (1997). Evaluating the goodness of fit in models of sparse medical data: A simulation approach. International Journal of Epidemiology, 26(3), 651-656. Retrieved from http://ije.oxfordjournals.org/conten...3/651.full.pdf html Calories datafile. (2013, December 07). Retrieved from lib.stat.cmu.edu/DASL/Datafiles/Calories.html Cancer survival story. (2013, December 04). 
Retrieved from lib.stat.cmu.edu/DASL/Stories...rSurvival.html Car preferences. (2013, September 26). Retrieved from http://www.statsci.org/data/oz/carprefs.html Cuckoo eggs in nest of other birds. (2013, December 04). Retrieved from lib.stat.cmu.edu/DASL/Stories/cuckoo.html Education by age datafile. (2013, December 05). Retrieved from lib.stat.cmu.edu/DASL/Datafil...tionbyage.html Encyclopedia Titanica. (2013, November 09). Retrieved from www.encyclopediatitanica.org/ Global health observatory data respository. (2013, October 09). Retrieved from http://apps.who.int/gho/athena/data/...t=GHO/MORT_400 &profile=excel&filter=AGEGROUP:YEARS05-14;AGEGROUP:YEARS15- 29;AGEGROUP:YEARS30-49;AGEGROUP:YEARS50-69;AGEGROUP:YEARS70 ;MGHEREG:REG6_AFR;GHECAUSES:*;SEX:* Hot dogs story. (2013, November 16). Retrieved from lib.stat.cmu.edu/DASL/Stories/Hotdogs.html Leprosy: Number of reported cases by country. (2013, September 04). Retrieved from http://apps.who.int/gho/data/node.main.A1639 Magazine ads readability. (2013, December 04). Retrieved from lib.stat.cmu.edu/DASL/Datafiles/magadsdat.html Popular kids datafile. (2013, December 05). Retrieved from lib.stat.cmu.edu/DASL/Datafil...pularKids.html Schultz, S. T., Klonoff-Cohen, H. S., Wingard, D. L., Askhoomoff, N. A., Macera, C. A., Ji, M., & Bacher, C. (2006). Breastfeeding, infant formula supplementation, and autistic disorder: the results of a parent survey. International Breastfeeding Journal, 1(16), doi: 10.1186/1746-4358-1-16 Waste run up. (2013, December 04). Retrieved from lib.stat.cmu.edu/DASL/Stories/wasterunup.html
Degrees of Freedom (df) 80% 90% 95% 98% 99% 1 3.078 6.314 12.706 31.821 63.657 2 1.886 2.920 4.303 6.965 9.925 3 1.638 2.353 3.182 4.541 5.841 4 1.533 2.132 2.776 3.747 4.604 5 1.476 2.015 2.571 3.365 4.032 6 1.440 1.943 2.447 3.143 3.707 7 1.415 1.895 2.365 2.998 3.499 8 1.397 1.860 2.306 2.896 3.355 9 1.383 1.833 2.262 2.821 3.250 10 1.372 1.812 2.228 2.764 3.169 11 1.363 1.796 2.201 2.718 3.106 12 1.356 1.782 2.179 2.681 3.055 13 1.350 1.771 2.160 2.650 3.012 14 1.345 .1761 2.145 2.624 2.977 15 1.341 1.753 2.131 2.602 2.947 16 1.337 1.746 2.120 2.583 2.921 17 1.333 1.740 2.110 2.567 2.898 18 1.330 1.734 2.101 2.552 2.878 19 1.328 1.729 2.093 2.539 2.861 20 1.325 1.725 2.086 2.528 2.845 21 1.323 1.721 2.080 2.518 2.831 22 1.321 1.717 2.074 2.508 2.819 23 1.319 1.714 2.069 2.500 2.807 24 1.318 1.711 2.064 2.492 2.797 25 1.316 1.708 2.060 2.485 2.787 26 1.315 1.706 2.056 2.479 2.779 27 1.314 1.703 2.052 2.473 2.771 28 1.313 1.701 2.048 2.467 2.763 29 1.311 1.699 2.045 2.462 2.756 30 1.310 1.697 2.042 2.457 2.750 31 1.309 1.696 2.040 2.453 2.744 32 1.309 1.694 2.037 2.449 2.738 33 1.308 1.692 2.035 2.445 2.733 34 1.307 1.691 2.032 2.441 2.728 35 1.306 1.690 2.030 2.438 2.724 36 1.306 1.688 2.028 2.434 2.719 37 1.305 1.687 2.026 2.431 2.715 38 1.304 1.686 2.024 2.429 2.712 39 1.304 1.685 2.023 2.426 2.712 40 1.303 1.684 2.021 2.423 2.704 41 1.303 1.683 2.020 2.421 2.701 42 1.302 1.682 2.018 2.418 2.698 43 1.302 1.681 2.017 2.416 2.695 44 1.301 1.680 2.015 2.414 2.692 45 1.301 1.679 2.014 2.412 2.690 46 1.300 1.679 2.013 2.410 2.687 47 1.300 1.678 2.012 2.408 2.685 48 1.299 1.677 2.011 2.407 2.682 49 1.299 1.677 2.010 2.405 2.680 50 1.299 1.676 2.009 2.403 2.678 51 1.298 1.675 2.008 2.402 2.676 52 1.298 1.675 2.007 2.400 2.674 53 1.298 1.674 2.006 2.399 2.672 54 1.297 1.674 2.005 2.397 2.670 55 1.297 1.673 2.004 2.396 2.668 56 1.297 1.673 2.003 2.395 2.667 57 1.297 1.672 2.002 2.394 2.665 58 1.296 1.672 2.002 2.392 2.663 59 1.296 1.671 2.001 2.391 2.662 60 1.296 1.671 2.000 2.390 2.660 61 1.296 1.670 2.000 2.389 2.659 62 1.295 1.670 1.999 2.388 2.657 63 1.295 1.669 1.998 2.387 2.656 64 1.295 1.669 1.998 2.386 2.655 65 1.295 1.669 1.997 2.385 2.654 66 1.295 1.668 1.997 2.384 2.652 67 1.294 1.668 1.996 2.383 2.651 68 1.294 1.668 1.995 2.382 2.650 69 1.294 1.667 1.995 2.382 2.649 70 1.294 1.667 1.994 2.381 2.648 71 1.294 1.667 1.994 2.380 2.647 72 1.293 1.666 1.993 2.379 2.646 73 1.293 1.666 1.993 2.379 2.645 74 1.293 1.666 1.993 2.378 2.644 75 1.293 1.665 1.992 2.377 2.643 76 1.293 1.665 1.992 2.376 2.642 77 1.293 1.665 1.991 2.376 2.641 78 1.292 1.665 1.991 2.375 2.640 79 1.292 1.664 1.990 2.374 2.640 80 1.292 1.664 1.990 2.374 2.639 81 1.292 1.664 1.990 2.373 2.638 82 1.292 1.664 1.989 2.373 2.637 83 1.292 1.663 1.989 2.372 2.636 84 1.292 1.663 1.989 2.372 2.636 85 1.292 1.663 1.988 2.371 2.635 86 1.291 1.663 1.988 2.370 2.634 87 1.291 1.663 1.988 2.370 2.634 88 1.291 1.662 1.987 2.369 2.633 89 1.291 1.662 1.987 2.369 2.632 90 1.291 1.662 1.987 2.368 2.632 91 1.291 1.662 1.986 2.368 2.631 92 1.291 1.662 1.986 2.368 2.630 93 1.291 1.661 1.986 2.367 2.630 94 1.291 1.661 1.986 2.367 2.629 95 1.291 1.661 1.985 2.366 2.629 96 1.290 1.661 1.985 2.366 2.628 97 1.290 1.661 1.985 2.365 2.627 98 1.290 1.661 1.984 2.365 2.627 99 1.290 1.660 1.984 2.365 2.626 100 1.290 1.660 1.984 2.364 2.626 101 1.290 1.660 1.984 2.364 2.625 102 1.290 1.660 1.983 2.363 2.625 103 1.290 1.660 1.983 2.363 2.624 104 1.290 1.660 1.983 2.363 2.624 105 1.290 1.659 1.983 2.362 2.623 Table A.2: Critical Values for t-Interval 
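If you have R available, the entries of this table can also be reproduced with the qt command rather than looked up; the lines below are only a supplemental illustration, not part of the original table. For a confidence level C (written as a decimal), the critical value is the t-value with area (1 + C)/2 to its left.

qt((1+0.95)/2, df=10) # 2.228, the 95% column at 10 degrees of freedom
qt((1+0.99)/2, df=30) # 2.750, the 99% column at 30 degrees of freedom
qt((1+0.90)/2, df=60) # 1.671, the 90% column at 60 degrees of freedom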
12.02: Normal Critical Values for Confidence Levels Confidence Level, C Critical Value, \(Z_{c}\) 99% 2.575 98% 2.33 95% 1.96 90% 1.645 80% 1.28 Table A.1: Normal Critical Values for Confidence Levels Critical values for \(Z_{c}\) created using Microsoft Excel
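These normal critical values can also be generated in R with the qnorm command; the following lines are just an illustration of where the table entries come from.

qnorm((1+0.90)/2) # 1.644854, which rounds to 1.645
qnorm((1+0.95)/2) # 1.959964, which rounds to 1.96
qnorm((1+0.99)/2) # 2.575829, close to the 2.575 listed in the table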
• Comparing Fractions, Decimals, and Percents In this section, we will go over techniques to compare two numbers. These numbers could be presented as fractions, decimals or percents and may not be in the same form. For example, when we look at a histogram, we can compute the fraction of the group that occurs the most frequently. We might be interested in whether that fraction is greater than 25% of the population. By the end of this section we will know how to make this comparison. • Converting Between Fractions, Decimals and Percents In this section, we will convert from decimals to percents and back. We will also start with a fraction and convert it to a decimal and a percent. In statistics we are often given a number as a percent and have to do calculations on it. To do so, we must first convert it to a percent. Also, the computer or calculator shows numbers as decimals, but for presentations, percents are friendlier. It is also much easier to compare decimals than fractions, thus converting to a decimal is helpful. • Decimals: Rounding and Scientific Notation In this section, we will go over how to round decimals to the nearest whole number, nearest tenth, nearest hundredth, etc. In most statistics applications that you will encounter, the numbers will not come out evenly, and you will need to round the decimal. • Using Fractions, Decimals and Percents to Describe Charts Charts, such as bar charts and pie charts are visual ways of presenting data. You can think of each slice of the pie or each bar as a part of the whole. The numerical versions of this are a list of fractions, decimals and percents. By the end of this section we will be able to look at one of these charts and produce the corresponding fractions, decimals, and percents. Decimals Fractions and Percents Learning Outcomes 1. Compare two fractions 2. Compare two numbers given in different forms In this section, we will go over techniques to compare two numbers. These numbers could be presented as fractions, decimals or percents and may not be in the same form. For example, when we look at a histogram, we can compute the fraction of the group that occurs the most frequently. We might be interested in whether that fraction is greater than 25% of the population. By the end of this section we will know how to make this comparison. Comparing Two Fractions Whether you like fractions or not, they come up frequently in statistics. For example, a probability is defined as the number of ways a sought after event can occur over the total number of possible outcomes. It is commonly asked to compare two such probabilities to see if they are equal, and if not, which is larger. There are two main approaches to comparing fractions. Approach 1: Change the fractions to equivalent fractions with a common denominator and then compare the numerators The procedure of approach 1 is to first find the common denominator and then multiply the numerator and the denominator by the same whole number to make the denominators common. Example $1$ Compare: $\frac{2}{3}$ and $\frac{5}{7}$ Solution A common denominator is the product of the two: $3\:\times7\:=\:21$. We convert: $\frac{2}{3}\:\frac{7}{7}\:=\frac{14}{21}\nonumber$ and $\frac{5}{7}\:\frac{3}{3}=\frac{15}{21}\nonumber$ Next we compare the numerators and see that $14\:<\:15$, hence $\frac{2}{3}<\:\frac{5}{7}$ Example $2$ In statistics, we say that two events are independent if the probability of the second occurring is equal to the probability of the second occurring given that the first occurs. 
The probability of rolling two dice and having the sum equal to 7 is $\frac{6}{36}$. If you know that the first die lands on a 4, then the probability that the sum of the two dice is a 7 is $\frac{1}{6}$. Are these events independent? Solution We need to compare $\frac{6}{36}$and $\frac{1}{6}$. The common denominator is 36. We convert the second fraction to $\frac{1}{6}\frac{6}{6}=\frac{6}{36}\nonumber$ Now we can see that the two fractions are equal, so the events are independent. Approach 2: Use a calculator or computer to convert the fractions to decimals and then compare the decimals If it is easy to build up the fractions so that we have a common denominator, then Approach 1 works well, but often the fractions are not simple, so it is easier to make use of the calculator or computer. Example $3$ In computing probabilities for a uniform distribution, fractions come up. Given that the number of ounces in a medium sized drink is uniformly distributed between 15 and 26 ounces, the probability that a randomly selected medium sized drink is less than 22 ounces is $\frac{7}{11}$. Given that the weight of in a medium sized American is uniformly distributed between 155 and 212 pounds, the probability that a randomly selected medium sized American is less than 195 pounds is $\frac{40}{57}$. Is it more likely to select a medium sized drink that is less than 22 ounces or to select a medium sized American who is less than 195 pounds? Solution We could get a common denominator and build the fractions, but it is much easier to just turn both fractions into decimal numbers and then compare. We have: $\frac{7}{11}\approx0.6364\nonumber$ and $\frac{40}{57}\approx0.7018\nonumber$ Notice that $0.6364\:<\:0.7018 \nonumber$ Hence, we can conclude that it is less likely to pick the medium sized 22 ounce or less drink than to pick the 195 pound or lighter medium sized person. Exercise If you guess on 10 true or false questions, the probability of getting at least 9 correct is $\frac{11}{1024}$. If you guess on six multiple choice questions with three choices each, then the probability of getting at least five of the six correct is $\frac{7}{729}$. Which of these is more likely? Comparing Fractions, Decimals and Percents When you want to compare a fraction to a decimal or a percent, it is usually easiest to convert to a decimal number first, and then compare the decimal numbers. Example $4$ Compare 0.52 and $\frac{7}{13}$. Solution We first convert $\frac{7}{13}$ to a decimal by dividing to get 0.5385. Now notice that $0.52 < 0.5385\nonumber$ Thus $0.52\:<\frac{\:7}{13}\nonumber$ Example $5$ When we preform a hypothesis test in statistics, We have to compare a number called the p-value to another number called the level of significance. Suppose that the p-value is calculated as 0.0641 and the level of significance is 5%. Compare these two numbers. Solution We first convert the level of significance, 5%, to a decimal number. Recall that to convert a percent to a decimal, we move the decimal over two places to the right. This gives us 0.05. Now we can compare the two decimals: $0.0641 > 0.05\nonumber$ Therefore, the p-value is greater than the level of significance. This is an application of comparing fractions to probability.
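When a calculator or computer is handy, Approach 2 reduces to a few keystrokes. Here is a small sketch in R of the two comparisons made above; R is used only for illustration, and any calculator gives the same decimals.

7/11 # 0.6363636
40/57 # 0.7017544
7/11 < 40/57 # TRUE, so the drink event is the less likely one
0.0641 > 0.05 # TRUE, so the p-value is greater than the level of significance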
Learning Outcomes 1. Given a decimal, convert it to a percent 2. Given a percent, convert it to a decimal 3. Convert a fraction to a decimal and percent In this section, we will convert from decimals to percents and back. We will also start with a fraction and convert it to a decimal and a percent. In statistics we are often given a number as a percent and have to do calculations on it. To do so, we must first convert it to a percent. Also, the computer or calculator shows numbers as decimals, but for presentations, percents are friendlier. It is also much easier to compare decimals than fractions, thus converting to a decimal is helpful. For example, we often want to see if a probability is greater than 5%. A computer will display the probability as a decimal such as 0.04836. To make the comparison we will first change it to a percent and then compare it to 5%. Transforming a Decimal to a Percent We have all heard of percents before. "You only have a 20% chance of winning the game", "Just 38% of all Americans approve of Congress", and "I am 95% confident that my answer is correct" are just a few of the countless examples of percents as they come up in statistics. Defintion: Percent Percent means Parts Per Hundred Thus if we are given a decimal and want to convert it to a percent, we multiply the decimal by 100. In practice, this means we move the decimal point two places to the right. Example $1$ Convert the number 0.1738 to a percent. Solution We move the decimal over two to the right as shown below. We get: 17.38% for the answer. Example $2$ Convert 0.7 to a percent. Solution We want to move the decimal two places to the right, but there is only one digit to the right of the decimal place. The good news is that we can always add a 0 to the right of the last digit. We write: $0.7 = 0.70 \nonumber$ Now move the decimal place two digits to the right to get 70%. Example $3$ In regression analysis, an important number that is calculated is called R-Squared. It helps us determine how helpful one variable is in predicting another variable. The computer and calculator always display it as a decimal, but it is more meaningful as a percent. Suppose that the R-Squared value that relates the amount of studying students do to prepare for a final exam and the score on the exam is: $r^2=0.8971$. Convert this to a percent rounded to the nearest whole number percent. Solution We move the decimal 0.8971 two places to the right to get 89.71% Now round to the nearest whole number percent. Note that the digit to the left of the whole number is 7 > 5. Thus we add 1 to the whole number, 89. This gives us 90%. Exercise A standard goal in statistics is to come up with a range of values that a population proportion is likely to lie. This range is called a confidence interval. Suppose that we want to interpret a confidence interval for the percent of patients who experience side effects from an experimental cancer treatment. The computer calculates it as the decimal range: [0.023,0.029]. What is the likely range for the percent of patients who experience side effects from the experimental cancer treatment? Transforming a Percent to a Decimal To convert a decimal to a percent, we multiply the decimal by 100 which is equivalent to moving the decimal two places to the right. Not surprisingly, to convert a percent to a decimal, we do exactly the opposite. We divide the number by 100 which is equivalent to moving the decimal two places to the left. Example $4$ Convert the percent 89.4% to a decimal. 
Solution We move the decimal over two to the left as shown below. We get: 0.894 for the answer. Example $5$ Suppose that you want to find the value of $x$ such that 2.5% of the entire area under the Normal curve lies to the left of $x$. The first step will be to convert the 2.5% to a decimal. What decimal is equivalent to 2.5%? Solution We want to move the decimal 2.5 two places to the left, but since there is only one digit to the left of the decimal, we add a zero first: 02.5. Now move the decimal two places to the left to get 0.025. Converting a Fraction to a Decimal and a Percent Often in probability it is natural to represent probabilities as fractions, but it is easier to make comparisons as decimals. Thus, we need to be able to convert fractions to decimals. To do so we just divide. Example $6$ Convert the fraction $\frac{4}{7}$ to a decimal, rounding to the nearest hundredth. Solution We use long division: $\hspace{0.55cm}.571\7\overline{)4.000}\\hspace{0.35cm}\underline{35}\\hspace{0.5cm}50\\hspace{0.5cm}\underline{49}\\hspace{0.45cm}\hspace{0.25cm}10$ Next round to the nearest hundredth to get 0.57. Although everyone's favorite thing to do is to perform long division by hand, in most statistics classes you will have a calculator or computer to use. Thus you just have to remember to perform the division with the calculator or computer and then round. Example $7$ In statistics we need to find basic probabilities and create a table for them. Suppose that you roll two six-sided dice, what percent of the time will the sum equal to a 4? Round to the nearest whole number percent. Solution First, notice that there are 36 total possibilities for rolling the dice, since there are 6 faces on the first die and for each value of the first die roll, there are 6 possibilities for the second die roll. Multiplying: 6 x 6 = 36. This will be the denominator. To find the numerator, we list all the possible outcome where the sum is 4: (1,3), (2,2), and (3,1) There are three possible outcomes with the sum equaling a 4. Thus: $P(\text{sum} = 4) = 3/36 \nonumber$ Now we divide: $\frac{3}{36}\:=\:0.08333... \nonumber$ Next to convert this decimal to a percent, we move the decimal two places to the right to get: 8.333...% We are asked to round to the nearest whole number percent. The digit to the right or the whole number (8) is a 3. Since 3 < 5, we can just erase everything to the left of the 8 and leave the 8 unchanged to get 8%. Thus there is an 8% chance of getting a sum of 4 if you roll two six sided dice.
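The dice calculation above can also be checked quickly with software. A small sketch in R follows (the round command is just one way to do the final rounding).

prob = 3/36 # the fraction form of P(sum = 4)
prob # 0.08333333, the decimal form
prob*100 # 8.333333, the percent form
round(prob*100, 0) # 8, the nearest whole number percent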
Learning Outcomes 1. Understand what it means to have a number rounded to a certain number of decimal places. 2. Round a number to a fixed number of digits. 3. Convert from scientific notation to decimal notation and back. In this section, we will go over how to round decimals to the nearest whole number, nearest tenth, nearest hundredth, etc. In most statistics applications that you will encounter, the numbers will not come out evenly, and you will need to round the decimal. We will also look at how to read scientific notation. A very common error that statistics students make is not noticing that the calculator is giving an answer in scientific notation. For example, suppose that you used a calculator to find the probability that a randomly selected day in July will have a high temperature of over 90 degrees. Your calculator gives the answer: 0.4987230156. This is far too many digits for practical use, so it makes sense to round to just a few digits. By the end of this section you will be able to perform the rounding that is necessary to make unmanageable numbers manageable. Brief Review of Decimal Language Consider the decimal number: 62.5739. There is a defined way to refer to each of the digits. • The digit 6 is in the "Tens Place" • The digit 2 is in the "Ones Place" • The digit 5 is in the "Tenths Place" • The digit 7 is in the "Hundredths Place" • The digit 3 is in the "Thousandths Place" • The digit 9 is in the "Ten-thousandths Place" • We also say that 62 is the "Whole Number" part. Keeping this example in mind will help you when you are asked to round to a specific place value. Example $1$ It is reported that the mean number of classes that college students take each semester is 3.2541. Then the digit in the hundredths place is 5. Rules of Rounding Now that we have reviewed place values of numbers, we are ready to go over the process of rounding to a specified place value. When asked to round to a specified place value, the answer will erase all the digits after the specified digit. The process to deal with the other digits is best shown by examples. Example $2$: Case 1 - The Test Digit is Less Than 5 Round 3.741 to the nearest tenth. Solution Since the test digit (4) is less than 5, we just erase everything to the right of the tenths digit, 7. The answer is: 3.7. Example $3$: Case 2 - The Test Digit is 5 or Greater Round 8.53792 to the nearest hundredth. Solution Since the test digit (6) is 5 or greater, we add one to the hundredths digit and erase everything to the right of the hundredths digit, 3. Thus the 3 becomes a 4. The answer is: 8.54. Example $4$: Case 3 - The Test Digit is 5 or Greater and the rounding position digit is a 9 Round 0.014952 to four decimal places. Solution The test digit is 5, so we must round up. The rounding position is a 9 and adding 1 gives 10, which is not a single digit number. Instead look at the two digits to the left of the test digit: 49. If we add 1 to 49, we get 50. Thus the answer is 0.0150. Applications Rounding is used in most areas of statistics, since the calculator or computer will produce numerical answers with far more digits than are useful. If you are not told how many decimal places to round to, then you often want to think about the smallest number of decimals to keep so that no important information is lost. For example suppose you conducted a sample to find the proportion of college students who receive financial aid and the calculator presented 0.568429314. You could turn this into a percent at 56.8429314%. 
There are no applications where keeping this many decimal places is useful. If, for example, you wanted to present this finding to the student government, you might want to round to the nearest whole number. In this case the ones digit is 6 and the test digit is 8. Since 8 > 5, you add 1 to the ones digit. You can tell the student government that 57% of all college students receive financial aid. Example $5$ Suppose that you found out that the probability that a randomly selected person who has misused prescription opioids will transition to heroin is 0.04998713. Round this number to four decimal places. Solution The first four decimal places are 0.0499 and the test digit is 8. Since 8 > 5, we would like to add 1 to the fourth digit. Since this is a 9, we go to the next digit to the left. This is also a 9, so we go to the next one which is a 4. We can think of adding 0499 + 1 = 0500. Thus the answer is 0.0500. Note that we keep the last two 0's after the 5 to emphasize that this is accurate to the fourth decimal place. Rounding and Arithmetic Many times, we have to do arithmetic on numbers with several decimal places and want the answer rounded to a smaller number of decimal places. One question you might ask is whether you should round before you perform the arithmetic or after. For the most accurate result, you should always round after you perform the arithmetic if possible. When asked to do arithmetic and present your answer rounded to a fixed number of decimal places, only round after performing the arithmetic. Example $6$ Suppose you pick three cards from a 52 card deck with replacement and want to find the probability of the event, A, that none of the three cards will be a 2 through 7 of hearts. This probability is: $P\left(A\right)=\left(0.8846\right)^3 \nonumber$ Round the answer to 2 decimal places. Solution Note that we have to first perform the arithmetic. With a computer or calculator we get: $0.8846^3=\text{0.69221467973} \nonumber$ Now we round to two decimal places. Notice that the hundredths digit is a 9 and the test digit is a 2. Thus the 9 remains unchanged and everything to the right of the 9 goes away. The result is $P\left(A\right)\approx0.69 \nonumber$ If we mistakenly rounded 0.8846 to two decimal places (0.88) and then cubed the answer we would have gotten 0.68, which is not the correct answer. Scientific Notation When a calculator presents a number in scientific notation, we must pay attention to what this represents. The standard way of writing a number in scientific notation is writing the number as a product of a number greater than or equal to 1 but less than 10 and a power of 10. For example: $602,000,000,000,000,000,000,000 = 6.02 \times 10^{23} \nonumber$ The main purpose of scientific notation is to allow us to write very large numbers or numbers very close to 0 without having to use so many digits. Most calculators and computers use a different notation for scientific notation, most likely because the superscript is difficult to render on a screen. For example, with a calculator: $0.00000032 = 3.2E-7 \nonumber$ Notice that to arrive at 3.2, the decimal needed to be moved 7 places to the right. Example $7$ A calculator displays: $2.0541E6 \nonumber$ Write this number in decimal form. Solution Notice that the number following E is 6. This means move the decimal over 6 places to the right. The first 4 moves are natural, but for the last 2 moves, there are no numbers to move the decimal place past. 
We can always add extra zeros after the last number to the right of the decimal place: $2.0541E6 = 2.054100E6 \nonumber$ Now we can move the decimal place to the right 6 places to get $2.0541E6 = 2.054100E6 = 2,054,100 \nonumber$ Example $8$ If you use a calculator or computer to find the probability of flipping a coin 27 times and getting all heads, then it will display: $7.45E−9 \nonumber$ Write this number in decimal form. Solution Many students will forget to look for the "E" and just write that the probability is 7.45, but probabilities can never be bigger than 1. You cannot have a 745% chance of it occurring. Notice that the number following E is −9. Since the power is negative, this means move the decimal to the left, and in particular 9 places to the left. There is only one digit to the left of the decimal place, so we need to insert 8 zeros: $7.45E−9 = 000000007.45E−9 \nonumber$ Now we can move the decimal point 9 places to the left to get $7.45E−9 = 000000007.45E−9 = 0.00000000745 \nonumber$
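If you have a computer available, you can check this kind of rounding and conversion with a few lines of Python. The sketch below is only an illustration (the variable names are made up for this example); it rounds the probability from Example 5 to four decimal places and converts the E-notation answer from Example 8 back to ordinary decimal form:

p = 0.04998713
print(round(p, 4))        # prints 0.05, matching the rounded answer 0.0500

x = float("7.45E-9")      # Python understands E-notation directly
print(f"{x:.11f}")        # prints 0.00000000745

Keep in mind that Python drops the trailing zeros when it prints 0.05, so you would still write the answer as 0.0500 to show that it is accurate to four decimal places.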
textbooks/stats/Introductory_Statistics/Support_Course_for_Elementary_Statistics/Decimals_Fractions_and_Percents/Decimals%3A__Rounding_and_Scientific_Notation.txt
Learning Outcomes 1. Interpret bar charts using fractions, decimals and percents 2. Interpret pie charts using fractions, decimals and percents Charts, such as bar charts and pie charts, are visual ways of presenting data. You can think of each slice of the pie or each bar as a part of the whole. The numerical versions of this are a list of fractions, decimals and percents. By the end of this section we will be able to look at one of these charts and produce the corresponding fractions, decimals, and percents. Reading a Bar Chart Bar charts occur frequently in statistics, so it is essential to understand how to read and interpret them. Often we want to convert the information in a bar chart into numerical form. We need fractions and/or percents to do this. Example $1$ The bar chart above shows the demographics of California in 2019, where the numbers represent millions of people. Here are some questions that might come up in a statistics class. 1. What fraction of Californians was Hispanic in 2019? 2. What proportion of all Californians was White in 2019? Write your answer as a decimal number rounded to four decimal places. 3. What percent of Californians were neither Hispanic nor White in 2019? Round your answer to the nearest percent. Solution 1. To find the fraction of California that was Hispanic in 2019, the numerator will be the total number of Hispanics and the denominator will be the total number of people in California in 2019. The height of the bar that represents Hispanics is 15. Therefore the numerator is 15. To find the total number of people in California, we add up the heights of the three bars: $15+13+10\:=\:38 \nonumber$ Now we can just write down the fraction: $\frac{15}{38} \nonumber$ 2. To find the proportion of Californians who were White in 2019, we start in the same way. The numerator will be the number of Whites: 13. The denominator will be the total number of Californians which we already computed as 38. Therefore the fraction of Californians who were White is: $\frac{13}{38} \nonumber$ To convert this to a decimal, we use a calculator to get: $\frac{13}{38}\approx0.342105 \nonumber$ Next, round to four decimal places. Since the digit to the right of the fourth decimal place is $0\:<5$, we round down to: $0.3421 \nonumber$ 3. To find the percent of Californians who were neither Hispanic nor White in 2019, we first find the fraction who were neither. The numerator will be the number of "Other" which is: 10. The denominator will be the total which is 38. Thus the fraction is: $\frac{10}{38} \nonumber$ Next, use a calculator to divide these numbers to get: $\frac{10}{38}\approx0.263158 \nonumber$ To convert this to a percent we multiply by 100% by moving the decimal two places to the right: $0.263158\:\times100\%\:=\:26.3158\% \nonumber$ Finally we round to the nearest whole number. Noting that $3 < 5$, we round down to get: 26% Exercise The bar chart below shows the grade distribution for a math class. 1. Find the fraction of students who received a "C" grade. 2. Find the proportion of grades below a "C". Write your answer as a decimal number rounded to the nearest hundredth. 3. What percent of the students received an "A" grade? Round your answer to the nearest whole number percent. Reading a Pie Chart Another important chart that is used to display the components of a whole is a pie chart. With a pie chart, it is very easy to determine the percent of each item. Example $2$ The pie chart below shows the makeup of milk. 
Write the proportion of fat contained in milk as a decimal. Solution We see that 31% of milk is fat. To convert a percent to a decimal, we just move the decimal over two places to the left. Thus, 31% becomes 0.31. Example $3$ The pie chart above shows the number of pets of each type that had to be euthanized by the humane society due to incurable illnesses. 1. What fraction of the euthanized pets were dogs? 2. What percent of the euthanized pets were cats? Round to the nearest whole number percent. Solution 1. We take the number of dogs over the total. There were 334 euthanized dogs. To find the total we add: $737+37+334\:=\:1108 \nonumber$ Therefore, the fraction of euthanized dogs is $\frac{334}{1108} \nonumber$ 2. To find the percent of euthanized cats, we first find the fraction. There were 737 cats over a total of 1108 pets. The fraction is $\frac{737}{1108} \nonumber$ Next use a calculator to get the decimal number: 0.66516. Now multiply by 100% by moving the decimal place two digits to the right to get: 66.516%. Finally, we need to round to the nearest whole number percent. Since $5\ge\:5$, we round up. Thus the percent of euthanized cats is 67%.
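The arithmetic in these chart problems is always the same: divide a category count by the total, then convert to a percent. As a rough check on the pet example above, a short Python sketch (the category names here are just illustrative) might look like:

counts = {"cats": 737, "other": 37, "dogs": 334}
total = sum(counts.values())               # 1108
fraction_dogs = counts["dogs"] / total     # about 0.3014
percent_cats = counts["cats"] / total * 100
print(round(percent_cats))                 # 67, matching the rounded answer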
textbooks/stats/Introductory_Statistics/Support_Course_for_Elementary_Statistics/Decimals_Fractions_and_Percents/Using_Fractions_Decimals_and_Percents_to_Describe_Charts.txt
• Evaluate Algebraic Expressions There are many formulas that are encountered in a statistics class and the values of each variable will be given. It will be your task to carefully evaluate the expression after plugging in each of the given values into the formula. In order to be successful you should not rush through the process and you need to be aware of the order of operations and use parentheses when necessary. • Inequalities and Midpoints Inequalities are an essential component of statistics. One very important use of inequalities is when we have found a mean or proportion from a sample and want to write out an inequality that gives where the population mean or proportion is likely to lie. Another application is in probability where we want to find the probability of a value being more than a number, less than a number, or between two numbers. • Solve Equations with Roots Square roots occur frequently in a statistics course, especially when dealing with standard deviations and sample sizes. In this section we will learn how to solve for a variable when that variable lies under the square root sign. The key thing to remember is that the square of a square root is what lies inside. In other words, squaring a square root cancels the square root. • Solving Linear Equations in One Variable It is a common task in algebra to solve an equation for a variable. The goal will be to get the variable on one side of the equation all by itself and have the other side of the equation just be a number. The process will involve identifying the operations that are done on the variable and apply the inverse operation to both sides of the equation. This will be managed in the reverse of the order of operations. Expressions Equations and Inequalities Learning Outcomes 1. Evaluate an algebraic expression given values for the variables. 2. Recognize given values in a word problem and evaluate an expression using these values. There are many formulas that are encountered in a statistics class and the values of each variable will be given. It will be your task to carefully evaluate the expression after plugging in each of the given values into the formula. In order to be successful you should not rush through the process and you need to be aware of the order of operations and use parentheses when necessary. Example $1$ Suppose that equation of the regression line for the number of days a week, $x$, a person exercises and the number of days, $\hat y$, a year a person is sick is: $\hat y=12.5\:-\:1.6x\nonumber$ We use $\hat y$ instead of $y$ since this is a prediction instead of an actual data value's y-coordinate. Use this regression line to predict the number of times a person who exercises 4 days a week will be sick this year. Solution The first step is always to identify the variable or variables that are given. In this case, we have 4 days of exercise a week, so: $x=4\nonumber$ Next, we plug in to get: $\hat y=12.5\:-\:1.6(4) = 6.1\nonumber$ Since we are predicting the number of days a year being sick, it is a good idea to round to the nearest whole number. We get that the best prediction for the number of sick days for a person who exercises 4 days per week is that they will be sick 6 days this year. Example $2$ For a yes/no question, a sample size is considered large enough to use a Normal distribution if $np>5$ and $nq\:>5$ where $n$ is the sample size, $p$ is the proportion of Yes answers, and $q$ is the proportion of No answers. 
A survey was given to 59 American adults asking them if they were food insecure today, and 6.8% of them said they were. Was the sample size large enough to use the Normal distribution? Solution Our first task is to list out each of the needed variables. Let's start with $n$, the sample size. We are given that 59 Americans were surveyed. Thus $n=59\nonumber$ Next, we will find $p$, the proportion of Yes answers. We are given that 6.8% said Yes. Since this is a percent and not a proportion, we must convert the percent to a proportion by moving the decimal point two places to the left. It helps to place a 0 to the left of the 6, so that the decimal point has a place to go. A common error is to rush through this and wrongly write down 0.68. Instead, the proportion is: $p=0.068\nonumber$ Our next task is to find $q$, the proportion of No answers. For a Yes/No question, the proportion of Yes answers and the proportion of No answers must always add up to 1. Thus: $q=1-0.068\:=\:0.932\nonumber$ Now we are ready to plug into the two inequalities: $np=59\times0.068=4.012\nonumber$ and $nq=59\times0.932=54.988\nonumber$ Although $nq\:=\:54.988>5$, we have $np\:=\:4.012<5$, so the sample size was not large enough to use the Normal distribution. Example $3$ For a quantitative study, the sample size, $n$, needed in order to produce a confidence interval with a margin of error no more than $\pm E$, is $n=\left(\frac{z\sigma}{E}\right)^2\nonumber$ where $z$ is a value that is determined from the confidence level and $\sigma$ is the population standard deviation. You want to conduct a survey to estimate the population mean number of years it takes psychologists to get through college and you require a margin of error of no more than $\pm0.1$ years. Suppose that you know that the population standard deviation is 1.3 years. If you want a 95% confidence interval that comes with a $z = 1.96$, at least how many psychologists must you survey? Round your answer up. Solution We start out by identifying the given values for each variable. Since we want a margin of error of no more than $\pm0.1$, we have: $E\:=\:0.1\nonumber$ We are told that the population standard deviation is 1.3, so: $\sigma=1.3\nonumber$ We are also given the value of $z$: $z=1.96\nonumber$ Now put this into the formula to get: $n=\left(\frac{1.96\times1.3}{0.1}\right)^2\nonumber$ We put this into a calculator or computer to get: $\left(1.96\times1.3\div0.1\right)^2=649.2304\nonumber$ We round up and can conclude that we need to survey 650 psychologists. Example $4$ Based on the Central Limit Theorem, the standard deviation of the sampling distribution when samples of size $n$ are taken from a population with standard deviation, $\sigma$, is given by: $\sigma_\bar x=\frac{\sigma}{\sqrt{n}}\nonumber$ If the population standard deviation for the number of customers who walk into a fast food restaurant is 12, what is the standard deviation of the sampling distribution for samples of size 35? Round your answer to two decimal places. Solution First we identify each of the given variables. Since the population standard deviation was 12, we have: $\sigma=12\nonumber$ We are told that the sample size is 35, so: $n=35\nonumber$ Now we put these numbers into the formula for the standard deviation of the sampling distribution to get: $\sigma_\bar x=\frac{12}{\sqrt{35}}\nonumber$ We are now ready to put this into our calculator or computer. 
We put in: $\sigma_x=\frac{12}{\sqrt{35}}=12\div(35^\wedge 0.5) = 2.02837\nonumber$ Rounded to two decimal places, we can say that the standard deviation of the sampling distribution is 2.03. Example $5$: Z score The z-score for a given sample mean $\bar x$ for a sampling distribution with population mean $\mu$, population standard deviation $\sigma$, and sample size $n$ is given by: $z=\frac{\bar x-\mu}{\frac{\sigma}{\sqrt{n}}}\nonumber$ An environmental scientist collected data on the amount of glacier retreat. She measured 45 glaciers. The population mean retreat is 22 meters and the population standard deviation is 16 meters. The sample mean for her data was 27 meters and the sample standard deviation for her data was 18 meters. What was the z-score? Solution First we identify each of the given variables. Since the sample mean was 27, we have: $\bar x = 27\nonumber$ We are told that the population mean is 22 meters, so: $\mu=22\nonumber$ We are also given that the population standard deviation is 16 meters, hence: $\sigma=16\nonumber$ Finally, since she measured 45 glaciers, we have: $n=45\nonumber$ Now we put the numbers into the formula for the z-score to get: $z=\frac{27-22}{\frac{16}{\sqrt{45}}}\nonumber$ We are now ready to put this into our calculator or computer. We must pay attention to the order of operations and put parentheses around the numerator, since the subtraction happens for this expression before the division. We also must put parentheses around the denominator. We put in: $z=\left(27-22\right)\div\left(16\div\sqrt{45}\right)=2.0963\nonumber$ Exercise You want to come up with a 90% confidence interval for the proportion of people in your community who are obese and require a margin of error of no more than $\pm3\%$. According to the Journal of the American Medical Association (JAMA) 34% of all Americans are obese. The equation to find the sample size, $n$, needed in order to come up with a confidence interval is: $n=p\left(1-p\right)\left(\frac{z}{E}\right)^2$ where $p$ is the preliminary estimate for the population proportion. Based on calculations, $z=1.645$. How many people in your community must you survey? Evaluating Algebraic Expressions (L2.1) https://youtu.be/HLjUT8Kvc5U
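Because the order of operations matters so much here, it can help to check the glacier z-score from Example 5 with a computer as well. A minimal Python sketch (the variable names are chosen only for readability) is:

import math

x_bar, mu, sigma, n = 27, 22, 16, 45
z = (x_bar - mu) / (sigma / math.sqrt(n))   # parentheses force numerator and denominator first
print(round(z, 4))                          # 2.0963

Notice that the parentheses play exactly the same role here as they do on a hand calculator.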
textbooks/stats/Introductory_Statistics/Support_Course_for_Elementary_Statistics/Expressions_Equations_and_Inequalities/Evaluate_Algebraic_Expressions.txt
Learning Objectives • Write out an inequality from words. • Go from a midpoint and error to an inequality. • Go from inequality to a midpoint and error. Inequalities are an essential component of statistics. One very important use of inequalities is when we have found a mean or proportion from a sample and want to write out an inequality that gives where the population mean or proportion is likely to lie. Another application is in probability where we want to find the probability of a value being more than a number, less than a number, or between two numbers. Converting Words to Inequalities Example $1$ You want to find the probability that a patient will "take at least three hours to wake up after surgery". Write an inequality for this situation. Solution The key words here are "at least". These words can be written symbolically as "≥". Therefore we can write "take at least three hours to wake up after surgery" as: $x ≥ 3\nonumber$ Example $2$ Suppose you want to find the probability that a relationship will last "more than 1 week and at most 8 weeks". Write an inequality for this situation. Solution Let's first translate the words "more than". This is equivalent to ">". Next translate the words "at most". This is equivalent to "≤". Now we can put this together to get: $1 < x ≤ 8\nonumber$ Midpoints and Inequalities There are two ways of thinking about an interval. The first is that x is greater than the lower bound and less than the upper bound. The second is that the center or midpoint of the interval is a given value and the interval goes no more than a certain distance from that value. In statistics, this is important when we look at confidence intervals. Both ways of presenting the interval are commonly used, so we need to be able to go from one way to the other. Example $3$ A researcher observed 45 startup companies to find a 95% confidence interval for the population mean amount of time it takes to make a profit. The sample mean was 14 months and the margin of error was plus or minus 8 months. In symbols the confidence interval can be written as: $14 ± 8\nonumber$ Express this as a trilinear inequality. Solution We first find the lower bound by subtracting: $14 − 8 = 6\nonumber$ Next, we find the upper bound by adding: $14 + 8 = 22\nonumber$ We can now put this together as a trilinear inequality: $6 ≤ x ≤ 22\nonumber$ Example $4$ A researcher interviewed 1000 Americans, asking them if they thought abortion should be against the law. The following 95% confidence interval was given for the population proportion of all Americans who are against abortion: $(0.41, 0.47)\nonumber$ Find the midpoint and the margin of error. That is, write this interval in the form: $a\pm b$ Solution Let's first find the midpoint. This is the average of the left and right endpoints: $a\:=\:\frac{0.41+0.47}{2}=0.44\nonumber$ Next, find the distance from the midpoint to either boundary: $b=0.47-0.44=0.03\nonumber$ Finally we can put these two together to get: $0.44\pm0.03\nonumber$ Exercise $1$ A study was done to see how many years longer it takes low income students to finish college compared to high income students. The confidence interval for the population mean difference was found to be: $[ 0.67 , 0.84 ]\nonumber$ Find the midpoint and the margin of error. That is, write this interval in the form: $a ± b\nonumber$ Converting an Inequality from Interval Notation to Midpoint and Error Notation Writing Equations and Inequalities for Scenarios
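Going from an interval's endpoints to its midpoint and margin of error is just the averaging and subtracting shown in Example 4, so it is easy to script if you have many intervals to convert. A small Python sketch, assuming the endpoints from Example 4:

lower, upper = 0.41, 0.47
midpoint = (lower + upper) / 2
margin = upper - midpoint
print(round(midpoint, 2), round(margin, 2))   # 0.44 0.03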
textbooks/stats/Introductory_Statistics/Support_Course_for_Elementary_Statistics/Expressions_Equations_and_Inequalities/Inequalities_and_Midpoints.txt
Learning Outcomes • Solve equations that include square roots. Square roots occur frequently in a statistics course, especially when dealing with standard deviations and sample sizes. In this section we will learn how to solve for a variable when that variable lies under the square root sign. The key thing to remember is that the square of a square root is what lies inside. In other words, squaring a square root cancels the square root. Example $1$ Solve the following equation for $x$. $2+\sqrt{x-3}\:=\:6 \nonumber$ Solution What makes this a challenge is the square root. The strategy for solving is to isolate the square root on the left side of the equation and then square both sides. First subtract 2 from both sides: $\sqrt{x-3}=4 \nonumber$ Now that the square root is isolated, we can square both sides of the equation: $\left(\sqrt{x-3}\right)^2=4^2 \nonumber$ Since the square and the square root cancel we get: $x-3=16 \nonumber$ Finally add 3 to both sides to arrive at: $x=19 \nonumber$ It's always a good idea to check your work. We do this by plugging the answer back in and seeing if it works. We plug in $x=19$ to get \begin{align*}2+\sqrt{19-3} &=2+\sqrt{16} \[4pt] &=2+4 \[4pt] &= 6 \end{align*} Yes, the solution is correct. Example $2$ The standard deviation, $\sigma_\hat p$, of the sampling distribution for a proportion follows the formula: $\sigma_\hat p=\sqrt{\frac{p\left(1-p\right)}{n}} \nonumber$ Where $p$ is the population proportion and $n$ is the sample size. If the population proportion is 0.24 and you need the standard deviation of the sampling distribution to be 0.03, how large a sample do you need? Solution We are given that $p=0.24$ and $\sigma_{\hat p } = 0.03$ Plug in to get: $0.03=\sqrt{\frac{0.24\left(1-0.24\right)}{n}} \nonumber$ We want to solve for $n$, so we want $n$ on the left hand side of the equation. Just switch to get: $\sqrt{\frac{0.24\left(1-0.24\right)}{n}}\:=\:0.03 \nonumber$ Next, we subtract: $1-0.24\:=\:0.76 \nonumber$ And them multiply: $0.24\left(0.76\right)=0.1824 \nonumber$ This gives us $\sqrt{\frac{0.1824}{n}}\:=\:0.03 \nonumber$ To get rid of the square root, square both sides: $\left(\sqrt{\frac{0.1824}{n}}\right)^2\:=\:0.03^2 \nonumber$ The square cancels the square root, and squaring the right hand side gives: $\frac{0.1824}{n}\:=\:0.0009 \nonumber$ We can write: $\frac{0.1824}{n}\:=\frac{\:0.0009}{1} \nonumber$ Cross multiply to get: $0.0009\:n\:=\:0.1824 \nonumber$ Finally, divide both sides by 0.0009: $n\:=\frac{\:0.1824}{0.0009}=202.66667 \nonumber$ Round up and we can conclude that we need a sample size of 203 to get a standard error that is 0.03. We can check to see if this is reasonable by plugging $n = 203$ back into the equation. We use a calculator to get: $\sqrt{\frac{0.24\left(1-0.24\right)}{203}}\:=\:0.029975 \nonumber$ Since this is very close to 0.03, the answer is reasonable. Exercise The standard deviation, $\sigma_\bar x$, of the sampling distribution for a mean follows the formula: $\sigma_\bar x=\frac{\sigma}{\sqrt{n}} \nonumber$ Where $\sigma$ is the population standard deviation and $n$ is the sample size. If the population standard deviation is 3.8 and you need the standard deviation of the sampling distribution to be 0.5, how large a sample do you need? Solving Linear Equations in One Variable Learning Outcomes • Solve linear equations for the variable. It is a common task in algebra to solve an equation for a variable. 
The goal will be to get the variable on one side of the equation all by itself and have the other side of the equation just be a number. The process will involve identifying the operations that are done on the variable and applying the inverse operation to both sides of the equation. This will be managed in the reverse of the order of operations. Example $1$ Solve the following equation for $x$. $3x+4=11 \label{EQ1.1}$ Solution We begin by looking at the operations that are done to $x$, keeping track of the order. The first operation is "multiply by 3" and the second is "add 4". We now do everything backwards. Since the last operation is "add 4", our first step is to subtract 4 from both sides of Equation \ref{EQ1.1}. $3x \cancel{+ 4} \color{Cerulean}{ \cancel{-4}} \color{black} =11 \color{Cerulean}{ -4} \nonumber$ which simplifies to $3x = 7 \nonumber$ Next, the way to undo "multiply by 3" is to divide both sides by 3. We get $\dfrac{\cancel{3}x}{\color{Cerulean}{\cancel{3}}} \color{black}= \dfrac{7}{\color{Cerulean}{3}} \nonumber$ or $x=\dfrac{7}{3} \nonumber$ Example $2$ The rectangle above is a diagram for a uniform distribution from 2 to 9 that asks for the first quartile. The area of the smaller red rectangle that has base from 2 to Q1 and height 1/7 is 1/4. Find Q1. Solution We start by using the area formula for a rectangle: $\text{Area} = \text{Base} \times \text{Height} \label{EQ1}$ We have: • Area = $\frac{1}{4}$ • Base = $Q1-2$ • Height = $\frac{1}{7}$ Plug this into Equation \ref{EQ1} to get: $\frac{1}{4}=\left(Q1-2\right)\left(\frac{1}{7}\right) \label{EQ2}$ We need to solve for $Q1$. First multiply both sides of Equation \ref{EQ2} by 7 to get: \begin{align} \color{Cerulean}{7} \color{black} \left(\dfrac{1}{4}\right) &= \color{Cerulean}{\cancel{7}} \color{black} \left(Q1-2\right) \cancel{ \left(\frac{1}{7}\right)} \nonumber \\[5pt] \dfrac{7}{4} &=Q1-2 \label{EQ4} \end{align} Now add 2 to both sides of Equation \ref{EQ4} to get: \begin{align*} \dfrac{7}{4} \color{Cerulean} +2 \color{black} & =Q1 \cancel{-2} \color{Cerulean}{\cancel{+2}} \\[5pt] \dfrac{7}{4}+2&=Q1 \end{align*} or $Q1=\frac{7}{4}+2 \nonumber$ Putting this into a calculator gives: $Q1=3.75 \nonumber$ Example $3$: z-score The z-score for a given value $x$ for a distribution with population mean $\mu$ and population standard deviation $\sigma$ is given by: $z=\frac{x-\mu}{\sigma} \nonumber$ An online retailer has found that the population mean sales per day is \$2,841 and the population standard deviation is \$895. A value of $x$ is considered an outlier if the z-score is less than -2 or greater than 2. What must the day's sales be to have a z-score of 2? Solution First we identify each of the given variables. Since the population mean is \$2,841, we have: $\mu=2841 \nonumber$ We are told that the population standard deviation is \$895, so: $\sigma=895 \nonumber$ We are also given that the z-score is 2, hence: $z=2 \nonumber$ Now we put the numbers into the formula for the z-score to get: $2=\frac{x-2841}{895} \nonumber$ We can next switch the order of the equation so that the $x$ is on the left hand side of the equation: $\frac{x-2841}{895}=2 \nonumber$ Next, we solve for $x$. First multiply both sides of the equation by 895 to get $x-2841=2\left(895\right)=1790 \nonumber$ Finally, we can add 2841 to both sides of the equation to get $x$ by itself: $x=1790+2841=4631 \nonumber$ We can conclude that if the day's sales are \$4,631, the z-score is 2. 
Exercise The rectangle below is a diagram for a uniform distribution from 5 to 11 that asks for the 72nd percentile. The area of the smaller red rectangle that has base from 5 to the 72nd percentile, $x$, and height 1/6 is 0.72. Find $x$.
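The inverse-operations idea also translates directly to a computer check: once the variable is isolated, the right-hand side is just arithmetic. A short Python sketch re-doing the sales example (Example 3) and then confirming that the z-score comes back out as 2:

mu, sigma, z = 2841, 895, 2
x = z * sigma + mu          # undo the division by 895, then undo the subtraction of 2841
print(x)                    # 4631
print((x - mu) / sigma)     # 2.0, so the answer checks out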
textbooks/stats/Introductory_Statistics/Support_Course_for_Elementary_Statistics/Expressions_Equations_and_Inequalities/Solve_Equations_with_Roots.txt
• Finding Residuals In the linear regression part of statistics we are often asked to find the residuals. Given a data point and the regression line, the residual is defined by the vertical difference between the observed value of y and  y based on the equation of the regression line. • Find the Equation of a Line given its Graph There are two main ways of representing a line: the first is with its graph, and the second is with its equation. In this section we will practice how to find the equation of the line if we are given the graph of the line. The two key numbers in the equation of a line are the slope and the y-intercept. Thus the main steps in finding the equation of a line are finding the slope and finding the y-intercept. In statistics we are often presented with a scatterplot where we can eyeball the line. • Find y given x and the Equation of a Line A line can be thought of as a function, which means that if a value of x is given, the equation of the line produces exactly one value of y; This is particularly useful in regression analysis where the line is used to make a prediction of one variable given the value of the other variable. • Graph a Line given its Equation Often we are given an equation of a line and we want to visualize it. For this reason, it is important to be able to graph a line given its equation. We will look at lines that are in slope intercept form: y=a + bx where a is the y-intercept of the line and b is the slope of the line. The y-intercept is the value of where the line crosses the y-axis. The slope is the rise over run. • Interpreting the Slope of a Line A common issue when we learn about the equation of a line in an algebra is to state the slope as a number, but have no idea what it represents in the real world. The slope of a line is the rise over the run. If the slope is given by an integer or decimal value we can always put it over the number 1. In this case the line rises by the slope when it runs 1. "Runs 1" means that the x value increases by 1 unit. Therefore the slope represents how much y changes when x changes by 1 unit. • Interpreting the y-intercept of a Line Just like the slope of a line, many algebra classes go over the y-intercept of a line without explaining how to use it in the real world. The y-intercept of a line is the value of \(y\) where the line crosses the y-axis. In other words, it is the value of \(y\) when the value of \(x\) is equal to 0. Sometimes this has true meaning for the model that the line provides, but other times it is meaningless. We will encounter examples of both types in this section. • Plot an Ordered Pair We have already gone into detail about how to plot points on a number line, and that is very useful for single variable presentations. Now we will move to questions that involve comparing two variables. Working with two variables is frequently encountered in statistical studies and we would like to be able to display the results graphically. This is best done by plotting points in the xy-plane. Graphing Points and Lines in Two Dimensions Learning Outcomes 1. Find the slope of a line given its graph. 2. Find the y-intercept of a line given its graph. 3. Find the equation of a line given its graph. There are two main ways of representing a line: the first is with its graph, and the second is with its equation. In this section, we will practice how to find the equation of the line if we are given the graph of the line. The two key numbers in the equation of a line are the slope and the y-intercept. 
Thus the main steps in finding the equation of a line are finding the slope and finding the y-intercept. In statistics we are often presented with a scatterplot where we can eyeball the line. Once we have the graph of the line, getting the equation is helpful for making predictions based on the line. Finding the Slope of a Line Given Its Graph The steps to follow to fine the slope of the line given its graph are the following. Step 1: Identify two points on the line. Any two points will do, but it is recommended to find points with nice $x$ and $y$ coordinates. Step 2: The slope is the rise over the run. Thus if the points have coordinates $\left(x_1,y_1\right)$ and $\left(x_2,\:y_2\right)$, then the slope is: $Slope\:=\:\frac{Rise}{Run}=\frac{y_2-y_1}{x_2-x_1}\nonumber$ Example $1$ Find the slope of the line shown below. Solution First, we locate points on the line that are as easy as possible to work with. The points with integer coordinates are (0,-4) and (2,2). Next, we use the rise over run formula to find the slope of the line. $Slope\:=\:\frac{y_2-y_1}{x_2-x_1}=\frac{2-\left(-4\right)}{2-0}=\frac{6}{2}=3\nonumber$ Finding the y-intercept from the graph If the portion of the graph that is in view includes the y-axis, then the y-intercept is very easy to spot. You just see where it crosses the y-axis. On the other hand, if the portion of the graph in view does not contain the y-axis, then it is best to first find the equation of the line and then use the equation to find the y-intercept. Example $2$ Find the y-intercept of the line shown below. Solution We just look at the line and notice that it crosses the y-axis at $y=1$. Therefore, the y-intercept is 1 or (0,1). Finding the equation of the line given its graph If you are given the graph of a line and want to find its equation, then you first find the slope as in Example $1$. Then you use one of the points you found $\left(x_1,\:y_1\right)$ when you computed the slope, $m$, and put it into the point slope equation: $y-y_1=m\left(x-x_1\right)\nonumber$ Then you multiply the slope through and add $y_1$ to both sides to get $y$ by itself. Example $3$ Find the equation of the line shown below. Solution First we find the slope by identifying two nice points. Notice that the line passes through (0,-1) and (3,1). Now compute the slope using the rise over run formula: $Slope\:=\frac{\:rise}{run}=\frac{1-\left(-1\right)}{3-0}=\frac{2}{3}\nonumber$ Next use the point slope equation with the point (0,-1). $y-\left(-1\right)=\frac{2}{3}\left(x-0\right)\nonumber$ Now simplify: $y+1=\frac{2}{3}x\nonumber$ Finally subtract 1 from both sides to get: $y=\frac{2}{3}x-1\nonumber$ Example $4$ A study was done to look at the relationship between the square footage of a house and the price of the house. The scatter plot and regression line are shown below. Find the equation of the regression line. Solution First we find the slope by identifying two nice points. You will have to eyeball it and notice that the line passes through (1600, 300000) and (2000,400000). Now compute the slope using the rise over run formula: $\frac{\:rise}{run}=\frac{400000-300000}{2000-1600}=\frac{100000}{400}=250\nonumber$ Next use the point slope equation with the point (2000,400000). 
$y-\left(400000\right)=250\left(x-2000\right)\nonumber$ Now simplify: $y-400000=250x-500000\nonumber$ Finally add 400000 to both sides to get: $y=250x-100000\nonumber$ Notice that although the y-intercept is not visible from the graph of the line, we can see from the equation of the line that the y-intercept is -100000 or (0,-100000). Exercise The regression line and scatterplot below show the result of surveys that were taken in multiple years to find out the percent of households that had a landline telephone. Find the equation of this regression line. Ex 1: Find the Equation of a Line in Slope Intercept Form Given the Graph of a Line Finding the Equation of a Line Given Its Graph
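The slope and intercept calculation in Example 4 can also be packaged into a few lines of code, which is handy when you are eyeballing several scatterplots. A Python sketch using the two points read off the house-price line (the variable names are just for illustration):

x1, y1 = 1600, 300000
x2, y2 = 2000, 400000
slope = (y2 - y1) / (x2 - x1)   # rise over run, 250.0
intercept = y1 - slope * x1     # solve y = intercept + slope*x at (x1, y1), giving -100000.0
print(slope, intercept)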
textbooks/stats/Introductory_Statistics/Support_Course_for_Elementary_Statistics/Graphing_Points_and_Lines_in_Two_Dimensions/Find_the_Equation_of_a_Line_given_its_Graph.txt
Learning Outcomes 1. Find the value of y given x and the equation of a line. 2. Use a line to make predictions. A line can be thought of as a function, which means that if a value of $x$ is given, the equation of the line produces exactly one value of $y$; This is particularly useful in regression analysis where the line is used to make a prediction of one variable given the value of the other variable. Example $1$ Consider the line with equation: $y=3x-4\nonumber$ Find the value of $y$ when $x$ is 5. Solution Just replace the variable $x$ with the number 5 in the equation and perform the arithmetic: $y\:=\:3\left(5\right)-4=15-4\:=11\nonumber$ Example $2$ A survey was done to look at the relationship between a woman's height, $x$ and the woman's weight, $y$. The equation of the regression line was found to be: $y=-220+5.5x\nonumber$ Use this equation to estimate the weight in pounds of a woman who is 5' 2" (62 inches) tall. Solution Just replace the variable $x$ with the number 62 in the equation and perform the arithmetic: $y\:=\:-220+5.5\left(62\right)\nonumber$ We can put this into a calculator or computer to get: $y\:=\:121\nonumber$ Therefore, our best prediction for the weight of a woman who is 5' 2'' tall is that she is 121 lbs. Exercise A biologist has collected data on the girth (how far around) of pine trees and the pine tree's height. She found the equation of the regression line to be: $y=1.3+2.7x\nonumber$ Where the girth, $x$, is measured in inches and the height, $y$, is measured in feet. Use the regression line to predict the height of a tree with girth 28 inches. https://youtu.be/cS95PlUKZ6I Finding Residuals Learning Outcomes • Given a Regression line and a data point, find the residual In the linear regression part of statistics we are often asked to find the residuals. Given a data point and the regression line, the residual is defined by the vertical difference between the observed value of $y$ and the computed value of $\hat y$ based on the equation of the regression line: $\text{Residual} = y - \hat y \nonumber\nonumber$ Example $1$ A study was conducted asking female college students how tall they are and how tall their mother is. The results are show in the table below: Table of Mother and Daughter Heights Mother's Height 63 67 64 60 65 67 59 60 Daughter's Height 58 64 65 61 65 67 61 64 The equation of the regression line is $\hat y=30.28\:+0.52x\nonumber$ Find the residual for the mother who is 59 inches tall. Solution First note that the Daughter's Height associated with the mother who is 59 inches tall is 61 inches. This is $y$. Next we use the equation of the regression line to find $\hat y$. Since $x=59$, we have $\hat y=30.28\:+0.52(59)\nonumber$ We can use a calculator to get: $\hat y = 60.96\nonumber$ Now we are ready to put the values into the residual formula: $\text{Residual} = y-\hat y = 61-60.96=0.04\nonumber$ Therefore the residual for the 59 inch tall mother is 0.04. Since this residual is very close to 0, this means that the regression line was an accurate predictor of the daughter's height. Example $2$ An online retailer wanted to see how much bang for the buck was obtained from online advertising. The retailer experimented with different weekly advertising budgets and logged the number of visitors who came to the retailer's online site. The regression line for this is shown below. Find the residual for the week when the retailer spent $600 on advertising. 
Solution First notice that the point on the scatterplot with x-coordinate of 600 has y-coordinate 800. Thus $y = 800$. Next note that the point on the line with x-coordinate 600 has y-coordinate 700. Thus $\hat y = 700$. Now we are ready to put the values into the residual formula: $\text{Residual} = y-\hat y = 800-700=100\nonumber$ Therefore the residual for the \$600 advertising budget is 100. Exercise Data was taken from the recent Olympics on the GDP in trillions of dollars of 8 of the countries that competed and the number of gold medals that they won. The equation of the regression line is: $\hat y=7.55\:+\:1.57x\nonumber$ The table below shows the data: GDP 21 1.6 16 1.8 4 5.4 3.1 2.3 Medals 46 8 26 19 17 12 10 9 Find the residual for the country with a GDP of 4 trillion dollars.
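A residual is always the same two-step computation: predict $\hat y$ from the regression line, then subtract it from the observed $y$. A Python sketch using the mother/daughter line from Example 1 (the function name here is just for illustration):

def predict(x):
    # regression line y-hat = 30.28 + 0.52x from the mother/daughter example
    return 30.28 + 0.52 * x

y_observed = 61               # daughter's height for the 59-inch-tall mother
residual = y_observed - predict(59)
print(round(residual, 2))     # 0.04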
textbooks/stats/Introductory_Statistics/Support_Course_for_Elementary_Statistics/Graphing_Points_and_Lines_in_Two_Dimensions/Find_y_given_x_and_the_Equation_of_a_Line.txt
Learning Outcomes 1. Identify the slope and y-intercept from the equation of a line. 2. Plot the y-intercept of a line given its equation. 3. Plot a second point on a line given the y-intercept and the slope. 4. Graph a line given its equation in slope-intercept form. Often we are given an equation of a line and we want to visualize it. For this reason, it is important to be able to graph a line given its equation. We will look at lines that are in slope-intercept form: $y=a + bx$ where $a$ is the y-intercept of the line and $b$ is the slope of the line. The y-intercept is the value of $y$ where the line crosses the y-axis. The slope is the rise over run. If we write the slope as a fraction, then the numerator tells us how far to move up (or down if it is negative) and the denominator tells us how far to the right we need to go. The main application to statistics is in regression analysis, which is the study of how to use a line to make a prediction about one variable based on the value of the other variable. Example $1$ Graph the line given by the equation: $y=1+\frac{3}{2}x \nonumber$ Solution We follow the three step process: Step 1: Plot the y-intercept The y-intercept is the number that is not associated with the $x$. For this example, it is 1. The x-coordinate of the y-intercept is always 0. So the coordinates of the y-intercept are (0,1). Thus start at the origin and move up 1: Step 2: Plot the Slope. The slope of a line is the coefficient of the $x$ term. Here it is $\frac{3}{2}$. What this means is that we rise 3 and run to the right 2. Rising 3 from an original y-coordinate of 1 gives a new y-coordinate of 4. Running 2 to the right from an initial x-coordinate of 0 gives a new x-coordinate of 2. Thus we next plot the point (2,4). Step 3: Connect the Dots The last thing we need to do is connect the dots with a line: Example $2$ A study was done to look at the relationship between the weight of a car, $x$, in tons and its gas mileage in mpg, $y$. The equation of the regression line was found to be: $y=110-70x$ Graph this line. Solution The first step is to note that the y-intercept is 110, hence the graph goes through the point (0,110). The next step is to see that the slope is -70. We can always put a number over 1 in order to make it a fraction. The slope of $-\frac{70}{1}$ tells us that $y$ goes down by 70 if $x$ goes up by 1. We use this to find the second point. The y-coordinate is: $110\:-\:70\:=\:40$. The x-coordinate is 1. Thus, a second point is (1,40). We can now plot the two points and connect the dots with a line. Exercise The regression line that relates the ounces of beer consumed just before a test, $x$, and the score on the test, $y$, is given by $y=93-1.2x$ Graph this line. Graphing a Line in Slope-Intercept Form https://youtu.be/z3rM-ZidXaw
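If you would rather let the computer draw the picture, a plotting library can graph the line from Example 1 directly from its equation. The sketch below assumes the numpy and matplotlib packages are installed; it plots y = 1 + (3/2)x along with the two points found in the example:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 4, 100)
y = 1 + 1.5 * x                 # y-intercept 1, slope 3/2
plt.plot(x, y)
plt.scatter([0, 2], [1, 4])     # the y-intercept (0,1) and the second point (2,4)
plt.show()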
textbooks/stats/Introductory_Statistics/Support_Course_for_Elementary_Statistics/Graphing_Points_and_Lines_in_Two_Dimensions/Graph_a_Line_given_its_Equation.txt
Learning Outcomes 1. Interpret the slope of a line as the change in $y$ when $x$ changes by 1. Template for Interpreting the Slope of a Line For every increase in the $x$-variable by 1, the $y$-variable tends to change by (xxx the slope). A common issue when we learn about the equation of a line in algebra is to state the slope as a number, but have no idea what it represents in the real world. The slope of a line is the rise over the run. If the slope is given by an integer or decimal value we can always put it over the number 1. In this case, the line rises by the slope when it runs 1. "Runs 1" means that the x value increases by 1 unit. Therefore the slope represents how much the y value changes when the x value changes by 1 unit. In statistics, especially regression analysis, the x value has real life meaning and so does the y value. Example $1$ A study was done to see the relationship between the time it takes, $x$, to complete a college degree and the student loan debt incurred, $y$. The equation of the regression line was found to be: $y=25142\:+14329x$ Interpret the slope of the regression line in the context of the study. Solution First, note that the slope is the coefficient in front of the $x$. Thus, the slope is 14,329. Next, the slope is the rise over the run, so it helps to write the slope as a fraction: $Slope\:=\frac{\:rise}{run}=\frac{14,329}{1}$ The rise is the change in $y$ and $y$ represents student loan debt. Thus, the numerator represents an increase of \$14,329 in student loan debt. The run is the change in $x$ and $x$ represents the time it takes to complete a college degree. Thus, the denominator represents an increase of 1 year to complete a college degree. We can put this all together and interpret the slope as telling us that: For every additional year it takes to complete a college degree, on average the student loan debt tends to increase by \$14,329. Example $2$ Suppose that a research group tested the cholesterol level of a sample of 40-year-old women and then waited many years to see the relationship between a woman's HDL cholesterol level in mg/dl, $x$, and her age of death, $y$. The equation of the regression line was found to be: $y=103\:-0.3x$ Interpret the slope of the regression line in the context of the study. Solution The slope of the regression line is -0.3. The slope as a fraction is: $Slope\:=\frac{\:rise}{run}=\frac{-0.3}{1}$ The rise is the change in $y$ and $y$ represents age of death. Since the slope is negative, the numerator indicates a decrease in lifespan. Thus, the numerator represents a decrease in lifespan of 0.3 years. The run is the change in $x$ and $x$ represents the HDL cholesterol level. Thus, the denominator represents an HDL cholesterol level increase of 1 mg/dl. Now, put this all together and interpret the slope as telling us that: For every additional 1 mg/dl of HDL cholesterol, on average women are predicted to die 0.3 years younger. Example $3$ A researcher asked several employees who worked overtime "How many hours of overtime did you work last week?" and "On a scale from 1 to 10 how satisfied are you with your job?". The scatterplot and the regression line from this study are shown below. Interpret the slope of the regression line in the context of the study. Solution We first need to determine the slope of the regression line. To find the slope, we get two points that have as nice coordinates as possible. From the graph, we see that the line goes through the points (10,6) and (15,4). 
The slope of the regression line can now be found using the rise over the run formula: $Slope\:=\frac{\:rise}{run}=\frac{4-6}{15-10}=\frac{-2}{5}$ The rise is the change in $y$ and $y$ represents job satisfaction rating. Since the slope is negative, the numerator indicates a decrease in job satisfaction. Thus, the numerator represents a decrease in job satisfaction of 2 on the scale from 1 to 10. The run is the change in $x$ and $x$ represents the overtime work hours. Thus, the denominator represents an increase of 5 hours of overtime work. Now, put this all together and interpret the slope as telling us that For every additional 5 hours of overtime work that employees are asked to do, their job satisfaction tends to go down an average of 2 points. Exercise The scatterplot and regression line below are from a study that collected data on the population (in hundred thousands) of cities and the average number of hours per week the city's residents spend outdoors. Interpret the slope of this regression line in the context of the study. Interpret the Meaning of the Slope of a Linear Equation - Smokers Interpreting the Slope of a Regression Line
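Since the slope is just rise over run, you can also reduce a "2 points down per 5 hours" statement to a per-unit change with a quick calculation. A Python sketch with the two points read off the overtime graph in Example 3:

x1, y1 = 10, 6
x2, y2 = 15, 4
slope = (y2 - y1) / (x2 - x1)
print(slope)   # -0.4, i.e. satisfaction tends to drop 0.4 points per extra overtime hour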
textbooks/stats/Introductory_Statistics/Support_Course_for_Elementary_Statistics/Graphing_Points_and_Lines_in_Two_Dimensions/Interpreting_the_Slope_of_a_Line.txt
Learning Outcomes 1. Interpret the $y$-intercept of a line as the value of $y$ when $x$ equals to 0. 2. Determine whether the $y$-intercept is useful for interpreting the relationship between $x$ and $y$ Just like the slope of a line, many algebra classes go over the y-intercept of a line without explaining how to use it in the real world. The y-intercept of a line is the value of $y$ where the line crosses the y-axis. In other words, it is the value of $y$ when the value of $x$ is equal to 0. Sometimes this has true meaning for the model that the line provides, but other times it is meaningless. We will encounter examples of both types in this section. Template for the y-Intercept Interpretation When the value for the $x$-variable is 0, the best prediction for the value of the $y$-variable is (xxx the y-intercept). Example $1$ A study was done to see the relationship between the ounces of meat, $x$, that people eat each day on average and the hours per week, $y$ they watch sports. The equation of the regression line was found to be: $y=1.3\:+0.4x\nonumber$ Interpret the y-intercept of the regression line in the context of the study or explain why it has no practical meaning. Solution First, note that the y-intercept is the number that is not in front of the $x$. Thus, the y-intercept is 1.3. Next, the y-intercept is the value of $y$ when $x$ equals zero. For this example, $x$ represents the ounces of meat consumed each day. When the consumption of meat is 0, the best prediction for the value of the hours of sports each week is 1.3. If $x$ is equal to 0, this means the person does not consume any meat. Since there are people, called vegetarians, who consume no meat, it is meaningful to have an x-value of 0. The y-value of 1.3 represents the hours of sports the person watches. Putting this all together we can state: A vegetarian is predicted to watch 1.3 hours of sports each week. Example $2$ A neonatal nurse at Children's Hospital has collected data on the birth weight, $x$, in pounds the number of days, $y$, that the newborns stay in the hospital. The equation of the regression line was found to be $y=45\:-3.9x\nonumber$ Interpret the y-intercept of the regression line in the context of the study or explain why it has no practical meaning. Solution Again, we note that the y-intercept is the number that is not in front of the $x$. Thus, the y-intercept is 45. Next, the y-intercept is the value of $y$ when $x$ equals zero. When the birth weight in pounds is 0, the best prediction for the value of the number of days the newborn is predicted to stay in the hospital is 45 days. For this example, $x$ represents the new born baby's birth weight in pounds. If $x$ is equal to 0, this means the baby was born with a weight of 0 pounds. Since it makes no sense for a baby to weigh 0 pounds, we can say that the y-intercept of this regression line has no practical meaning. Example $3$ A researcher asked several people "How many cups of coffee did you drink last week?" and "How many times did you go to a shop or restaurant for a meal or a drink last week?" The scatterplot and the regression line from this study are shown below. Interpret the y-intercept of the regression line in the context of the study or explain why it has no practical meaning. Solution The y-intercept of a line is where it crosses the y-axis. In this case, the line crosses at around y = -1. The value of $x$, by definition is 0 and the x-axis represents the number of cups of coffee a person drank last week. 
Since there are people who don't drink coffee, it does make sense to have an x-value of 0. The y-axis represents the number of times the person went to a shop or restaurant last week to purchase a meal or a drink. It makes no sense to say that a person went -1 times to a shop or restaurant last week to purchase a meal or a drink. Therefore the y-intercept of this regression line has no practical meaning. Exercise The scatterplot and regression line below are from a study that collected data from a group of college students on the number of hours per week during the school year they work at a paid job and the number of units they are taking. Interpret the y-intercept of the regression line or explain why it has no practical meaning. Plot an Ordered Pair Learning Outcomes 1. Draw \(x\) and \(y\) axes. 2. Plot a point in the xy-plane We have already gone into detail about how to plot points on a number line, and that is very useful for single variable presentations. Now we will move to questions that involve comparing two variables. Working with two variables is frequently encountered in statistical studies and we would like to be able to display the results graphically. This is best done by plotting points in the xy-plane. Example \(1\) Plot the points: \((3,4)\), \((-2,1)\), and \((0,-1)\) Solution The first thing to do when plotting points is to sketch the x-axis and y-axis and decide on the tick marks. Here the numbers are all less than 5, so it is reasonable to count by 1's. Next, we plot the first point, \((3,4)\). This means to start at the origin, where the axes intersect. Then move 3 units to the right and 4 units up. After arriving there, we just draw a dot. For the next point, \((-2,1)\), we start at the origin, move 2 units to the left and 1 unit up and draw the dot. For the third point, \((0,-1)\), we don't move left or right at all since the x-coordinate is 0, but we do move 1 unit down and draw the dot. The plot is shown below. Example \(2\) A survey was done to look at the relationship between a person's age and their income. The first three answers are shown in the table below: Table of ages and income Age 49 24 35 Income 69,000 32,000 40,000 Graph the three points on the xy-plane. Solution Notice that the numbers are all relatively large. Therefore counting by 1's would not make sense. Instead, it makes better sense to count the Age axis, \(x\), by 10's and the Income axis, \(y\), by 10,000's. The points are plotted below. Exercise A hotel manager was interested in seeing the relationship between the price per night, \(x\), that the hotel charged and the number of occupied rooms, \(y\). The results were (75,83), (100,60), (110,55), and (125,40). Plot these points in the xy-plane. Ex: Plotting Points on the Coordinate Plane Plotting Points
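Plotting points by hand is worth practicing, but you can also let software place the dots for you. A Python sketch (assuming the matplotlib package is installed) for the age and income points from Example 2:

import matplotlib.pyplot as plt

age = [49, 24, 35]
income = [69000, 32000, 40000]
plt.scatter(age, income)
plt.xlabel("Age")
plt.ylabel("Income")
plt.show()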
textbooks/stats/Introductory_Statistics/Support_Course_for_Elementary_Statistics/Graphing_Points_and_Lines_in_Two_Dimensions/Interpreting_the_y-intercept_of_a_Line.txt
• Area of a Rectangle Rectangles are of fundamental importance in the portion of statistics that involves the uniform distribution. Every rectangle has a base, a height, and an area. • Factorials and Combination Notation When we need to compute probabilities, we often need to multiply descending numbers. For example, if there is a deck of 52 cards and we want to pick five of them without replacement, then there are 52 choices for the first pick, 51 choices for the second pick since one card has already been picked, 50 choices for the third, 49 choices for the fourth, and 48 for the fifth. • Order of Operations When we are given multiple arithmetic operations within a calculation, there is an established order in which we must do them, based on how the expression is written. Understanding these rules is especially important when using a calculator, since calculators are programmed to strictly follow the order of operations. This comes up in every topic in statistics, so knowing the order of operations is an essential skill for all successful statistics students to have. • Order of Operations in Expressions and Formulas We have already encountered the order of operations: Parentheses, Exponents, Multiplication and Division, Addition and Subtraction. In this section we will give some additional examples where the order of operations must be used properly to evaluate statistical formulas. • Perform Signed Number Arithmetic Even though negative numbers may seem uncommon in the real world, they do come up often when doing comparisons. For example, a common question is how much bigger one number is than another, which involves subtraction. In statistics we don't know the means until we collect the data and do the calculations. This often results in subtracting a larger number from a smaller number, which yields a negative number. We need to be able to perform arithmetic on both positive and negative numbers. • Powers and Roots It can be a challenge when we first try to use technology to raise a number to a power or take a square root of a number. In this section, we will go over some pointers on how to successfully take powers and roots of a number. We will also continue our practice with the order of operations, remembering that as long as there are no parentheses, exponents always come before all other operations. We will see that taking a power of a number comes up in probability. • Using Summation Notation When we have an expression with many numbers added to each other, there is a notation that makes the formulas easier to write down. Operations on Numbers Learning Outcomes • Find the area of a rectangle. • Find the height of a rectangle given that the area is equal to 1. Rectangles are of fundamental importance in the portion of statistics that involves the uniform distribution. Every rectangle has a base, a height, and an area. The formula for the area of a rectangle is: $\text{Area} = \text{Base} \times \text{Height} \label{AreaFormula}$ When working with the uniform distribution, the area represents the probability of an event being within the bounds of the base. Example $1$ Consider the rectangle shown below. Find the area of this rectangle. Solution We use the Area formula (Equation \ref{AreaFormula}). 
To find the base, we notice that it runs from 2 to 8, so we subtract these numbers to get the base: $Base\:=\:8\:-\:2\:=\:6\nonumber$ Next multiply by the height, 3, to get $Area\:=\:Base\:\times Height\:=\:6\:\times3\:=\:18\nonumber$ Example $2$ It turns out that rectangles whose area is equal to 1 are the ones that occur most often when working with a uniform distribution, since the total area represents a total probability of 1. Suppose that we know that the area of a rectangle that depicts a uniform distribution is equal to 1 and that the base of the rectangle goes from 4 to 7. Find the height of the rectangle. Solution First sketch the rectangle below, labeling the height as $h$. Next, find the base of the rectangle that goes from 4 to 7 by subtracting: $Base\:=\:7-4=3\nonumber$ Next, plug what we know into the area equation: $1\:=\:Area\:=\:Base\:\times Height\:=\:3\times h\nonumber$ This tells us that 3 times a number is equal to 1. To find out what the number is, we just divide both sides by 3 to get: $h=\frac{1}{3}\nonumber$ Therefore the height of an area 1 rectangle with base from 4 to 7 is $\frac{1}{3}$. Example $3$ Suppose that we know that the area of a rectangle that depicts a uniform distribution is equal to 1 and that the base of the rectangle goes from 3 to 5. There is a smaller rectangle within the larger one with the same height, but whose base goes from 3.7 to 4.4. Find the area of the smaller rectangle. Solution First, sketch the larger rectangle with the smaller rectangle shaded in. Next, we find the height of the rectangle. We know that the area of the larger rectangle is 1. The base goes from 3 to 5, so the base is $5-3=2$. Hence: $1\:=\:Area\:=\:Base\:\times Height\:=\:2h\nonumber$ Dividing by 2 gives us that the height is $\frac{1}{2}$ or 0.5. Now we are ready to find the area of the smaller rectangle. We first find the base by subtracting: $\text{Base}\:=\:4.4-3.7\:=\:0.7\nonumber$ Next, use the area formula: $Area\:=\:Base\:\times Height\:=\:0.7\:\times0.5\:=\:0.35\nonumber$ Exercise $1$ Suppose that elementary students' ages are uniformly distributed from 5 to 11 years old. The rectangle that depicts this has base from 5 to 11 and area 1. The rectangle that depicts the probability that a randomly selected child will be between 6.5 and 8.6 years old has base from 6.5 to 8.6 and the same height as the larger rectangle. Find the area of the smaller rectangle.
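For readers who like to check arithmetic with a computer, here is a minimal Python sketch of the calculations above. The function names (rectangle_area, uniform_height) are our own illustrative choices, not standard library functions.

```python
# Illustrative sketch of the rectangle calculations for a uniform distribution.
def rectangle_area(left, right, height):
    """Area = Base x Height, where the base runs from `left` to `right`."""
    return (right - left) * height

def uniform_height(left, right):
    """Height of a rectangle with area 1 whose base runs from `left` to `right`."""
    return 1 / (right - left)

print(rectangle_area(2, 8, 3))      # Example 1: 6 x 3 = 18
print(uniform_height(4, 7))         # Example 2: 1/3, about 0.3333
h = uniform_height(3, 5)            # Example 3: height 0.5
print(rectangle_area(3.7, 4.4, h))  # Example 3: 0.7 x 0.5, about 0.35 (tiny floating-point rounding)
```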
textbooks/stats/Introductory_Statistics/Support_Course_for_Elementary_Statistics/Operations_on_Numbers/Area_of_a_Rectangle.txt
Learning Outcomes 1. Evaluate a factorial. 2. Use combination notation for statistics applications. When we need to compute probabilities, we often need to multiply descending numbers. For example, if there is a deck of 52 cards and we want to pick five of them without replacement, then there are 52 choices for the first pick, 51 choices for the second pick since one card has already been picked, 50 choices for the third, 49 choices for the fourth, and 48 for the fifth. If we want to find out how many different outcomes there are, we can use what we call the multiplication principle and multiply them: $52\times51\times50\times49\times48$. If we wanted to pick all 52 of the cards one at a time, then this list would be excessively long. Instead there is a notation that describes multiplying all the way down to 1, called the factorial. It must be exciting, since we use the symbol "!" for the factorial. Example $1$ Calculate $4!$ Solution We use the definition, which says to start at 4 and multiply until we get to 1: $4!\:=\:4\times3\times2\times1\:=\:24 \nonumber$ Example $2$ If we pick 5 cards from a 52 card deck without replacement and the same two sets of 5 cards, but in different orders, are considered different, how many sets of 5 cards are there? Solution From the introduction, the number of sets is just: $52\times51\times50\times49\times48 \nonumber$ This is not quite a factorial since it stops at 48; however, we can think of this as $52!$ with $47!$ removed from it. In other words we need to find $\frac{52!}{47!} \nonumber$ We could just multiply the numbers from the original list, but it is a good idea to practice with your calculator or computer to find this using the ! symbol. When you do use technology, you should get: $\frac{52!}{47!}=311,875,200 \nonumber$ Combinations One of the most important applications of factorials is combinations, which count the number of ways of selecting a smaller collection from a larger collection when order is not important. For example, if there are 12 people in a room and you want to select a team of 4 of them, then the number of possibilities uses combinations. Here is the definition: Definition: Combinations The number of ways of selecting $r$ items without replacement from a collection of $n$ items when order does not matter is: $\binom{n}{r}\:=\:_nC_r\:=\:\frac{n!}{r!\left(n-r\right)!}$ Notice that there are a few notations. The first is more of a mathematical notation while the second is the notation that a calculator uses. For example, in the TI 84+ calculator, the notation for the number of combinations when selecting 4 from a collection of 12 is: $12\:_nC_r\:4 \nonumber$ There are many internet sites that will perform combinations. For example, the Math is Fun site asks you to put in $n$ and $r$ and also state whether order is important and repetition is allowed. If you click to make both "no" then you will get the combinations. Example $3$ Calculate $\binom{15}{11}=_{15}C_{11} \nonumber$ Solution Whether you use a hand calculator or a computer you should get the number: $1365$ Example $4$ The probability of winning the Powerball lottery if you buy one ticket is: $P(win)=\frac{1}{_{69}C_5\times26} \nonumber$ Calculate this probability. Solution First, let's calculate $_{69}C_5$. Using a calculator or computer, you should get 11,238,513. Next, multiply by 26 to get $11,238,513 \times 26=292,201,338 \nonumber$ Thus, there is a one in 292,201,338 chance of winning the Powerball lottery if you buy a ticket.
We can also write this as a decimal by dividing: $P\left(win\right)=\frac{1}{292,201,338}=0.000000003422 \nonumber$ As you can see, your chances of winning the Powerball are very small. Exercise A classroom is full of 28 students and there will be one president of the class and a "Congress" of 4 others selected. The number of different leadership group possibilities is: $28\times_{27}C_4 \nonumber$ Calculate this number to find out how many different leadership group possibilities there are.
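As a quick check, Python's standard math module can evaluate factorials and combinations directly. This is only a sketch of how the numbers above could be verified; math.comb requires Python 3.8 or later.

```python
import math

print(math.factorial(4))                         # 4! = 24
print(math.factorial(52) // math.factorial(47))  # 52!/47! = 311,875,200
print(math.comb(15, 11))                         # 15 C 11 = 1365
print(math.comb(69, 5) * 26)                     # Powerball denominator: 292,201,338
print(1 / (math.comb(69, 5) * 26))               # probability of winning, about 3.42e-09
```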
textbooks/stats/Introductory_Statistics/Support_Course_for_Elementary_Statistics/Operations_on_Numbers/Factorials_and_Combination_Notation.txt
Learning Outcomes 1. Use the order of operations to correctly perform multi-step arithmetic 2. Apply the order of operations to statistics-related questions. When we are given multiple arithmetic operations within a calculation, there is an established order in which we must do them, based on how the expression is written. Understanding these rules is especially important when using a calculator, since calculators are programmed to strictly follow the order of operations. This comes up in every topic in statistics, so knowing the order of operations is an essential skill for all successful statistics students to have. PEMDAS The order of operations is as follows: 1. Parentheses 2. Exponents 3. Multiplication and Division 4. Addition and Subtraction When there is a tie, the rule is to go from left to right. Notice that multiplication and division are listed together as item 3. If you see multiplication and division in the same expression, the rule is to go from left to right. Similarly, if you see addition and subtraction in the same expression, the rule is to go from left to right. The same goes for two of the same arithmetic operators. Example $1$ Evaluate: $20-6\div3+\left(2\times3^2\right)$ Solution We start with what is inside the parentheses: $2\times3^2$. Since exponents come before multiplication, we find $3^2=9$ first. We now have $20-6\div3+\left(2\times9\right) \nonumber$ We continue inside the parentheses and perform the multiplication: $2\times9=18$. This gives $20-6\div3+18 \nonumber$ Since division comes before addition and subtraction, we next calculate $6\div3=2$ to get $20-2+18 \nonumber$ Since subtraction and addition are tied, we go from left to right. We calculate: $20-2=18$ to get $18+18\:=36 \nonumber$ The key to arriving at the correct answer is to go slowly and write down each step in the arithmetic. Hidden Parentheses You may think that since you always have a calculator or computer at hand, you don't need to worry about the order of operations. Unfortunately, the way that expressions are written is not the same as the way that they are entered into a computer or calculator. In particular, exponents need to be treated with care, as do fraction bars. Example $3$ Evaluate $2.1^{6-2}$ Solution First, note that we use the symbol "^" to tell a computer or calculator to exponentiate. If you were to enter 2.1^6-2 into a computer, it would give you the answer of 83.766121, which is not correct, since the computer will first exponentiate and then subtract. Since the subtraction is within the exponent, it must be performed first. To tell a calculator or computer to perform the subtraction first, we use parentheses: 2.1^(6 - 2) = 19.4481 Example $4$: z-scores The "z-score" is defined by: $z=\frac{x-\mu}{\sigma} \nonumber$ Find the z-score rounded to one decimal place if: $x=2.323,\:\mu=1.297,\:\sigma=0.241 \nonumber$ Solution Once again, if we put these numbers into the z-score formula and use a computer or calculator by entering $2.323\:-\:1.297\:\div\:0.241$ we will get about -3.06, which is the wrong answer, since the calculator performs the division before the subtraction. Instead, we need to know that the fraction bar separates the numerator and the denominator, so the subtraction must be done first. We compute $\frac{2.323-1.297}{0.241}\:=\left(2.323-1.297\right)\div0.241=\:4.25726141 \nonumber$ Now round to one decimal place to get 4.3. Notice that if you rounded each number to one decimal place before doing the arithmetic, you would get exactly 5, which is very different. 4.3 is more accurate.
Exercise Suppose the equation of the regression line for the number of pairs of socks a person owns, $y$, based on the number of pairs of shoes, $x$, the person owns is $\hat y=6+2x \nonumber$ Use this regression line to predict the number of pairs of socks owned by a person who owns 4 pairs of shoes. Order of Operations in Expressions and Formulas Learning Outcomes • Use Order of Operations in Statistics Formulas. We have already encountered the order of operations: Parentheses, Exponents, Multiplication and Division, Addition and Subtraction. In this section, we will give some additional examples where the order of operations must be used properly to evaluate statistics. Example $1$ The sample standard deviation asks us to add up the squared deviations, divide by one less than the sample size, and then take the square root of the result. For example, suppose that there are three data values: 3, 5, 10. The mean of these values is 6. Then the standard deviation is: $s=\sqrt{\frac{\left(3-6\right)^2+\left(5-6\right)^2+\left(10-6\right)^2}{3-1}}\nonumber$ Evaluate this number rounded to the nearest hundredth. Solution The first thing in the order of operations is to do what is in the parentheses. We must subtract: $3-6=-3,\:\:\:5-6\:=\:-1,\:\:\:10-6=4 \nonumber$ We can substitute the numbers in to get: $s=\sqrt{\frac{\left(-3\right)^2+\left(-1\right)^2+\left(4\right)^2}{3-1}}\nonumber$ Next, we exponentiate: $\left(-3\right)^2=9,\:\:\:\left(-1\right)^2=1,\:\:\:4^2=16 \nonumber$ Substitute these in to get: $\sqrt{\frac{9+1+16}{3-1}} \nonumber$ We can now perform the addition inside the square root to get: $\sqrt{\frac{26}{3-1}} \nonumber$ Next, perform the subtraction in the denominator to get: $\sqrt{\frac{26}{2}} \nonumber$ We can divide to get: $\sqrt{13} \nonumber$ We don't want to do this by hand, so in a calculator or computer type in: $13^{0.5} = 3.61 \nonumber$ Example $2$ When calculating the probability that a value will be less than 4.6 if the value is taken randomly from a uniform distribution between 3 and 7, we have to calculate: $\left(4.6-3\right)\times\frac{1}{7-3} \nonumber$ Find this probability. Solution We can use a calculator or computer, but we must be very careful about the order of operations. Notice that there are implied parentheses due to the fraction bar. The answer is: $\dfrac{(4.6 - 3) \times 1}{7-3} \nonumber$ Using technology, we get: $\left(4.6-3\right)\times\frac{1}{7-3}\:=\:0.4 \nonumber$ Exercise When finding the upper bound, $U$, of a confidence interval given the lower bound, $L$, and the margin of error, $E$, we use the formula $U=\:L+2E \nonumber$ Find the upper bound of the confidence interval for the proportion of babies that are born preterm if the lower bound is 0.085 and the margin of error is 0.03.
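The same hidden-parentheses issues show up when typing expressions into a programming language. The short Python sketch below redoes a few of the calculations from these two sections; it is illustrative only, and the comments note approximate values because of floating-point rounding.

```python
# Order of operations in code: parentheses make the grouping explicit.
print(2.1 ** 6 - 2)      # about 83.77 : exponent first, then subtraction (wrong grouping)
print(2.1 ** (6 - 2))    # about 19.4481 : subtraction inside the exponent (correct)

# z-score example: the fraction bar implies parentheses around the numerator.
print((2.323 - 1.297) / 0.241)   # about 4.2573, which rounds to 4.3

# Sample standard deviation for the data 3, 5, 10 (mean 6).
s = (((3 - 6) ** 2 + (5 - 6) ** 2 + (10 - 6) ** 2) / (3 - 1)) ** 0.5
print(s)                          # sqrt(13), about 3.61

# Uniform probability: (4.6 - 3) * 1 / (7 - 3)
print((4.6 - 3) * 1 / (7 - 3))    # about 0.4
```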
textbooks/stats/Introductory_Statistics/Support_Course_for_Elementary_Statistics/Operations_on_Numbers/Order_of_Operations.txt
Learning Outcomes 1. Add signed numbers. 2. Subtract signed numbers. 3. Multiply signed numbers. 4. Divide signed numbers. Even though negative numbers seem not that common in the real world, they do come up often when doing comparisons. For example, a common question is how much bigger is one number than another, which involves subtraction. In statistics we don't know the means until we collect the data and do the calculations. This often results in subtracting a larger number from a smaller number which yields a negative number. Because of this and for many other reasons, we need to be able to perform arithmetic on both positive and negative numbers. Adding Signed Numbers We will assume that you are very familiar with adding positive numbers, but when there are negative numbers involved, there are some rules to follow: 1. When adding two negative numbers, ignore the negative signs, add the positive numbers and then make the result negative. 2. When adding two numbers such that one is positive and the other is negative, ignore the sign, subtract the smaller from the larger. If the larger of the positive numbers was originally negative, then make the result negative. Otherwise keep the result positive. Example $1$ Add: $-4+\left(-3\right) \nonumber$ Solution First we ignore the signs and add the positive numbers. $4+3=7 \nonumber$ Next we make the result negative. $-4+\left(-3\right)=-7 \nonumber$ Example $2$ Add: $-2+5 \nonumber$ Solution Since one of the numbers is positive and the other is negative, we subtract: $5-2=3 \nonumber$ Of the two numbers, 2 and 5, 5 is the larger one and started positive. Hence we keep the result positive: $-2+5=3 \nonumber$ Subtracting Numbers Subtraction comes up often when we want to find the width of an interval in statistics. Here are the cases for subtracting: $a-b$: 1. If $a\ge b\ge0$, then this is just ordinary subtraction. 2. If $b\ge a\ge0$, then find $b-a$ and make the result negative. 3. If $a<0,\:b\ge0$, then make both positive, add the two positive numbers and make the result negative. 4. If $b<0$ then you use the rule that subtracting a negative number is the same as adding the positive number. Example $3$ Evaluate $5-9$ Solution Since 9 is bigger than 5, we subtract: $9-5\:=\:4 \nonumber$ Next, we make the result negative to get: $5-9=-4 \nonumber$ Example $4$ Evaluate $-9-4$ Solution We are in the case $a<0,\:b\ge0$. Therefore, we first make both positive and add the positive numbers. $9+4\:=\:13 \nonumber$ The final step is to make the answer negative to get $-9-4=-13 \nonumber$ Example $5$: Uniform distributions In statistics, we call a distribution Uniform if an event is just as likely to be in any given interval within the bounds as any other interval within the bounds as long as the intervals are both of the same width. Finding the width of a given interval is usually the first step in solving a question involving uniform distributions. Suppose that the temperature on a winter day has a Uniform distribution on [-8,4]. Find the width of this interval Solution To find the width of an interval, we subtract the left endpoint from the right endpoint: $4\:-\:\left(-8\right) \nonumber$ Since we are subtracting a negative number, the "-" signs become addition: $4-\left(-8\right)\:=\:4+8=12 \nonumber$ Thus the width of the interval is 12. Multiplying and Dividing Signed Numbers When we have a multiplication or division problem, we just remember that two negatives make a positive. 
So if there are an even number of negative numbers that are multiplied or divided, the result is positive. If there are an odd number of negative numbers that are multiplied or divided, the result is negative. Example $6$ Perform the arithmetic: $\frac{\left(-6\right)\left(-10\right)}{\left(-4\right)\left(-5\right)} \nonumber$ Solution First, just ignore all of the negative signs and multiply the numerator and denominator separately: $\frac{\left(6\right)\left(10\right)}{\left(4\right)\left(5\right)}=\frac{60}{20} \nonumber$ Now divide: $\frac{60}{20}=\frac{6}{2}=3 \nonumber$ Finally, notice that there are four negative numbers in the original multiplication and division problem. Four is an even number, so the answer is positive: $\frac{\left(-6\right)\left(-10\right)}{\left(-4\right)\left(-5\right)}=3 \nonumber$ Example $7$ A confidence interval for the population mean difference in books read per year by men and women was found to be [-4,1]. Find the midpoint of this interval. Solution First recall that to find the midpoint of two numbers, we add them and then divide by 2. Hence, our first step is to add -4 and 1. Since 1 is positive and -4 is negative, we first subtract the two numbers: $4-1=3 \nonumber$ Of the two numbers, 4 and 1, 4 is the larger one and started negative. Hence we change the sign to negative: $-4+1=-3 \nonumber$ The final step in finding the midpoint is to divide by 2. First we divide them as positive numbers: $\dfrac{3}{2}=1.5 \nonumber$ Since the original quotient has a single negative number (an odd number of negative numbers), the answer is negative. Thus the midpoint of -4 and 1 is -1.5. Exercise The difference between the observed value and the expected value in linear regression is called the residual. Suppose that the three observed values are: -4, 2, and 5. The expected values are -3, 7, and -1. First find the residuals and then find the sum of the residuals.
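Python follows the same sign rules, so it can be used to confirm calculations like the ones above. A minimal sketch of the worked examples:

```python
# Signed-number arithmetic checks for the examples in this section.
print(-4 + (-3))     # -7
print(-2 + 5)        # 3
print(5 - 9)         # -4
print(-9 - 4)        # -13
print(4 - (-8))      # 12 : width of the interval [-8, 4]
print((-6) * (-10) / ((-4) * (-5)))   # 3.0 : four negatives (an even number), so positive
print((-4 + 1) / 2)  # -1.5 : midpoint of the interval [-4, 1]
```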
textbooks/stats/Introductory_Statistics/Support_Course_for_Elementary_Statistics/Operations_on_Numbers/Perform_Signed_Number_Arithmetic.txt
Learning Outcomes 1. Raise a number to a power using technology. 2. Take the square root of a number using technology. 3. Apply the order of operations when there is root or a power. It can be a challenge when we first try to use technology to raise a number to a power or take a square root of a number. In this section, we will go over some pointers on how to successfully take powers and roots of a number. We will also continue our practice with the order of operations, remembering that as long as there are no parentheses, exponents always come before all other operations. We will see that taking a power of a number comes up in probability and taking a root comes up in finding standard deviations. Powers Just about every calculator, computer, and smartphone can take powers of a number. We just need to remember that the symbol "^" is used to mean "to the power of". We also need to remember to use parentheses if we need to force other arithmetic to come before the exponentiation. Example $1$ Evaluate: $1.04^5$ and round to two decimal places. Solution This definitely calls for the use of technology. Most calculators, whether hand calculators or computer calculators, use the symbol "^" (shift 6 on the keyboard) for exponentiation. We type in: $1.04^5 = 1.2166529\nonumber$ We are asked to round to two decimal places. Since the third decimal place is a 6 which is 5 or greater, we round up to get: $1.04^5\approx1.22\nonumber$ Example $2$ Evaluate: $2.8^{5.3\times0.17}$ and round to two decimal places. Solution First note that on a computer we use "*" (shift 8) to represent multiplication. If we were to put in 2.8 ^ 5.3 * 0.17 into the calculator, we would get the wrong answer, since it will perform the exponentiation before the multiplication. Since the original question has the multiplication inside the exponent, we have to force the calculator to perform the multiplication first. We can ensure that multiplication occurs first by including parentheses: $2.8 ^{5.3 \times 0.17} = 2.52865\nonumber$ Now round to decimal places to get: $2.8^{5.3\times0.17}\approx2.53\nonumber$ Example $3$ If we want to find the probability that if we toss a six sided die five times that the first two rolls will each be a 1 or a 2 and the last three die rolls will be even, then the probability is: $\left(\frac{1}{3}\right)^2\:\times\left(\frac{1}{2}\right)^3\nonumber$ What is this probability rounded to three decimal places? Solution We find: $(1 / 3) ^ 2 (1 / 2) ^ 3 \approx 0.013888889\nonumber$ Now round to three decimal places to get $\left(\frac{1}{3}\right)^2\:\times\left(\frac{1}{2}\right)^3 \approx0.014\nonumber$ Square Roots Square roots come up often in statistics, especially when we are looking at standard deviations. We need to be able to use a calculator or computer to compute a square root of a number. There are two approaches that usually work. The first approach is to use the $\sqrt{\:\:}$ symbol on the calculator if there is one. For a computer, using sqrt() usually works. For example if you put 10*sqrt(2) in the Google search bar, it will show you 14.1421356. A second way that works for pretty much any calculator, whether it is a hand held calculator or a computer calculator, is to realize that the square root of a number is the same thing as the number to the 1/2 power. In order to not have to wrap 1/2 in parentheses, it is easier to type in the number to the 0.5 power. Example $3$ Evaluate $\sqrt{42}$ and round your answer to two decimal places. 
Solution Depending on the technology you are using you will either enter the square root symbol and then the number 42 and then close the parentheses if they are presented and then hit enter. If you are using a computer, you can use sqrt(42). The third way that will work for both is to enter: $42^{0.5} \approx 6.4807407\nonumber$ You must then round to two decimal places. Since 0 is less than 5, we round down to get: $\sqrt{42}\approx6.48\nonumber$ Example $4$ The "z-score" is for the value of 28 for a sampling distribution with sample size 60 coming from a population with mean 28.3 and standard deviation 5 is defined by: $z=\frac{28-28.3}{\frac{5}{\sqrt{60}}}\nonumber$ Find the z-score rounded to two decimal places. Solution We have to be careful about the order of operations when putting it into the calculator. We enter: $(28 - 28.3)/(5 / 60 ^\wedge 0.5) = -0.464758\nonumber$ Finally, we round to 2 decimal places. Since 4 is smaller than 5, we round down to get: $z=\frac{28-28.3}{\frac{5}{\sqrt{60}}}=-0.46\nonumber$ Exercise The standard error, which is an average of how far sample means are from the population mean is defined by: $\sigma_\bar x=\frac{\sigma}{\sqrt{n}}\nonumber$ where $\sigma_\bar x$ is the standard error, $\sigma$ is the standard deviation, and $n$ is the sample size. Find the standard error if the population standard deviation, $\sigma$, is 14 and the sample size, $n$, is 11. Using Summation Notation Learning Outcomes 1. Evaluate an expression that includes summation notation. 2. Apply summation notation to calculate statistics. This notation is called summation notation and appears as: $\sum_{i=1}^{n}a_i \nonumber$ In this notation, the $a_i$ is an expression that contains the index $i$ and you plug in 1 and then 2 and then 3 all the way to the last number $n$ and then add up all of the results. Example $1$ Calculate $\sum_{i=1}^{4}3i\nonumber$ Solution First notice that i = 1, then 2, then 3 and finally 4. We are supposed to multiply each of these by 3 and add them up: \begin{align*} \sum_{i=1}^{4}3i &= 3\left(1\right)+3\left(2\right)+3\left(3\right)+3\left(4\right) \[4pt] &=3+6+9+12\:=\:30 \end{align*}\nonumber Example $2$ The formula for the sample mean, sometimes called the average, is $\bar x\:=\:\frac{\sum_{i-1}^nx_i}{n}\nonumber$ A survey was conducted asking 8 older adults how many sexual partners they have had in their lifetime. Their answers were {4,12,1,3,4,9,24,7}. Use the formula to find the sample mean. Solution Notice that the numerator of the formula just tells us to add the numbers up. Computing the numerator first gives: $\sum_{i=1}^8x_i=4+12+1+3+4+9+24+7\:=64\nonumber$ Now that we have the numerator calculated, the formula tells us to divide by n, which is just 8. We have: $\bar x\:=\frac{\:64}{8}=8\nonumber$ Thus, the sample mean number of sexual partners this group had in their lifetimes is 8. Example $3$ The next most important statistic is the standard deviation. The formula for the sample standard deviation is: $s=\sqrt{\frac{\sum_{i=1}^n\left(x_i-\bar x\right)^2}{n-1}}\nonumber$ Let's consider the data in the previous example. Find the standard deviation. Solution The formula is quite complicated, but if tackle it one piece at a time using the order of operations properly, we can succeed in finding the sample standard deviation for the data. Notice that there are parentheses, so based on the order of operations, we must do the subtraction within the parentheses first. 
Since this is all part of the sum, we have eight different subtractions to do. From our calculations in the previous example, the sample mean was $\bar x = 8$. We compute the 8 subtractions: $4-\:8\:=\:-4,\:\:12-8=4,\:1-8=-7,\:3-8=-5,\:\nonumber$ $\:4-8=-4,\:9-8=1,\:24-8=16,\:7-8=-1\nonumber$ The next arithmetic to do is to square each of the differences to get: $\left(-4\right)^2=16,\:\:\left(4\right)^2=16,\left(-7\right)^2=49,\:\left(-5\right)^2=25,\:\nonumber$ $\left(-4\right)^2=16,\:1^2=1,\:16^2=256,\:(-1)^2=1\nonumber$ Now we have all the entries in the summation, so we add them all up: $16+16+49+25+16+1+256+1=380\nonumber$ Now we can write $s=\sqrt{\frac{380}{8-1}}=\sqrt{\frac{380}{7}}\nonumber$ We can put this into the calculator or computer to get: $s=\sqrt{\frac{380}{7}}=\:7.3679\nonumber$ Exercise: expected value The expected value, EV, is defined by the formula $EV=\sum_{i=1}^nx_i\:P\left(x_i\right)\nonumber$ Where $x_i$ are the possible outcomes and $P\left(x_i\right)$ are the probabilities of the outcomes occurring. Suppose the table below shows the number of eggs in a bald eagle clutch and the probabilities of that number occurring. Probability Distribution Table with Outcomes, x, and probabilities, P(x) x 1 2 3 4 P(x) 0.2 0.4 0.3 0.1 Find the expected value. Ex 1: Find a Sum Written in Summation / Sigma Notation Summation Notation and Expected Value
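If you want to verify these powers, roots, and summations with software, the following Python sketch mirrors the formulas above. The variable names are our own choices for illustration, and printed values may show tiny floating-point rounding.

```python
# Powers and roots.
print(42 ** 0.5)          # square root of 42, about 6.48

# Sample mean and sample standard deviation for the survey data above.
data = [4, 12, 1, 3, 4, 9, 24, 7]
n = len(data)
mean = sum(data) / n                              # sum of x_i divided by n -> 8.0
squared_deviations = [(x - mean) ** 2 for x in data]
s = (sum(squared_deviations) / (n - 1)) ** 0.5    # sqrt(380 / 7), about 7.368
print(mean, s)

# Expected value: sum of x * P(x) for the bald eagle clutch-size table.
outcomes = [1, 2, 3, 4]
probabilities = [0.2, 0.4, 0.3, 0.1]
ev = sum(x * p for x, p in zip(outcomes, probabilities))
print(ev)
```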
textbooks/stats/Introductory_Statistics/Support_Course_for_Elementary_Statistics/Operations_on_Numbers/Powers_and_Roots.txt
• Set Notation A set is just a collection of items and there are different ways of representing a set. We want to be able to both read the various ways and be able to write down the representation ourselves in order to best display the set. We have already seen how to represent a set on a number line, but that can be cumbersome, especially if we want to just use a keyboard. • The Complement of a Set Complements come up very often in statistics, so it is worth revisiting this, but instead of graphically we will focus on set notation. Recall that the complement of a set is everything that is not in that set. Sometimes it is much easier to find the probability of a complement than of the original set, and there is an easy relationship between the probability of an event happening and the probability of the complement of that event happening. • The Union and Intersection of Two Sets All statistics classes include questions about probabilities involving the union and intersections of sets. In English, we use the words "Or", and "And" to describe these concepts. In this section we will learn how to decipher these types of sentences and will learn about the meaning of unions and intersections. • Venn Diagrams Venn Diagrams are a simple way of visualizing how sets interact. Many times we will see a long wordy sentence that describes a numerical situation, but it is a challenge to understand. As the saying goes, "A picture is worth a thousand words." In particular a Venn Diagram describes how many elements are in each set displayed and how many elements are in their intersections and complements. Sets Learning Outcomes 1. Read set notation. 2. Describe sets using set notation. A set is just a collection of items and there are different ways of representing a set. We want to be able to both read the various ways and be able to write down the representation ourselves in order to best display the set. We have already seen how to represent a set on a number line, but that can be cumbersome, especially if we want to just use a keyboard. Imagine how difficult it would be to text a friend about a cool set if the only way to do this was with a number line. Fortunately, mathematicians have agreed on notation to describe a set. Example $1$ If we just have a few items to list, we enclose them in curly brackets "{" and "}" and separate the items with commas. For example, $\{\text{Miguel}, \text{Kristin}, \text{Leo}, \text{Shanice}\} \nonumber$ means the set the contains these four names. Example $2$ If we just have a long collection of numbers that have a clear pattern, we use the "..." notation to mean "start here, keep going, and end there". For example, $\{3, 6, 9, 12, ..., 90\}\nonumber$ This set contains more than just the five numbers that are shown. It is clear that the numbers are separated by three each. After the 12, even though it is not explicitly shown, is a 15 which is part of this set. It also contains 18, 21 and keeps going including all the multiples of 3 until it gets to its largest number 90. Example $3$ If we just have a collection of numbers that have a clear pattern, but never ends, we use the "..." without a number at the end. For example, $\left\{\frac{1}{2},\:\frac{2}{3},\:\frac{3}{4},\:\frac{4}{5},\:...\right\}\nonumber$ This set contains an infinite number of fractions, since there is no number followed by the "...". Example $4$ Sometimes we have a set that it best described by stating a rule. 
For example, if you want to describe the set of all people who are over 18 years old but not 30 years old, you name the variable to the left of a vertical bar and put the conditions to the right of the bar. We read the bar as "such that". $\left\{x\:|\:x>18\:and\:x\ne30\right\}\nonumber$ This can be read as "the set of all numbers $x$ such that $x$ is greater than 18 and $x$ is not equal to 30". Exercise Describe using set notation the collection of all positive even whole numbers that are not equal to 20 or 50.
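Python has a built-in set type whose notation is close to the mathematical notation used here, so it can be a handy way to experiment. A short illustrative sketch (the upper cutoff of 100 in the rule-based set is our own choice, since a computer needs a finite range):

```python
# Listing a set explicitly (curly brackets, items separated by commas).
names = {"Miguel", "Kristin", "Leo", "Shanice"}

# A set with a clear pattern: the multiples of 3 from 3 to 90.
multiples_of_three = set(range(3, 91, 3))

# A rule-based set over a finite range, mirroring {x | x > 18 and x != 30}
# for whole-number values up to 100.
rule_based = {x for x in range(101) if x > 18 and x != 30}

print(len(names), len(multiples_of_three))
print(30 in rule_based, 19 in rule_based)   # False True
```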
textbooks/stats/Introductory_Statistics/Support_Course_for_Elementary_Statistics/Sets/Set_Notation.txt
Learning Outcomes 1. Determine the complement of a set. 2. Write the complement of a set using set notation. We saw in the section "Represent an Inequality as an Interval on a Number Line" how to graph the complement for a set defined by an inequality. Complements come up very often in statistics, so it is worth revisiting this, but instead of graphically we will focus on set notation. Recall that the complement of a set is everything that is not in that set. Sometimes it is much easier to find the probability of a complement than of the original set, and there is an easy relationship between the probability of an event happening and the probability of the complement of that event happening. $P\left(A\right)=1-P\left(not\:A\right) \nonumber$ Example $1$ Find the complement of the set: $A=\left\{x\mid x<4\right\} \nonumber$ Solution The complement of the set of all numbers that are less than 4 is the set of all numbers that are at least as big as 4. Notice that the number 4 is not in the set A, since the inequality is strict (does not have an "="). Therefore the number 4 is in the complement of the set A. In set notation: $A^c=\left\{x\mid x\ge4\right\} \nonumber$ Example $2$ When computing probabilities the complement is sometimes much easier than the original set. For example suppose you roll a die 6 times and want to find the probability that the number 3 comes up at least once. Find the complement of this event. Solution First note that the event of at least once means that there could be one 3, two 3's, three 3's, four 3's, five 3's, or six 3's. It turns out that this would be a burden to deal with each of these possibilities. However the complement is quite easy. The complement of getting at least one 3 is that you go no 3's. Example $3$ Suppose that we want to find the probability that at least 20 people in the class have done their homework. Find the complement of this event. Solution Sometimes it is easiest to list nearby outcomes and then determine the outcomes that satisfy the event. Finally, to find the complement, you select the rest. First list numbers near 20: $...,\:17,\:18,\:19,\:20,\:21,\:22,\:... \nonumber$ Now, the ones that are at least 20 are all the ones including 20 and to the right of 20: $20,\:21,\:22,\:... \nonumber$ These are the large numbers. The complement includes all the small numbers. $...,\:17,\:18,\:19 \nonumber$ We can write this in set notation as: $\left\{x\mid x\le19\right\} \nonumber$ or equivalently $\left\{x\mid x < 20\right\} \nonumber$ Example $4$ Suppose a number is picked at random from the whole numbers from 1 to 10. Let A be the event that a number is both even and less than 8. Find the complement of A. Solution First, the set of numbers that are both even and less than 8 is: $A\:=\:\left\{2,\:4,\:6\right\} \nonumber$ The complement of this set is all the numbers from 1 to 10 that are not in A: $A^c=\left\{1,\:3,\:5,\:7,\:8,\:9,\:10\right\} \nonumber$ Exercise Suppose that two six sided dice are rolled. Let the A be the event that either the first die is even or the sum of the dice is greater than 5 or both have occurred. Find the complement of A.
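When the outcomes form a finite set, a complement can be computed with set subtraction in Python. The sketch below redoes Example 4; the universal set of whole numbers 1 through 10 is written out explicitly, and the comments note approximate values because of floating-point rounding.

```python
# Complement of A = {even and less than 8} inside the whole numbers 1 through 10.
universe = set(range(1, 11))
A = {x for x in universe if x % 2 == 0 and x < 8}    # {2, 4, 6}
A_complement = universe - A                          # {1, 3, 5, 7, 8, 9, 10}
print(sorted(A), sorted(A_complement))

# The probability rule P(A) = 1 - P(not A), using equally likely outcomes.
p_A = len(A) / len(universe)
p_not_A = len(A_complement) / len(universe)
print(p_A, 1 - p_not_A)    # both equal 0.3 (up to floating-point rounding)
```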
textbooks/stats/Introductory_Statistics/Support_Course_for_Elementary_Statistics/Sets/The_Complement_of_a_Set.txt
Learning Outcomes 1. Find the union of two sets. 2. Find the intersection of two sets. 3. Combine unions intersections and complements. All statistics classes include questions about probabilities involving the union and intersections of sets. In English, we use the words "Or", and "And" to describe these concepts. For example, "Find the probability that a student is taking a mathematics class or a science class." That is expressing the union of the two sets in words. "What is the probability that a nurse has a bachelor's degree and more than five years of experience working in a hospital." That is expressing the intersection of two sets. In this section we will learn how to decipher these types of sentences and will learn about the meaning of unions and intersections. Unions An element is in the union of two sets if it is in the first set, the second set, or both. The symbol we use for the union is $\cup$. The word that you will often see that indicates a union is "or". Example $1$: Union of Two sets Let: $A=\left\{2,5,7,8\right\} \nonumber$ and $B=\lbrace1,4,5,7,9\rbrace \nonumber$ Find $A\cup B$ Solution We include in the union every number that is in A or is in B: $A\cup B=\left\{1,2,4,5,7,8,9\right\} \nonumber$ Example $2$: Union of Two sets Consider the following sentence, "Find the probability that a household has fewer than 6 windows or has a dozen windows." Write this in set notation as the union of two sets and then write out this union. Solution First, let A be the set of the number of windows that represents "fewer than 6 windows". This set includes all the numbers from 0 through 5: $A=\left\{0,1,2,3,4,5\right\} \nonumber$ Next, let B be the set of the number of windows that represents "has a dozen windows". This is just the set that contains the single number 12: $B=\left\{12\right\} \nonumber$ We can now find the union of these two sets: $A\cup B=\left\{0,1,2,3,4,5,12\right\} \nonumber$ Intersections An element is in the intersection of two sets if it is in the first set and it is in the second set. The symbol we use for the intersection is $\cap$. The word that you will often see that indicates an intersection is "and". Example $3$: Intersection of Two sets Let: $A=\left\{3,4,5,8,9,10,11,12\right\} \nonumber$ and $B=\lbrace5,6,7,8,9\rbrace \nonumber$ Find $A\cap B$. Solution We only include in the intersection that numbers that are in both A and B: $A\cap B=\left\{5,8,9\right\} \nonumber$ Example $4$: Intersection of Two sets Consider the following sentence, "Find the probability that the number of units that a student is taking is more than 12 units and less than 18 units." Assuming that students only take a whole number of units, write this in set notation as the intersection of two sets and then write out this intersection. Solution First, let A be the set of numbers of units that represents "more than 12 units". This set includes all the numbers starting at 13 and continuing forever: $A=\left\{13,\:14,\:15,\:...\right\} \nonumber$ Next, let B be the set of the number of units that represents "less than 18 units". This is the set that contains the numbers from 1 through 17: $B=\left\{1,\:2,\:3,\:...,\:17\right\} \nonumber$ We can now find the intersection of these two sets: $A\cap B=\left\{13,\:14,\:15,\:16,\:17\right\} \nonumber$ Combining Unions, Intersections, and Complements One of the biggest challenges in statistics is deciphering a sentence and turning it into symbols. 
This can be particularly difficult when there is a sentence that does not have the words "union", "intersection", or "complement", but it does implicitly refer to these words. The best way to become proficient in this skill is to practice, practice, and practice more. Example $5$ Consider the following sentence, "If you roll a six sided die, find the probability that it is not even and it is not a 3." Write this in set notation. Solution First, let A be the set of even numbers and B be the set that contains just 3. We can write: $A=\left\{2,4,6\right\},\:\:\:B\:=\:\left\{3\right\} \nonumber$ Next, since we want "not even" we need to consider the complement of A: $A^c=\left\{1,3,5\right\} \nonumber$ Similarly since we want "not a 3", we need to consider the complement of B: $B^c=\left\{1,2,4,5,6\right\} \nonumber$ Finally, we notice the key word "and". Thus, we are asked to find: $A^c\cap B^c=\:\left\{1,3,5\right\}\cap\left\{1,2,4,5,6\right\}=\left\{1,5\right\} \nonumber$ Example $6$ Consider the following sentence, "If you randomly select a person, find the probability that the person is older than 8 or is both younger than 6 and is not younger than 3." Write this in set notation. Solution First, let A be the set of people older than 8, B be the set of people younger than 6, and C be the set of people younger than 3. We can write: $A=\left\{x\mid x>8\right\},\:\:\:B\:=\:\left\{x\mid x<6\right\},\:C=\left\{x\mid x<3\right\} \nonumber$ We are asked to find $A\cup\left(B\cap C^c\right) \nonumber$ Notice that the complement of "$<$" is "$\ge$". Thus: $C^c=\left\{x\mid x\ge3\right\} \nonumber$ Next we find: $B\cap C^c=\left\{x\mid x<6\right\}\cap\left\{x\mid x\ge3\right\}=\left\{x\mid3\le x<6\right\} \nonumber$ Finally, we find: $A\cup\left(B\cap C^c\right)=\:\left\{x\mid x>8\right\}\cup\left\{x\mid3\le x<6\right\} \nonumber$ The clearest way to display this union is on a number line. The number line below displays the answer: Exercise Suppose that we pick a person at random and are interested in finding the probability that the person's birth month came after July and did not come after September. Write this event using set notation.
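Python's set operators | (union) and & (intersection) mirror the symbols ∪ and ∩, which makes it easy to check small examples like the ones above. A minimal sketch:

```python
# Union and intersection, following Examples 1, 3, and 5 above.
A = {2, 5, 7, 8}
B = {1, 4, 5, 7, 9}
print(sorted(A | B))    # union: [1, 2, 4, 5, 7, 8, 9]

C = {3, 4, 5, 8, 9, 10, 11, 12}
D = {5, 6, 7, 8, 9}
print(sorted(C & D))    # intersection: [5, 8, 9]

# "Not even and not a 3" on a six-sided die: complements, then intersection.
die = {1, 2, 3, 4, 5, 6}
even = {2, 4, 6}
three = {3}
print(sorted((die - even) & (die - three)))   # [1, 5]
```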
textbooks/stats/Introductory_Statistics/Support_Course_for_Elementary_Statistics/Sets/The_Union_and_Intersection_of_Two_Sets.txt
Learning Outcomes 1. Read a Venn Diagram to extract information. 2. Draw a Venn Diagram. Venn Diagrams are a simple way of visualizing how sets interact. Many times we will see a long wordy sentence that describes a numerical situation, but it is a challenge to understand. As the saying goes, "A picture is worth a thousand words." In particular, a Venn Diagram describes how many elements are in each set displayed and how many elements are in their intersections and complements. Example \(1\) Consider the Venn Diagram shown below. Describe how many elements are in each of the sets. Solution Once we understand how to read the Venn Diagram we can use it in many applications. For the Venn Diagram above, there are 12 from A that are not in B, there are 5 in both A and B, and there are 14 in B that are not in A. If we wanted to find the total in A, we would just add 12 and 5 to get 17 total in A. Similarly, there are 19 total in B. Example \(2\) Consider the Venn Diagram below that shows the results of a study asking students whether their first college class was at the same place they are at now, whether they are right handed, and whether they are enjoying their experience at their college. Determine how many students are: 1. Right handed and enjoy college. 2. At the same place but not right handed. 3. Enjoy college. Solution 1. To be right handed and enjoy college they must be in both the Right circle and the Enjoying circle. Notice that the numbers 12 and 15 are in both these circles. Thus, there are 12 + 15 = 27 total students who are right handed and enjoy college. 2. To be in the same place and not be right handed, the number must be in the same place circle but not in the right circle. We see that 2 and 22 are the numbers in the same place circle but not in the right circle. Adding these gives 2 + 22 = 24 total students who are at the same place but not right handed. 3. We must count all the numbers in the Enjoying circle. These are 2, 10, 12, and 15. Adding these up gives: 2 + 10 + 12 + 15 = 39. Thus, 39 students enjoy college. Example \(3\) Suppose that a group of 40 households was looked at. 24 of them housed dogs, 30 of them housed cats, and 18 of them housed both cats and dogs. Sketch a Venn Diagram that displays this information. Solution To get ready to sketch the Venn Diagram, we first plan on what it will look like. There are two main groups here: houses with dogs and houses with cats. Therefore we will have two circles. The intersection will have the number 18. Since there are 24 houses with dogs and 18 also have cats, we subtract 24 - 18 = 6 to find the houses with dogs but no cats. Similarly, we subtract 30 - 18 = 12 houses with cats and no dogs. If we add 18 + 6 + 12 = 36, we find the total number of houses with a dog, cat or both. Therefore there are 40 - 36 = 4 houses without any pets. Now we are ready to put in the numbers into the Venn Diagram. It is shown below. Exercise Suppose that a group of 55 businesses was researched. 29 of them were open on the weekends, 25 of them paid more than minimum wage for everyone , 17 of them were both open on the weekends and paid more than minimum wage for everyone, and 4 of them were government consulting businesses. None of the government consulting businesses were open on the weekend nor did they pay more than minimum wage for everyone. Sketch a Venn Diagram that displays this information.
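The region counts in a two-set Venn diagram can also be computed directly from the totals, as in Example 3. The short Python sketch below is only illustrative; the variable names are our own.

```python
# Households example: 40 total, 24 with dogs, 30 with cats, 18 with both.
total = 40
dogs = 24
cats = 30
both = 18

dogs_only = dogs - both                              # 6
cats_only = cats - both                              # 12
neither = total - (dogs_only + cats_only + both)     # 4

print(dogs_only, cats_only, both, neither)
```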
textbooks/stats/Introductory_Statistics/Support_Course_for_Elementary_Statistics/Sets/Venn_Diagrams.txt
• Distance between Two Points on a Number Line The number line is the main visual base in statistics and we often want to look at two points on the number line and determine the distance between them. This is used to find the base of a rectangle or another figure that lies above the number line. By the end of this section, you will be able to determine the distance between any two points on a number line that comes from a statistics application. • Plotting Points and Intervals on the Number Line The number line is of fundamental importance and is used repeatedly in statistics. It is a tool to visualize all of the possible outcomes of a study and to organize the results of the study. Often a diagram is placed above the number line to provide us with a picture of the results. By the end of this section, you will be able to plot points and intervals on a number line and use these plots to understand the possible outcomes and actual outcomes of studies. • Represent an Inequality as an Interval on a Number Line Inequalities come up frequently in statistics and it is often helpful to plot the inequality on the number line in order to visualize the inequality. This helps both for inequalities that involve real numbers and for inequalities that refer to just integer values. As an extension of this idea, we often want to look at the complement of an inequality, that is all numbers that make the inequality false. In this section we will look at examples that accomplish this task. • The Midpoint As the word sounds, "midpoint" means "the point in the middle". Finding a midpoint is not too difficult and has applications in many areas of statistics, from confidence intervals to sketching distributions, to means. Thumbnail: Demonstration the addition on the line number. (CC BY 3.0 unported; Stephan Kulla). The Number Line Learning Outcomes 1. Calculate the distance between two points on a number line when both are non-negative. 2. Calculate the distance between two points on a number line when at least one is negative. The number line is the main visual base in statistics and we often want to look at two points on the number line and determine the distance between them. This is used to find the base of a rectangle or another figure that lies above the number line. By the end of this section, you will be able to determine the distance between any two points on a number line that comes from a statistics application. Finding the Distance Between Two Points with Positive Coordinates on a Number Line The key to finding the distance between two points is to remember that the geometric definition of subtraction is the distance between the two numbers as long as we subtract the smaller number from the larger. Example $1$ Find the distance between the points 2.5 and 9.8 as shown below on the number line. Solution To find the distance, we just subtract: $9.8\:-\:2.5\:=\:7.3 \nonumber$ Example $2$ When finding probabilities involving a uniform distribution, we have to find the base of a rectangle that lies on a number line. Find the base of the rectangle shown below that represents a uniform distribution from 2 to 9. Solution We just subtract: $9\:-\:2\:=\:7 \nonumber$ Finding the Distance Between Two Points on a Number Line When the Coordinates Are Not Both Positive In statistics, it is common to have points on a number line where the points are not both positive and we need to find the distance between them. 
Example $3$ The diagram below shows the confidence interval for the difference between the proportion of men who are planning on going into the health care profession and the proportion of women. What is the width of the confidence interval? Solution Whenever we want to find the distance between two numbers, we always subtract. Recall that subtracting a negative number is adding. $0.01\:-\:\left(-0.04\right)\:=\:0.01\:+\:0.04\:=\:0.05 \nonumber$ Therefore the width of the confidence interval is 0.05. Example $4$ The mean value of credit card accounts is -6358 dollars. A study was done of recent college graduates and found their mean value for their credit card accounts was -5215 dollars. The number line below shows this situation. How far apart are these values? Solution We subtract the two numbers. Recall that when both numbers are negative and we subtract the left value from the right value, we can make both numbers positive and subtract the smaller from the larger. $-5215\:-\:\left(-6358\right)\:=\:6358\:-\:5215\:=\:1143 \nonumber$ Thus the mean credit card balances are \$1143 apart. Exercise In statistics, we are asked to find a z-score, which tells us how unusual an event is. The first step in finding a z-score is to calculate the distance a value is from the mean. The number line below depicts the mean of 18.56 and the value of 20.43. Find the distance between these two points.
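Because distance on a number line is just "larger minus smaller," a one-line helper in Python covers every case, including negative coordinates. A minimal sketch (printed values may show tiny floating-point rounding):

```python
def distance(a, b):
    """Distance between two points on a number line (order does not matter)."""
    return abs(a - b)

print(distance(2.5, 9.8))        # 7.3
print(distance(-0.04, 0.01))     # 0.05 : width of the confidence interval
print(distance(-6358, -5215))    # 1143 : gap between the two mean balances
```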
textbooks/stats/Introductory_Statistics/Support_Course_for_Elementary_Statistics/The_Number_Line/Distance_between_Two_Points_on_a_Number_Line.txt
Learning Outcomes 1. Plot a point on the number line 2. Plot an interval on the number line The number line is of fundamental importance and is used repeatedly in statistics. It is a tool to visualize all of the possible outcomes of a study and to organize the results of the study. Often a diagram is placed above the number line to provide us with a picture of the results. By the end of this section, you will be able to plot points and intervals on a number line and use these plots to understand the possible outcomes and actual outcomes of studies. Drawing Points on a Number Line A number line is just a horizontal line that is used to display all the possible outcomes. It is similar to a ruler in that it helps us describe and compare numbers. Similar to a ruler that can be marked with many different scales such as inches or centimeters, we get to choose the scale of the number line and where the center is. Example \(1\) The standard normal distribution is plotted above a number line. The most important values are the integers between -3 and 3. The number 0 is both the mean (average) and median (center). 1. Plot the number line that best displays this information. 2. Plot the value -1.45 on this number line. Solution 1. We sketch a line, mark 0 as the center, and label the numbers -3, -2, -1, 0, 1, 2, 3 from left to right. 1. To plot the point -1.45, we first have to understand that this number is between -1 and -2. It is close to half way between -1 and -2. We put a circle on the number line that is close to halfway between these values as shown below. Example \(2\) When working with box plots, we need to first set up a number line that labels what is called the five point summary: Minimum, First Quartile, Median, Third Quartile, and Maximum. Suppose the five point summary for height in inches for a basketball team is: 72,74,78,83,89. Plot these points on a number line Solution When plotting points on a number line, we first have to decide what range of the line we want to show in order to best display the points that appear. Technically all numbers are on every number line, but that does not mean we show all numbers. In this example, the numbers are all between 70 and 90, so we certainly don't need to display the number 0. A good idea is to let 70 be on the far left and 90 be on the far right and then plot the points between them. We also have to decide on the spacing of the tick marks. Since the range from 70 to 90 is 20, this may be too many numbers to display. Instead we might want to count by 5's. Below is the number line that shows the numbers 70 to 90 and counts by 5's. The five point summary is plotted on this line. Exercise A histogram will be drawn to display the annual income that experienced registered nurses make. The boundaries of the bars of the histogram are: \$81,000, \$108,000, \$135,000, \$162,000, and \$189,000. Plot these points on a number line. Plotting an Interval on a Number Line Often in statistics, instead of just having to plot a few points on a number line, we need to instead plot a whole interval on the number line. This is especially useful when we want to exhibit a range of values between two numbers, to the left of a number or to the right of a number. Example \(3\) A 95% confidence interval for the proportion of Americans who work on weekends is found to be 0.24 to 0.32, with the center at 0.28. Use a number line to display this information. 
Solution We just draw a number line, include the three key numbers: 0.24, 0.32, and 0.28, and highlight the part of the interval between 0.24 and 0.32. Example \(4\): rejection region In hypothesis testing, we sketch something called the rejection region, which is an interval that goes off to infinity or to negative infinity. Suppose that the mean number of hours spent working on the week's homework is 4.2. The rejection region for the hypothesis test is all numbers larger than 7.3 hours. Plot the mean and sketch the rejection region on a number line. Solution We plot the point 4.2 and shade everything to the right of 7.3 on the number line.
textbooks/stats/Introductory_Statistics/Support_Course_for_Elementary_Statistics/The_Number_Line/Plotting_Points_and_Intervals_on_the_Number_Line.txt
Learning Outcomes 1. Graph and inequality on a number line. 2. Graph the complement on a number line for both continuous and discrete variables. Inequalities come up frequently in statistics and it is often helpful to plot the inequality on the number line in order to visualize the inequality. This helps both for inequalities that involve real numbers and for inequalities that refer to just integer values. As an extension of this idea, we often want to look at the complement of an inequality, that is all numbers that make the inequality false. In this section we will look at examples that accomplish this task. Sketching an Inequality on a number line where the possible values are real numbers. There are four different inequalities: \(<,\:\le,\:>,\:\ge\). What makes this the most challenging is when they are expressed in words. Here are some of the words that are used for each: • \(<\): "Less Than", "Smaller", "Lower", "Younger" • \(\le\): "Less Than or Equal to", "At Most", "No More Than", "Not to Exceed" • \(>\): "Greater Than", "Larger", "Higher", "Bigger", "Older", "More Than" • \(\ge\): "Greater Than or Equal to", "At Least", "No Less than" These are the most common words that correspond to the inequalities, but there are others that come up less frequently. Example \(1\) Graph the inequality: \(3<x\le5\) on a number line Solution First notice that the interval does not include the number 3, but does include the number 5. We can represent not including a number with an open circle and including a number with a closed circle. The number line representation of the inequality is shown below. Example \(2\) In statistics, we often want to find probabilities of an event being at least as large or no more than a given value. It helps to first plot the interval on a number line. Suppose you want to find the probability that you will have to wait in line for at least 4minutes. Sketch this inequality on a number line. Solution First, notice that "At Least" has the symbol \(\ge\). Thus, we have a closed circle on the number 4. There is no upper bound, so we draw a long arrow from 4 to the right of 4. The solution is shown below Example \(3\) Another main topic that comes up in statistics is confidence intervals. For example in recent poll to see the percent of Americans who think that Congress is doing a good job found that a 95% confidence interval had lower bound of 0.18 and an upper bound of 0.24. This can be written as [0.18,0,24]. Sketch this interval on the number line. Solution The first thing we need to do is decide on the tick marks to put on the number line. If we counted by 1's, then the interval of interest would be too small to stand out. Instead we will count by 0.1's. The number line is shown below. Example \(4\) Often in statistics, we deal with discrete variables. Most of the time this will mean that only whole number values can occur. For example, you want to find out the probability that a college student is taking at most three classes. Graph this on a number line. Solution First note that the outcomes can only be whole numbers. Second, note that "at most" means \(\le\). Thus the possible outcomes are: 0, 1, 2, and 3. The number line below displays these outcomes. Graphing the Complement In statistics, we often want to graph the complement of an interval. The complement means everything that is not in the interval. Example \(5\) Graph the complement of the interval [2,4). 
Solution Notice that the complement of numbers inside the interval between 2 and 4 is the numbers outside that interval. This will consist of the numbers to the left of 2 and to the right of 4. Since the number 2 is included in the original interval, it will not be included in the complement. Since the number 4 is not included in the original interval, it will be included in the complement. The complement is shown on the number line below. Example \(6\) Some calculators can only find probabilities for values less than a certain number. If we want the probability of an interval greater than a number, we need to use the complement. Suppose that you want to find the probability that a person will have traveled to more than two foreign countries in the last twelve months. Find the complement of this and graph it on a number line. Solution First notice that only whole numbers are possible since it does not make sense to go to a fractional number of countries. Second note that the lowest number that is more than 2 is 3. If 3 is included in the original list, then 3 will not be included in the complement. Thus, the highest number that is in the complement of "more than 2" is 2. The number line below shows the complement of more than 2. Exercise Suppose you want to find the probability that at least 4 people in your class have a last name that contains the letter "W". To make this calculation you will need to first find the complement of "at least 4". Sketch this complement on the number line.
textbooks/stats/Introductory_Statistics/Support_Course_for_Elementary_Statistics/The_Number_Line/Represent_an_Inequality_as_an_Interval_on_a_Number_Line.txt
Learning Outcomes 1. Find the midpoint between two numbers. 2. Sketch the midpoint of two numbers on a number line. As the word sounds, "midpoint" means "the point in the middle". Finding a midpoint is not too difficult and has applications in many areas of statistics, from confidence intervals to sketching distributions, to means. Finding the Midpoint Between Two Numbers If we are given two numbers, then the midpoint is just the average of the two numbers. To calculate the midpoint, we add them up and then divide the result by 2. The formula is as follows: Definition: the Midpoint Let $a$ and $b$ be two numbers. Then the midpoint, $M$, of these two numbers is $M\:=\frac{a+b}{2} \label{midpoint}$ Example $1$ Find the midpoint of the numbers $3.5$ and $7.2$. Solution The most important thing about finding the midpoint is that the addition of the two numbers must occur before the division by 2. We can either do this one step at a time in our calculator or we can enclose the sum in parentheses. In this example we will perform the addition first: $3.5+7.2\:=\:10.7 \nonumber$ Now we are ready to divide by 2: $\frac{10.7}{2}=5.35 \nonumber$ Thus the midpoint of 3.5 and 7.2 is 5.35. Example $2$ A major topic in statistics is the confidence interval, which tells us the most likely interval that the mean or the proportion will lie in. Often the lower and upper bounds of the confidence interval are given, but the midpoint of these two numbers is the best guess for what we are looking for. Suppose a 95% confidence interval for the difference between two means is -1.34 and 2.79. Find the midpoint of these numbers, which is the best guess for the difference between the two means. Solution We use the formula for the midpoint (Equation \ref{midpoint}): $M\:=\:\frac{a+b}{2}=\:\frac{-1.34+2.79}{2} \nonumber$ Now let's use a calculator. We will need parentheses around the numerator: $\left(-1.34+2.79\right)\div2\:=\:0.725 \nonumber$ Thus, the midpoint of the numbers -1.34 and 2.79 is 0.725. Sketching the Midpoint on a Number Line Sketching the midpoint on a number line often conveys more than just writing down its value, and such diagrams are of fundamental importance in statistics. Example $3$ Sketch the points -3, 5 and the midpoint of these two numbers on a number line. Solution We start by finding the midpoint using the midpoint formula (Equation \ref{midpoint}): $M\:=\frac{\:-3+5}{2}=\left(-3+5\right)\div2\:=\:1 \nonumber$ Now we sketch these three points on the number line: Example $4$: hypothesis testing Another application of the midpoint involves hypothesis testing. Sometimes we are given the hypothesized mean, which is the midpoint. We are also given the sample mean, which is either the left or right endpoint. The goal is to find the other endpoint. Suppose that the midpoint (hypothesized mean) is at 3.8 and the right endpoint (sample mean) is at 5.1. Find the value of the left endpoint. Solution It helps to sketch the diagram on the number line as shown below. Now since 3.8 is the midpoint, the distance from the left endpoint to the midpoint is equal to the distance from 3.8 to 5.1. The distance from 3.8 to 5.1 is: $5.1\:-\:3.8\:=\:1.3 \nonumber$ Therefore the left endpoint is 1.3 to the left of 3.8. This can be found by subtracting the two numbers: $3.8\:-\:1.3\:=\:2.5 \nonumber$ Therefore the left endpoint is at 2.5. Exercise Suppose that the midpoint (hypothesized proportion) is at 0.31 and the left endpoint (sample proportion) is at 0.28. Find the value of the right endpoint.
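For readers who want to check the arithmetic with software, here is a small Python sketch (the function names are illustrative, not from any library). It applies the midpoint formula $M = (a+b)/2$ and the equal-distance reasoning of Example 4; because of floating point, printed values may differ in the last decimal places.

```python
def midpoint(a, b):
    """Average of a and b: the point halfway between them."""
    return (a + b) / 2

def other_endpoint(mid, known):
    """Endpoint that lies the same distance from mid as the known endpoint."""
    return 2 * mid - known   # same as mid - (known - mid)

print(midpoint(3.5, 7.2))          # 5.35
print(midpoint(-1.34, 2.79))       # approximately 0.725
print(other_endpoint(3.8, 5.1))    # approximately 2.5  (left endpoint in Example 4)
print(other_endpoint(0.31, 0.28))  # approximately 0.34 (right endpoint in the exercise)
```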
textbooks/stats/Introductory_Statistics/Support_Course_for_Elementary_Statistics/The_Number_Line/The_Midpoint.txt
Introduction Probability models and techniques permeate many important areas of modern life. A variety of types of random processes, reliability models and techniques, and statistical considerations in experimental work play a significant role in engineering and the physical sciences. The solutions of management decision problems use, as aids, decision analysis, waiting line theory, inventory theory, time series, and cost analysis under uncertainty — all rooted in applied probability theory. Methods of statistical analysis employ probability analysis as an underlying discipline. Modern probability developments are increasingly sophisticated mathematically. To utilize these, the practitioner needs a sound conceptual basis which, fortunately, can be attained at a moderate level of mathematical sophistication. There is need to develop a feel for the structure of the underlying mathematical model, for the role of various types of assumptions, and for the principal strategies of problem formulation and solution. Probability has roots that extend far back into antiquity. The notion of “chance” played a central role in the ubiquitous practice of gambling. But chance acts were often related to magic or religion. For example, there are numerous instances in the Hebrew Bible in which decisions were made “by lot” or some other chance mechanism, with the understanding that the outcome was determined by the will of God. In the New Testament, the book of Acts describes the selection of a successor to Judas Iscariot as one of “the Twelve.” Two names, Joseph Barsabbas and Matthias, were put forward. The group prayed, then drew lots, which fell on Matthias. Early developments of probability as a mathematical discipline, freeing it from its religious and magical overtones, came as a response to questions about games of chance played repeatedly. The mathematical formulation owes much to the work of Pierre de Fermat and Blaise Pascal in the seventeenth century. The game is described in terms of a well defined trial (a play); the result of any trial is one of a specific set of distinguishable outcomes. Although the result of any play is not predictable, certain “statistical regularities” of results are observed. The possible results are described in ways that make each result seem equally likely. If there are N such possible “equally likely” results, each is assigned a probability 1/N. The developers of mathematical probability also took cues from early work on the analysis of statistical data. The pioneering work of John Graunt in the seventeenth century was directed to the study of “vital statistics,” such as records of births, deaths, and various diseases. Graunt determined the fractions of people in London who died from various diseases during a period in the early seventeenth century. Some thirty years later, in 1693, Edmond Halley (for whom the comet is named) published the first life insurance tables. To apply these results, one considers the selection of a member of the population on a chance basis. One then assigns the probability that such a person will have a given disease. The trial here is the selection of a person, but the interest is in certain characteristics. We may speak of the event that the person selected will die of a certain disease — say “consumption.” Although it is a person who is selected, it is death from consumption which is of interest. Out of this statistical formulation came an interest not only in probabilities as fractions or relative frequencies but also in averages or expectations.
These averages play an essential role in modern probability. We do not attempt to trace this history, which was long and halting, though marked by flashes of brilliance. Certain concepts and patterns which emerged from experience and intuition called for clarification. We move rather directly to the mathematical formulation (the “mathematical model”) which has most successfully captured these essential ideas. This model, rooted in the mathematical system known as measure theory, is called the Kolmogorov model, after the brilliant Russian mathematician A. N. Kolmogorov (1903-1987). Kolmogorov succeeded in bringing together various developments begun at the turn of the century, principally in the work of E. Borel and H. Lebesgue on measure theory. Kolmogorov published his epochal work in German in 1933. It was translated into English and published in 1956 by Chelsea Publishing Company. Outcomes and events Probability applies to situations in which there is a well defined trial whose possible outcomes are found among those in a given basic set. The following are typical. • A pair of dice is rolled; the outcome is viewed in terms of the numbers of spots appearing on the top faces of the two dice. If the outcome is viewed as an ordered pair, there are thirty six equally likely outcomes. If the outcome is characterized by the total number of spots on the two dice, then there are eleven possible outcomes (not equally likely). • A poll of a voting population is taken. Outcomes are characterized by responses to a question. For example, the responses may be categorized as positive (or favorable), negative (or unfavorable), or uncertain (or no opinion). • A measurement is made. The outcome is described by a number representing the magnitude of the quantity in appropriate units. In some cases, the possible values fall among a finite set of integers. In other cases, the possible values may be any real number (usually in some specified interval). • Much more sophisticated notions of outcomes are encountered in modern theory. For example, in communication or control theory, a communication system experiences only one signal stream in its life. But a communication system is not designed for a single signal stream. It is designed for one of an infinite set of possible signals. The likelihood of encountering a certain kind of signal is important in the design. Such signals constitute a subset of the larger set of all possible signals. These considerations show that our probability model must deal with • A trial which results in (selects) an outcome from a set of conceptually possible outcomes. The trial is not successfully completed until one of the outcomes is realized. • Associated with each outcome is a certain characteristic (or combination of characteristics) pertinent to the problem at hand. In polling for political opinions, it is a person who is selected. That person has many features and characteristics (race, age, gender, occupation, religious preference, preferences for food, etc.). But the primary feature, which characterizes the outcome, is the political opinion on the question asked. Of course, some of the other features may be of interest for analysis of the poll. Inherent in informal thought, as well as in precise analysis, is the notion of an event to which a probability may be assigned as a measure of the likelihood the event will occur on any trial. A successful mathematical model must formulate these notions with precision.
An event is identified in terms of the characteristic of the outcome observed. The event “a favorable response” to a polling question occurs if the outcome observed has that characteristic; i.e., iff (if and only if) the respondent replies in the affirmative. A hand of five cards is drawn. The event “one or more aces” occurs iff the hand actually drawn has at least one ace. If that same hand has two cards of the suit of clubs, then the event “two clubs” has occurred. These considerations lead to the following definition. Definition. The event determined by some characteristic of the possible outcomes is the set of those outcomes having this characteristic. The event occurs iff the outcome of the trial is a member of that set (i.e., has the characteristic determining the event). • The event of throwing a “seven” with a pair of dice (which we call the event SEVEN) consists of the set of those possible outcomes with a total of seven spots turned up. The event SEVEN occurs iff the outcome is one of those combinations with a total of seven spots (i.e., belongs to the event SEVEN). This could be represented as follows. Suppose the two dice are distinguished (say by color) and a picture is taken of each of the thirty six possible combinations. On the back of each picture, write the number of spots. Now the event SEVEN consists of the set of all those pictures with seven on the back. Throwing the dice is equivalent to selecting randomly one of the thirty six pictures. The event SEVEN occurs iff the picture selected is one of the set of those pictures with seven on the back. • Observing for a very long (theoretically infinite) time the signal passing through a communication channel is equivalent to selecting one of the conceptually possible signals. Now such signals have many characteristics: the maximum peak value, the frequency spectrum, the degree of differentiability, the average value over a given time period, etc. If the signal has a peak absolute value less than ten volts, a frequency spectrum essentially limited from 60 hertz to 10,000 hertz, with peak rate of change 10,000 volts per second, then it is one of the set of signals with those characteristics. The event "the signal has these characteristics" has occurred. This set (event) consists of an uncountable infinity of such signals. One of the advantages of this formulation of an event as a subset of the basic set of possible outcomes is that we can use elementary set theory as an aid to formulation. And tools, such as Venn diagrams and indicator functions for studying event combinations, provide powerful aids to establishing and visualizing relationships between events. We formalize these ideas as follows: • Let $\Omega$ be the set of all possible outcomes of the basic trial or experiment. We call this the basic space or the sure event, since if the trial is carried out successfully the outcome will be in $\Omega$; hence, the event $\Omega$ is sure to occur on any trial. We must specify unambiguously what outcomes are “possible.” In flipping a coin, the only accepted outcomes are “heads” and “tails.” Should the coin stand on its edge, say by leaning against a wall, we would ordinarily consider that to be the result of an improper trial. • As we note above, each outcome may have several characteristics which are the basis for describing events. Suppose we are drawing a single card from an ordinary deck of playing cards.
Each card is characterized by a “face value” (two through ten, jack, queen, king, ace) and a “suit” (clubs, hearts, diamonds, spades). An ace is drawn (the event ACE occurs) iff the outcome (card) belongs to the set (event) of four cards with ace as face value. A heart is drawn iff the card belongs to the set of thirteen cards with heart as suit. Now it may be desirable to specify events which involve various logical combinations of the characteristics. Thus, we may be interested in the event the face value is jack or king and the suit is heart or spade. The set for jack or king is represented by the union $J \cup K$ and the set for heart or spade is the union $H \cup S$. The occurrence of both conditions means the outcome is in the intersection (common part) designated by $\cap$. Thus the event referred to is $E = (J \cup K) \cap (H \cup S)$ The notation of set theory thus makes possible a precise formulation of the event $E$. • Sometimes we are interested in the situation in which the outcome does not have one of the characteristics. Thus the set of cards which does not have suit heart is the set of all those outcomes not in event $H$. In set theory, this is the complementary set (event) $H^c$. • Events are mutually exclusive iff not more than one can occur on any trial. This is the condition that the sets representing the events are disjoint (i.e., have no members in common). • The notion of the impossible event is useful. The impossible event is, in set terminology, the empty set $\emptyset$. Event $\emptyset$ cannot occur, since it has no members (contains no outcomes). One use of $\emptyset$ is to provide a simple way of indicating that two sets are mutually exclusive. To say $AB = \emptyset$ (here we use the alternate $AB$ for $A \cap B$) is to assert that events $A$ and $B$ have no outcome in common, hence cannot both occur on any given trial. • The language and notation of sets provide a precise language and notation for events and their combinations. We collect below some useful facts about logical (often called Boolean) combinations of events (as sets). The notion of Boolean combinations may be applied to arbitrary classes of sets. For this reason, it is sometimes useful to use an index set to designate membership. We say the index set $J$ is countable if it is finite or countably infinite; otherwise it is uncountable. In the following it may be arbitrary. $\{A_i : i \in J\}$ is the class of sets $A_i$, one for each index $i$ in the index set $J$. For example, if $J = \{1, 2, 3\}$ then $\{A_i : i \in J\}$ is the class $\{A_1, A_2, A_3\}$, and $\bigcup_{i \in J} A_i = A_1 \cup A_2 \cup A_3$, $\bigcap_{i \in J} A_i = A_1 \cap A_2 \cap A_3$. If $J = \{1, 2, \cdot\cdot\cdot\}$ then $\{A_i: i \in J\}$ is the sequence $\{A_i: 1 \le i\}$, and $\bigcup_{i \in J} A_i = \bigcup_{i = 1}^{\infty} A_i$, $\bigcap_{i \in J} A_i = \bigcap_{i = 1}^{\infty} A_i$ If event E is the union of a class of events, then event E occurs iff at least one event in the class occurs. If F is the intersection of a class of events, then event F occurs iff all events in the class occur on the trial. The role of disjoint unions is so important in probability that it is useful to have a symbol indicating the union of a disjoint class. We use the big V to indicate that the sets combined in the union are disjoint. Thus, for example, we write $A = \bigvee_{i = 1}^{n} A_i$ to signify $A = \bigcup_{i = 1}^{n} A_i$ with the proviso that the $A_i$ form a disjoint class Events derived from a class Consider the class $\{E_1, E_2, E_3\}$ of events.
Let $A_k$ be the event that exactly $k$ occur on a trial and $B_k$ be the event that $k$ or more occur on a trial. Then $A_0 = E_1^c E_2^c E_3^c$, $A_1 = E_1 E_2^c E_3^c \bigvee E_1^c E_2 E_3^c \bigvee E_1^c E_2^c E_3$, $A_2 = E_1 E_2 E_3^c \bigvee E_1 E_2^c E_3 \bigvee E_1^c E_2 E_3$, $A_3 = E_1 E_2 E_3$ The unions are disjoint since each pair of terms has $E_i$ in one and $E_i^c$ in the other, for at least one $i$. Now the $B_k$ can be expressed in terms of the $A_k$. For example, $B_2 = A_2 \bigvee A_3$ The union in this expression for $B_2$ is disjoint since we cannot have exactly two of the $E_i$ occur and exactly three of them occur on the same trial. We may express $B_2$ directly in terms of the $E_i$ as follows: $B_2 = E_1 E_2 \cup E_1 E_3 \cup E_2 E_3$ Here the union is not disjoint, in general. However, if one pair, say $\{E_1, E_3\}$, is disjoint, then $E_1 E_3 = \emptyset$ and the pair $\{E_1 E_2, E_2 E_3\}$ is disjoint (draw a Venn diagram). Suppose $C$ is the event the first two occur or the last two occur but no other combination. Then $C = E_1 E_2 E_3^c \bigvee E_1^c E_2 E_3$ Let $D$ be the event that one or three of the events occur, $D = A_1 \bigvee A_3 = E_1 E_2^c E_3^c \bigvee E_1^c E_2 E_3^c \bigvee E_1^c E_2^c E_3 \bigvee E_1 E_2 E_3$ The important patterns in set theory known as DeMorgan's rules are useful in the handling of events. For an arbitrary class $\{A_i: i \in J\}$ of events, $[\bigcup_{i \in J} A_i]^c = \bigcap_{i \in J} A_i^c$ and $[\bigcap_{i \in J} A_i]^c = \bigcup_{i \in J} A_i^c$ An outcome is not in the union (i.e., not in at least one) of the $A_i$ iff it fails to be in all $A_i$, and it is not in the intersection (i.e. not in all) iff it fails to be in at least one of the $A_i$. continuation of example Express the event of no more than one occurrence of the events in $\{E_1, E_2, E_3\}$ as $B_2^c$. $B_2^c = [E_1 E_2 \cup E_1 E_3 \cup E_2 E_3]^c = (E_1^c \cup E_2^c) (E_1^c \cup E_3^c) (E_2^c \cup E_3^c) = E_1^c E_2^c \cup E_1^c E_3^c \cup E_2^c E_3^c$ The last expression shows that not more than one of the $E_i$ occurs iff at least two of them fail to occur.
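Identities like the last one can also be verified mechanically by treating each event as a 0/1 indicator of occurrence and enumerating all $2^3$ patterns. The short Python sketch below is only an illustration (the language choice is an assumption); it checks that "not two or more of the $E_i$ occur" coincides with "at least two of the $E_i$ fail to occur."

```python
from itertools import product

# Enumerate every occurrence pattern of the three events E1, E2, E3.
for e1, e2, e3 in product((False, True), repeat=3):
    b2 = (e1 and e2) or (e1 and e3) or (e2 and e3)      # B_2: two or more occur
    at_least_two_fail = ((not e1 and not e2) or
                         (not e1 and not e3) or
                         (not e2 and not e3))
    assert (not b2) == at_least_two_fail                 # B_2^c matches the derived expression

print("B_2^c = E_1^c E_2^c U E_1^c E_3^c U E_2^c E_3^c holds in all 8 cases")
```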
textbooks/stats/Probability_Theory/Applied_Probability_(Pfeiffer)/01%3A_Probability_Systems/1.01%3A_Likelihood.txt
Probability measures In the module "Likelihood" we introduce the notion of a basic space $\Omega$ of all possible outcomes of a trial or experiment, events as subsets of the basic space determined by appropriate characteristics of the outcomes, and logical or Boolean combinations of the events (unions, intersections, and complements) corresponding to logical combinations of the defining characteristics. Occurrence or nonoccurrence of an event is determined by characteristics or attributes of the outcome observed on a trial. Performing the trial is visualized as selecting an outcome from the basic set. An event occurs whenever the selected outcome is a member of the subset representing the event. As described so far, the selection process could be quite deliberate, with a prescribed outcome, or it could involve the uncertainties associated with “chance.” Probability enters the picture only in the latter situation. Before the trial is performed, there is uncertainty about which of these latent possibilities will be realized. Probability traditionally is a number assigned to an event indicating the likelihood of the occurrence of that event on any trial. We begin by looking at the classical model which first successfully formulated probability ideas in mathematical form. We use modern terminology and notation to describe it. Classical probability 1. The basic space $\Omega$ consists of a finite number $N$ of possible outcomes. -There are thirty six possible outcomes of throwing two dice. -There are $C(52,5) = \dfrac{52!}{5! 47!} = 2598960$ different hands of five cards (order not important). -There are $2^5 = 32$ results (sequences of heads or tails) of flipping five coins. 2. Each possible outcome is assigned a probability $1/N$ 3. If event (subset) $A$ has $N_A$ elements, then the probability assigned event $A$ is $P(A) = N_A /N$ (i.e., the fraction favorable to $A$) With this definition of probability, each event $A$ is assigned a unique probability, which may be determined by counting $N_A$, the number of elements in $A$ (in the classical language, the number of outcomes "favorable" to the event) and $N$ the total number of possible outcomes in the sure event $\Omega$. Probabilities for hands of cards Consider the experiment of drawing a hand of five cards from an ordinary deck of 52 playing cards. The number of outcomes, as noted above, is $N = C(52,5) = 2598960$. What is the probability of drawing a hand with exactly two aces? What is the probability of drawing a hand with two or more aces? What is the probability of not more than one ace? Solution Let $A$ be the event of exactly two aces, $B$ be the event of exactly three aces, and $C$ be the event of exactly four aces. In the first problem, we must count the number $N_A$ of ways of drawing a hand with two aces. We select two aces from the four, and select the other three cards from the 48 non aces. Thus $N_A = C(4, 2) C(48,3) = 103776$, so that $P(A) = \dfrac{N_A}{N} = \dfrac{103776}{2598960} \approx 0.0399$ There are two or more aces iff there are exactly two or exactly three or exactly four. Thus the event $D$ of two or more is $D = A \bigvee B \bigvee C$. Since $A, B, C$ are mutually exclusive, $N_D = N_A + N_B + N_C = C(4, 2) C(48, 3) + C(4, 3) C(48, 2) + C(4, 4) C(48, 1) = 103776 + 4512 + 48 = 108336$ so that $P(D) \approx 0.0417$. There is one ace or none iff there are not two or more aces. We thus want $P(D^c)$.
Now the number in $D^c$ is the number not in $D$, which is $N - N_D$, so that $P(D^c) = \dfrac{N - N_D}{N} = 1 - \dfrac{N_D}{N} = 1 - P(D) = 0.9583$ This example illustrates several important properties of the classical probability. $P(A) = N_A / N$ is a nonnegative quantity. $P(\Omega) = N/N = 1$ If $A, B, C$ are mutually exclusive, then the number in the disjoint union is the sum of the numbers in the individual events, so that $P(A \bigvee B \bigvee C) = P(A) + P(B) + P(C)$ Several other elementary properties of the classical probability may be identified. It turns out that they can be derived from these three. Although the classical model is highly useful, and an extensive theory has been developed, it is not really satisfactory for many applications (the communications problem, for example). We seek a more general model which includes classical probability as a special case and is thus an extension of it. We adopt the Kolmogorov model (introduced by the Russian mathematician A. N. Kolmogorov) which captures the essential ideas in a remarkably successful way. Of course, no model is ever completely successful. Reality always seems to escape our logical nets. The Kolmogorov model is grounded in abstract measure theory. A full explication requires a level of mathematical sophistication inappropriate for a treatment such as this. But most of the concepts and many of the results are elementary and easily grasped. And many technical mathematical considerations are not important for applications at the level of this introductory treatment and may be disregarded. We borrow from measure theory a few key facts which are either very plausible or which can be understood at a practical level. This enables us to utilize a very powerful mathematical system for representing practical problems in a manner that leads to both insight and useful strategies of solution. Our approach is to begin with the notion of events as sets introduced above, then to introduce probability as a number assigned to events subject to certain conditions which become definitive properties. Gradually we introduce and utilize additional concepts to build progressively a powerful and useful discipline. The fundamental properties needed are just those illustrated in the example above for the classical case. Definition A probability system consists of a basic set $\Omega$ of elementary outcomes of a trial or experiment, a class of events as subsets of the basic space, and a probability measure $P(\cdot)$ which assigns values to the events in accordance with the following rules (P1): For any event $A$, the probability $P(A) \ge 0$. (P2): The probability of the sure event $P(\Omega) = 1$. (P3): Countable additivity. If $\{A_i : i \in J\}$ is a mutually exclusive, countable class of events, then the probability of the disjoint union is the sum of the individual probabilities. The necessity of the mutual exclusiveness (disjointedness) is easy to see: if the sets were not disjoint, probability would be counted more than once in the sum. A probability, as defined, is abstract—simply a number assigned to each set representing an event. But we can give it an interpretation which helps to visualize the various patterns and relationships encountered. We may think of probability as mass assigned to an event. The total unit mass is assigned to the basic set $\Omega$. The additivity property for disjoint sets makes the mass interpretation consistent. We can use this interpretation as a precise representation.
Repeatedly we refer to the probability mass assigned to a given set. The mass is proportional to the weight, so sometimes we speak informally of the weight rather than the mass. Now a mass assignment with three properties does not seem a very promising beginning. But we soon expand this rudimentary list of properties. We use the mass interpretation to help visualize the properties, but are primarily concerned to interpret them in terms of likelihoods. (P4): $P(A^c) = 1 - P(A)$. This follows from additivity and the fact that $1 = P(\Omega) = P(A \bigvee A^c) = P(A) + P(A^c)$ (P5): $P(\emptyset) = 0$. The empty set represents an impossible event. It has no members, hence cannot occur. It seems reasonable that it should be assigned zero probability (mass). Since $\emptyset = \Omega^c$, this follows logically from (P4) and (P2). Figure 1.2.1: Partitions of the union $A \cup B$ (P6): If $A \subset B$, then $P(A) \le P(B)$. From the mass point of view, every point in $A$ is also in $B$, so that $B$ must have at least as much mass as $A$. Now the relationship $A \subset B$ means that if $A$ occurs, $B$ must also. Hence $B$ is at least as likely to occur as $A$. From a purely formal point of view, we have $B = A \bigvee A^c B$ so that $P(B) = P(A) + P(A^c B) \ge P(A)$ since $P(A^c B) \ge 0$ (P7): $P(A \cup B) = P(A) + P(A^c B) = P(B) + P(AB^c) = P(AB^c) + P(AB) + P(A^cB)$ $= P(A) + P(B) - P(AB)$ The first three expressions follow from additivity and partitioning of $A \cup B$ as follows (see Figure 1.2.1). $A \cup B = A \bigvee A^c B = B \bigvee AB^c = AB^c \bigvee AB \bigvee A^c B$ If we add the first two expressions and subtract the third, we get the last expression. In terms of probability mass, the first expression says the probability in $A \cup B$ is the probability mass in $A$ plus the additional probability mass in the part of $B$ which is not in $A$. A similar interpretation holds for the second expression. The third is the probability in the common part plus the extra in $A$ and the extra in $B$. If we add the mass in $A$ and $B$ we have counted the mass in the common part twice. The last expression shows that we correct this by taking away the extra common mass. (P8): If $\{B_i : i \in J\}$ is a countable, disjoint class and $A$ is contained in the union, then $A = \bigvee_{i \in J} AB_i$ so that $P(A) = \sum_{i \in J} P(AB_i)$ (P9): Subadditivity. If $A = \bigcup_{i = 1}^{\infty} A_i$, then $P(A) \le \sum_{i = 1}^{\infty} P(A_i)$. This follows from countable additivity, property (P6), and the fact (Partitions) $A = \bigcup_{i = 1}^{\infty} A_i = \bigvee_{i = 1}^{\infty} B_i$, where $B_i = A_i A_1^c A_2^c \cdot\cdot\cdot A_{i - 1}^c \subset A_i$ This includes as a special case the union of a finite number of events. Some of these properties, such as (P4), (P5), and (P6), are so elementary that it seems they should be included in the defining statement. This would not be incorrect, but would be inefficient. If we have an assignment of numbers to the events, we need only establish (P1), (P2), and (P3) to be able to assert that the assignment constitutes a probability measure. And the other properties follow as logical consequences. Flexibility at a price In moving beyond the classical model, we have gained great flexibility and adaptability of the model. It may be used for systems in which the number of outcomes is infinite (countably or uncountably). It does not require a uniform distribution of the probability mass among the outcomes.
For example, the dice problem may be handled directly by assigning the appropriate probabilities to the various numbers of total spots, 2 through 12. As we see in the treatment of conditional probability, we make new probability assignments (i.e., introduce new probability measures) when partial information about the outcome is obtained. But this freedom is obtained at a price. In the classical case, the probability value to be assigned an event is clearly defined (although it may be very difficult to perform the required counting). In the general case, we must resort to experience, structure of the system studied, experiment, or statistical studies to assign probabilities. The existence of uncertainty due to “chance” or “randomness” does not necessarily imply that the act of performing the trial is haphazard. The trial may be quite carefully planned; the contingency may be the result of factors beyond the control or knowledge of the experimenter. The mechanism of chance (i.e., the source of the uncertainty) may depend upon the nature of the actual process or system observed. For example, in taking an hourly temperature profile on a given day at a weather station, the principal variations are not due to experimental error but rather to unknown factors which converge to provide the specific weather pattern experienced. In the case of an uncorrected digital transmission error, the cause of uncertainty lies in the intricacies of the correction mechanisms and the perturbations produced by a very complex environment. A patient at a clinic may be self selected. Before his or her appearance and the result of a test, the physician may not know which patient with which condition will appear. In each case, from the point of view of the experimenter, the cause is simply attributed to “chance.” Whether one sees this as an “act of the gods” or simply the result of a configuration of physical or behavioral causes too complex to analyze, the situation is one of uncertainty, before the trial, about which outcome will present itself. If there were complete uncertainty, the situation would be chaotic. But this is not usually the case. While there is an extremely large number of possible hourly temperature profiles, a substantial subset of these has very little likelihood of occurring. For example, profiles in which successive hourly temperatures alternate between very high then very low values throughout the day constitute an unlikely subset (event). One normally expects trends in temperatures over the 24 hour period. Although a traffic engineer does not know exactly how many vehicles will be observed in a given time period, experience provides some idea what range of values to expect. While there is uncertainty about which patient, with which symptoms, will appear at a clinic, a physician certainly knows approximately what fraction of the clinic's patients have the disease in question. In a game of chance, analyzed into “equally likely” outcomes, the assumption of equal likelihood is based on knowledge of symmetries and structural regularities in the mechanism by which the game is carried out. And the number of outcomes associated with a given event is known, or may be determined. In each case, there is some basis in statistical data on past experience or knowledge of structure, regularity, and symmetry in the system under observation which makes it possible to assign likelihoods to the occurrence of various events. 
It is this ability to assign likelihoods to the various events which characterizes applied probability. However determined, probability is a number assigned to events to indicate their likelihood of occurrence. The assignments must be consistent with the defining properties (P1), (P2), (P3) along with derived properties (P4) through (P9) (plus others which may also be derived from these). Since the probabilities are not “built in,” as in the classical case, a prime role of probability theory is to derive other probabilities from a set of given probabilities.
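As a concrete illustration of deriving probabilities from given ones, the card-hand example above can be reproduced in a few lines of code. This is only a sketch (Python is an assumption; math.comb computes the binomial coefficient $C(n, k)$); it uses additivity over a disjoint union and the complement rule (P4).

```python
from math import comb

N = comb(52, 5)                                  # number of five-card hands

p_two   = comb(4, 2) * comb(48, 3) / N           # exactly two aces
p_three = comb(4, 3) * comb(48, 2) / N           # exactly three aces
p_four  = comb(4, 4) * comb(48, 1) / N           # exactly four aces

p_two_or_more = p_two + p_three + p_four         # additivity over a disjoint union (P3)
p_at_most_one = 1 - p_two_or_more                # complement rule (P4)

print(round(p_two, 4), round(p_two_or_more, 4), round(p_at_most_one, 4))
# Expected output: 0.0399 0.0417 0.9583
```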
textbooks/stats/Probability_Theory/Applied_Probability_(Pfeiffer)/01%3A_Probability_Systems/1.02%3A_Probability_Systems.txt
What is Probability? The formal probability system is a model whose usefulness can only be established by examining its structure and determining whether patterns of uncertainty and likelihood in any practical situation can be represented adequately. With the exception of the sure event and the impossible event, the model does not tell us how to assign probability to any given event. The formal system is consistent with many probability assignments, just as the notion of mass is consistent with many different mass assignments to sets in the basic space. The defining properties (P1), (P2), (P3) and derived properties provide consistency rules for making probability assignments. One cannot assign negative probabilities or probabilities greater than one. The sure event is assigned probability one. If two or more events are mutually exclusive, the total probability assigned to the union must equal the sum of the probabilities of the separate events. Any assignment of probability consistent with these conditions is allowed. One may not know the probability assignment to every event. Just as the defining conditions put constraints on allowable probability assignments, they also provide important structure. A typical applied problem provides the probabilities of members of a class of events (perhaps only a few) from which to determine the probabilities of other events of interest. We consider an important class of such problems in the next chapter. There is a variety of points of view as to how probability should be interpreted. These impact the manner in which probabilities are assigned (or assumed). One important dichotomy exists among practitioners. • One group believes probability is objective in the sense that it is something inherent in the nature of things. It is to be discovered, if possible, by analysis and experiment. Whether we can determine it or not, “it is there.” • Another group insists that probability is a condition of the mind of the person making the probability assessment. From this point of view, the laws of probability simply impose rational consistency upon the way one assigns probabilities to events. Various attempts have been made to find objective ways to measure the strength of one's belief or degree of certainty that an event will occur. The probability $P(A)$ expresses the degree of certainty one feels that event A will occur. One approach to characterizing an individual's degree of certainty is to equate his assessment of $P(A)$ with the amount $a$ he is willing to pay to play a game which returns one unit of money if A occurs, for a gain of $(1 - a)$, and returns zero if A does not occur, for a gain of $-a$. Behind this formulation is the notion of a fair game, in which the “expected” or “average” gain is zero. The early work on probability began with a study of relative frequencies of occurrence of an event under repeated but independent trials. This idea is so imbedded in much intuitive thought about probability that some probabilists have insisted that it must be built into the definition of probability. This approach has not been entirely successful mathematically and has not attracted much of a following among either theoretical or applied probabilists. In the model we adopt, there is a fundamental limit theorem, known as Borel's theorem, which may be interpreted “if a trial is performed a large number of times in an independent manner, the fraction of times that event $A$ occurs approaches as a limit the value $P(A)$.”
Establishing this result (which we do not do in this treatment) provides a formal validation of the intuitive notion that lay behind the early attempts to formulate probabilities. Inveterate gamblers had noted long-run statistical regularities, and sought explanations from their mathematically gifted friends. From this point of view, probability is meaningful only in repeatable situations. Those who hold this view usually assume an objective view of probability. It is a number determined by the nature of reality, to be discovered by repeated experiment. There are many applications of probability in which the relative frequency point of view is not feasible. Examples include predictions of the weather, the outcome of a game or a horse race, the performance of an individual on a particular job, the success of a newly designed computer. These are unique, nonrepeatable trials. As the popular expression has it, “You only go around once.” Sometimes, probabilities in these situations may be quite subjective. As a matter of fact, those who take a subjective view tend to think in terms of such problems, whereas those who take an objective view usually emphasize the frequency interpretation. Subjective probability and a football game The probability that one's favorite football team will win the next Superbowl Game may well be only a subjective probability of the bettor. This is certainly not a probability that can be determined by a large number of repeated trials. The game is only played once. However, the subjective assessment of probabilities may be based on intimate knowledge of relative strengths and weaknesses of the teams involved, as well as factors such as weather, injuries, and experience. There may be a considerable objective basis for the subjective assignment of probability. In fact, there is often a hidden “frequentist” element in the subjective evaluation. There is an assessment (perhaps unrealized) that in similar situations the frequencies tend to coincide with the value subjectively assigned. The probability of rain Newscasts often report that the probability of rain is 20 percent or 60 percent or some other figure. There are several difficulties here. • To use the formal mathematical model, there must be precision in determining an event. An event either occurs or it does not. How do we determine whether it has rained or not? Must there be a measurable amount? Where must this rain fall to be counted? During what time period? Even if there is agreement on the area, the amount, and the time period, there remains ambiguity: one cannot say with logical certainty the event did occur or it did not occur. Nevertheless, in this and other similar situations, use of the concept of an event may be helpful even if the description is not definitive. There is usually enough practical agreement for the concept to be useful. • What does a 30 percent probability of rain mean? Does it mean that if the prediction is correct, 30 percent of the area indicated will get rain (in an agreed amount) during the specified time period? Or does it mean that on 30 percent of the occasions on which such a prediction is made there will be significant rainfall in the area during the specified time period? Again, the latter alternative may well hide a frequency interpretation. Does the statement mean that it rains 30 percent of the times when conditions are similar to current conditions? Regardless of the interpretation, there is some ambiguity about the event and whether it has occurred.
And there is some difficulty with knowing how to interpret the probability figure. While the precise meaning of a 30 percent probability of rain may be difficult to determine, it is generally useful to know whether the conditions lead to a 20 percent or a 30 percent or a 40 percent probability assignment. And there is no doubt that as weather forecasting technology and methodology continue to improve the weather probability assessments will become increasingly useful. Another common type of probability situation involves determining the distribution of some characteristic over a population—usually by a survey. These data are used to answer the question: What is the probability (likelihood) that a member of the population, chosen “at random” (i.e., on an equally likely basis) will have a certain characteristic? Empirical probability based on survey data A survey asks two questions of 300 students: Do you live on campus? Are you satisfied with the recreational facilities in the student center? Answers to the latter question were categorized “reasonably satisfied,” “unsatisfied,” or “no definite opinion.” Let $C$ be the event “on campus;” $O$ be the event “off campus;” $S$ be the event “reasonably satisfied;” $U$ be the event ”unsatisfied;” and $N$ be the event “no definite opinion.” Data are shown in the following table.
Survey Data
  S    U    N
C 127  31   42
O 46   43   11
If an individual is selected on an equally likely basis from this group of 300, the probability of any of the events is taken to be the relative frequency of respondents in each category corresponding to an event. There are 200 on campus members in the population, so $P(C) = 200/300$ and $P(O) = 100/300$. The probability that a student selected is on campus and satisfied is taken to be $P(CS) = 127/300$. The probability a student is either on campus and satisfied or off campus and not satisfied is $P(CS \bigvee OU) = P(CS) + P(OU) = 127/300 + 43/300 = 170/300$ If there is reason to believe that the population sampled is representative of the entire student body, then the same probabilities would be applied to any student selected at random from the entire student body. It is fortunate that we do not have to declare a single position to be the “correct” viewpoint and interpretation. The formal model is consistent with any of the views set forth. We are free in any situation to make the interpretation most meaningful and natural to the problem at hand. It is not necessary to fit all problems into one conceptual mold; nor is it necessary to change the mathematical model each time a different point of view seems appropriate. Probability and odds Often we find it convenient to work with a ratio of probabilities. If $A$ and $B$ are events with positive probability, the odds favoring $A$ over $B$ is the probability ratio $P(A)/P(B)$. If not otherwise specified, $B$ is taken to be $A^c$ and we speak of the odds favoring $A$: $O(A) = \dfrac{P(A)}{P(A^c)} = \dfrac{P(A)}{1 - P(A)}$ This expression may be solved algebraically to determine the probability from the odds $P(A) = \dfrac{O(A)}{1 + O(A)}$ In particular, if $O(A) = a/b$ then $P(A) = \dfrac{a/b}{1+a/b} = \dfrac{a}{a+b}$. For example, if $P(A) = 0.7$, then $O(A) = 0.7/0.3 = 7/3$. If the odds favoring $A$ are 5/3, then $P(A) = 5/(5 + 3) = 5/8$. Partitions and Boolean combinations of events The countable additivity property (P3) places a premium on appropriate partitioning of events.
Definition A partition is a mutually exclusive class $\{A_i : i \in J\}$ such that $\Omega = \bigvee_{i \in J} A_i$ A partition of event $A$ is a mutually exclusive class $\{A_i : i \in J\}$ such that $A = \bigvee_{i \in J} A_i$ Remarks. • A partition is a mutually exclusive class of events such that one (and only one) must occur on each trial. • A partition of event $A$ is a mutually exclusive class of events such that $A$ occurs iff one (and only one) of the $A_i$ occurs. • A partition (no qualifier) is taken to be a partition of the sure event $\Omega$. • If class $\{B_i : i \in J\}$ is mutually exclusive and $A \subset B = \bigvee_{i \in J} B_i$, then the class $\{AB_i : i \in J\}$ is a partition of $A$ and $A = \bigvee_{i \in J} AB_i$. We may begin with a sequence $\{A_i: 1 \le i\}$ and determine a mutually exclusive (disjoint) sequence $\{B_i: 1 \le i\}$ as follows: $B_1 = A_1$, and for any $i > 1$, $B_i = A_i A_{1}^{c} A_{2}^{c} \cdot\cdot\cdot A_{i - 1}^{c}$ Thus each $B_i$ is the set of those elements of $A_i$ not in any of the previous members of the sequence. This representation is used to show that subadditivity (P9) follows from countable additivity and property (P6). Since each $B_i \subset A_i$, by (P6) $P(B_i) \le P(A_i)$. Now $P(\bigcup_{i = 1}^{\infty} A_i) = P(\bigvee_{i = 1}^{\infty} B_i) = \sum_{i = 1}^{\infty} P(B_i) \le \sum_{i = 1}^{\infty} P(A_i)$ The representation of a union as a disjoint union points to an important strategy in the solution of probability problems. If an event can be expressed as a countable disjoint union of events, each of whose probabilities is known, then the probability of the combination is the sum of the individual probabilities. In the module on Partitions and Minterms, we show that any Boolean combination of a finite class of events can be expressed as a disjoint union in a manner that often facilitates systematic determination of the probabilities. The indicator function One of the most useful tools for dealing with set combinations (and hence with event combinations) is the indicator function $I_E$ for a set $E \subset \Omega$. It is defined very simply as follows: $I_E (\omega) = \begin{cases} 1 & \text{for } \omega \in E \\ 0 & \text{for } \omega \in E^c \end{cases}$ Remark. Indicator functions may be defined on any domain. We have occasion in various cases to define them on the real line and on higher dimensional Euclidean spaces. For example, if $M$ is the interval [$a,b$] on the real line then $I_M(t) = 1$ for each $t$ in the interval (and is zero otherwise). Thus we have a step function with unit value over the interval $M$. In the abstract basic space $\Omega$ we cannot draw a graph so easily. However, with the representation of sets on a Venn diagram, we can give a schematic representation, as in Figure 1.3.1. Figure 1.3.1. Representation of the indicator function $I_E$ for event $E$. Much of the usefulness of the indicator function comes from the following properties. (IF1): $I_A \le I_B$ iff $A \subset B$. If $I_A \le I_B$, then $\omega \in A$ implies $I_A (\omega) = I_B (\omega) = 1$, so $\omega \in B$. If $A \subset B$, then $I_A (\omega) = 1$ implies $\omega \in A$ implies $\omega \in B$ implies $I_B (\omega) = 1$. (IF2): $I_A = I_B$ iff $A = B$ $A = B$ iff both $A \subset B$ and $B \subset A$ iff $I_A \le I_B$ and $I_B \le I_A$ iff $I_A = I_B$ (IF3): $I_{A^c} = 1 - I_A$ This follows from the fact $I_{A^c} (\omega) = 1$ iff $I_A (\omega) = 0$.
(IF4): $I_{AB} = I_A I_B = \text{min}\ \{I_A, I_B\}$ (extends to any class) An element $\omega$ belongs to the intersection iff it belongs to all iff the indicator function for each event is one iff the product of the indicator functions is one. (IF5): $I_{A \cup B} = I_A + I_B - I_A I_B = \text{max}\ \{I_A, I_B\}$ (the maximum rule extends to any class) The maximum rule follows from the fact that $\omega$ is in the union iff it is in any one or more of the events in the union iff any one or more of the individual indicator functions has value one iff the maximum is one. The sum rule for two events is established by DeMorgan's rule and properties (IF2), (IF3), and (IF4). $I_{A \cup B} = 1 - I_{A^c B^c} = 1 - [1 - I_A][1 - I_B] = I_A + I_B - I_A I_B$ (IF6): If the pair $\{A, B\}$ is disjoint, $I_{A \bigvee B} = I_A + I_B$ (extends to any disjoint class) The following example illustrates the use of indicator functions in establishing relationships between set combinations. Other uses and techniques are established in the module on Partitions and Minterms. Indicator functions and set combinations Suppose $\{A_i : 1 \le i \le n\}$ is a partition. If $B = \bigvee_{i = 1}^{n} A_i C_i$, then $B^c = \bigvee_{i = 1}^{n} A_i C_{i}^{c}$ Proof Utilizing properties of the indicator function established above, we have $I_B = \sum_{i = 1}^{n} I_{A_i} I_{C_i}$ Note that since the $A_i$ form a partition, we have $\sum_{i = 1}^{n} I_{A_i} = 1$, so that the indicator function for the complementary event is $I_{B^c} = 1 - \sum_{i = 1}^{n} I_{A_i} I_{C_i} = \sum_{i = 1}^{n} I_{A_i} - \sum_{i = 1}^{n} I_{A_i} I_{C_i} = \sum_{i = 1}^{n} I_{A_i} [1 - I_{C_i}] = \sum_{i = 1}^{n} I_{A_i} I_{C_{i}^{c}}$ The last sum is the indicator function for $\bigvee_{i = 1}^{n} A_i C_{i}^{c}$ A technical comment on the class of events The class of events plays a central role in the intuitive background, the application, and the formal mathematical structure. Events have been modeled as subsets of the basic space of all possible outcomes of the trial or experiment. In the case of a finite number of outcomes, any subset can be taken as an event. In the general theory, involving infinite possibilities, there are some technical mathematical reasons for limiting the class of subsets to be considered as events. The practical needs are these: 1. If $A$ is an event, its complementary set must also be an event. 2. If $\{A_i : i \in J\}$ is a finite or countable class of events, the union and the intersection of members of the class need to be events. A simple argument based on DeMorgan's rules shows that if the class contains complements of all its sets and countable unions, then it contains countable intersections. Likewise, if it contains complements of all its sets and countable intersections, then it contains countable unions. A class of sets closed under complements and countable unions is known as a sigma algebra of sets. In a formal, measure-theoretic treatment, a basic assumption is that the class of events is a sigma algebra and the probability measure assigns probabilities to members of that class. Such a class is so general that it takes very sophisticated arguments to establish the fact that such a class does not contain all subsets. But precisely because the class is so general and inclusive, in ordinary applications we need not be concerned about which sets are permissible as events. A primary task in formulating a probability problem is identifying the appropriate events and the relationships between them.
The theoretical treatment shows that we may work with great freedom in forming events, with the assurance that in most applications a set so produced is a mathematically valid event. The so called measurability question only comes into play in dealing with random processes with continuous parameters. Even there, under reasonable assumptions, the sets produced will be events.
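The indicator-function rules (IF3)-(IF5) above lend themselves to a quick numerical check: represent each event as a 0/1 array over a small finite basic space and compare the combinations elementwise. The sketch below assumes Python with numpy; the particular events chosen are arbitrary.

```python
import numpy as np

omega = np.arange(10)                     # basic space: integers 0 through 9
I_A = (omega < 6).astype(int)             # A: outcomes less than 6
I_B = (omega % 2 == 0).astype(int)        # B: even outcomes

I_Ac  = 1 - I_A                           # (IF3): complement
I_AB  = np.minimum(I_A, I_B)              # (IF4): product rule equals the minimum
I_AuB = np.maximum(I_A, I_B)              # (IF5): maximum rule for the union

assert np.array_equal(I_AB, I_A * I_B)
assert np.array_equal(I_AuB, I_A + I_B - I_A * I_B)   # sum rule for two events
print(I_AuB)    # 1 exactly where the outcome is in A or B (or both)
```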
textbooks/stats/Probability_Theory/Applied_Probability_(Pfeiffer)/01%3A_Probability_Systems/1.03%3A_Interpretations.txt
Exercise $1$ Let $\Omega$ consist of the set of positive integers. Consider the subsets $A = \{\omega: \omega \le 12\}$ $B = \{\omega: \omega < 8\}$ $C = \{\omega: \omega \text{ is even}\}$ $D = \{\omega: \omega \text{ is a multiple of } 3\}$ $E = \{\omega: \omega \text{ is a multiple of } 4\}$ Describe in terms of $A, B, C, D, E$ and their complements the following sets: a. {1, 3, 5, 7} b. {3, 6, 9} c. {8, 10} d. The even integers greater than 12 e. The positive integers which are multiples of six. f. The integers which are even and no greater than 6 or which are odd and greater than 12. Answer $a = BC^c$ $b = DAE^c$ $c = CAB^cD^c$ $d = CA^c$ $e = CD$ $f = BC \bigvee A^cC^c$ Exercise $2$ Let $\Omega$ be the set of integers 0 through 10. Let $A = \{5, 6, 7, 8\}$, $B =$ the odd integers in $\Omega$, and $C =$ the integers in $\Omega$ which are even or less than three. Describe the following sets by listing their elements. a. $AB$ b. $AC$ c. $AB^c \cup C$ d. $ABC^c$ e. $A \cup B^c$ f. $A \cup BC^c$ g. $ABC$ h. $A^c BC^c$ Answer a. $AB = \{5, 7\}$ b. $AC = \{6, 8\}$ c. $AB^c \cup C = C$ d. $ABC^c = AB$ e. $A \cup B^c = \{0, 2, 4, 5, 6, 7, 8, 10\}$ f. $A \cup BC^c = \{3, 5, 6, 7, 8, 9\}$ g. $ABC = \emptyset$ h. $A^c BC^c = \{3, 9\}$ Exercise $3$ Consider fifteen-word messages in English. Let $A =$ the set of such messages which contain the word “bank” and let $B =$ the set of messages which contain the word “bank” and the word “credit.” Which event has the greater probability? Why? Answer $B \subset A$ implies $P(B) \le P(A)$. Exercise $4$ A group of five persons consists of two men and three women. They are selected one-by-one in a random manner. Let $E_i$ be the event a man is selected on the $i$th selection. Write an expression for the event that both men have been selected by the third selection. Answer $A = E_1 E_2 \bigvee E_1 E_2^c E_3 \bigvee E_1^c E_2 E_3$ Exercise $5$ Two persons play a game consecutively until one of them is successful or there are ten unsuccessful plays. Let $E_i$ be the event of a success on the $i$th play of the game. Let $A, B, C$ be the respective events that player one, player two, or neither wins. Write an expression for each of these events in terms of the events $E_i$, $1 \le i \le 10$. Answer $A = E_1 \bigvee E_1^c E_2^c E_3 \bigvee E_1^c E_2^c E_3^c E_4^c E_5 \bigvee E_1^c E_2^c E_3^c E_4^c E_5^c E_6^c E_7 \bigvee E_1^c E_2^c E_3^c E_4^c E_5^c E_6^c E_7^c E_8^c E_9$ $B = E_1^c E_2 \bigvee E_1^c E_2^c E_3^c E_4 \bigvee E_1^c E_2^c E_3^c E_4^c E_5^c E_6 \bigvee E_1^c E_2^c E_3^c E_4^c E_5^c E_6^c E_7^c E_8 \bigvee E_1^c E_2^c E_3^c E_4^c E_5^c E_6^c E_7^c E_8^c E_9^c E_{10}$ $C = \bigcap_{i = 1}^{10} E_i^c$ Exercise $6$ Suppose the game in Exercise 1.4.5 could, in principle, be played an unlimited number of times. Write an expression for the event $D$ that the game will be terminated with a success in a finite number of times. Write an expression for the event $F$ that the game will never terminate. Answer Let $F_0 = \Omega$ and $F_k = \bigcap_{i = 1}^{k} E_i^c$ for $k \ge 1$. Then $D = \bigvee_{n = 1}^{\infty} F_{n - 1} E_n$ and $F = D^c = \bigcap_{i = 1}^{\infty} E_i^c$ Exercise $7$ Find the (classical) probability that among three random digits, with each digit (0 through 9) being equally likely and each triple equally likely: a. All three are alike. b. No two are alike. c. The first digit is 0. d. Exactly two are alike. Answer Each triple has probability $1/10^3 = 1/1000$ a. Ten triples, all alike: $P = 10/1000$. b. $10 \times 9 \times 8$ triples all different: $P = 720/1000$. c.
100 triples with first one zero: $P = 100/1000$ d. $C(3, 2) = 3$ ways to pick two positions alike; 10 ways to pick the common value; 9 ways to pick the other. $P = 270/1000$. Exercise $8$ The classical probability model is based on the assumption of equally likely outcomes. Some care must be shown in analysis to be certain that this assumption is good. A well known example is the following. Two coins are tossed. One of three outcomes is observed: Let $\omega_1$ be the outcome both are “heads,” $\omega_2$ the outcome that both are “tails,” and $\omega_3$ be the outcome that they are different. Is it reasonable to suppose these three outcomes are equally likely? What probabilities would you assign? Answer $P(\{\omega_1\}) = P(\{\omega_2\}) = 1/4$, $P(\{\omega_3\}) = 1/2$ Exercise $9$ A committee of five is chosen from a group of 20 people. What is the probability that a specified member of the group will be on the committee? Answer $C(20, 5)$ committees; $C(19, 4)$ have a designated member. $P = \dfrac{19!}{4! 15!} \cdot \dfrac{5! 15!}{20!} = 5/20 = 1/4$ Exercise $10$ Ten employees of a company drive their cars to the city each day and park randomly in ten spots. What is the (classical) probability that on a given day Jim will be in place three? There are $n!$ equally likely ways to arrange $n$ items (order important). Answer 10! permutations, $1 \times 9!$ permutations with Jim in place 3. $P = 9!/10! = 1/10$. Exercise $11$ An extension of the classical model involves the use of areas. A certain region $L$ (say of land) is taken as a reference. For any subregion $A$, define $P(A) = area(A)/area(L)$. Show that $P(\cdot)$ is a probability measure on the subregions of $L$. Answer Additivity follows from additivity of areas of disjoint regions. Exercise $12$ John thinks the probability the Houston Texans will win next Sunday is 0.3 and the probability the Dallas Cowboys will win is 0.7 (they are not playing each other). He thinks the probability both will win is somewhere in between—say, 0.5. Is that a reasonable assumption? Justify your answer. Answer $P(AB) = 0.5$ is not reasonable. It must be no greater than the minimum of $P(A) = 0.3$ and $P(B) = 0.7$. Exercise $13$ Suppose $P(A) = 0.5$ and $P(B) = 0.3$. What is the largest possible value of $P(AB)$? Using the maximum value of $P(AB)$, determine $P(AB^c)$, $P(A^c B)$, $P(A^c B^c)$ and $P(A \cup B)$. Are these values determined uniquely? Answer The largest possible value is $P(AB) = P(B) = 0.3$. Draw a Venn diagram, or use algebraic expressions $P(AB^c) = P(A) - P(AB) = 0.2$ $P(A^c B) = P(B) - P(AB) = 0$ $P(A^c B^c) = P(A^c) - P(A^c B) = 0.5$ $P(A \cup B) = 0.5$ Exercise $14$ For each of the following probability “assignments”, fill out the table. Which assignments are not permissible? Explain why, in each case.
$P(A)$ $P(B)$ $P(AB)$ $P(A \cup B)$ $P(AB^c)$ $P(A^c B)$ $P(A) + P(B)$
0.3 0.7 0.4
0.2 0.1 0.4
0.3 0.7 0.2
0.3 0.5 0
0.3 0.8 0
Answer
$P(A)$ $P(B)$ $P(AB)$ $P(A \cup B)$ $P(AB^c)$ $P(A^c B)$ $P(A) + P(B)$
0.3 0.7 0.4 0.6 -0.1 0.3 1.0
0.2 0.1 0.4 -0.1 -0.2 -0.3 0.3
0.3 0.7 0.2 0.8 0.1 0.5 1.0
0.3 0.5 0 0.8 0.3 0.5 0.8
0.3 0.8 0 1.1 0.3 0.8 1.1
Only the third and fourth assignments are permissible. Exercise $15$ The class $\{A, B, C\}$ of events is a partition. Event $A$ is twice as likely as $C$ and event $B$ is as likely as the combination $A$ or $C$. Determine the probabilities $P(A)$, $P(B)$, $P(C)$.
Answer

$P(A) + P(B) + P(C) = 1$, $P(A) = 2P(C)$, and $P(B) = P(A) + P(C) = 3P(C)$, which implies

$P(C) = 1/6$, $P(A) = 1/3$, $P(B) = 1/2$

Exercise $16$

Determine the probability $P(A \cup B \cup C)$ in terms of the probabilities of the events $A, B, C$ and their intersections.

Answer

$P(A \cup B \cup C) = P(A \cup B) + P(C) - P(AC \cup BC)$

$= P(A) + P(B) - P(AB) + P(C) - P(AC) - P(BC) + P(ABC)$

Exercise $17$

If occurrence of event $A$ implies occurrence of $B$, show that $P(A^c B) = P(B) - P(A)$.

Answer

$P(AB) = P(A)$ and $P(AB) + P(A^c B) = P(B)$ implies $P(A^c B) = P(B) - P(A)$.

Exercise $18$

Show that $P(AB) \ge P(A) + P(B) - 1$.

Answer

Follows from $P(A) + P(B) - P(AB) = P(A \cup B) \le 1$.

Exercise $19$

The set combination $A \oplus B = AB^c \bigvee A^c B$ is known as the disjunctive union or the symmetric difference of $A$ and $B$. This is the event that only one of the events $A$ or $B$ occurs on a trial. Determine $P(A \oplus B)$ in terms of $P(A)$, $P(B)$, and $P(AB)$.

Answer

A Venn diagram shows $P(A \oplus B) = P(AB^c) + P(A^c B) = P(A) + P(B) - 2P(AB)$.

Exercise $20$

Use fundamental properties of probability to show

a. $P(AB) \le P(A) \le P(A \cup B) \le P(A) + P(B)$
b. $P(\bigcap_{j = 1}^{\infty} E_j) \le P(E_i) \le P(\bigcup_{j = 1}^{\infty} E_j) \le \sum_{j = 1}^{\infty} P(E_j)$

Answer

$AB \subset A \subset A \cup B$ implies $P(AB) \le P(A) \le P(A \cup B) = P(A) + P(B) - P(AB) \le P(A) + P(B)$. The general case follows similarly, with the last inequality determined by subadditivity.

Exercise $21$

Suppose $P_1, P_2$ are probability measures and $c_1, c_2$ are positive numbers such that $c_1 + c_2 = 1$. Show that the assignment $P(E) = c_1 P_1(E) + c_2P_2(E)$ to the class of events is a probability measure. Such a combination of probability measures is known as a mixture. Extend this to

$P(E) = \sum_{i = 1}^{n} c_i P_i (E)$, where the $P_i$ are probability measures, $c_i > 0$, and $\sum_{i = 1}^{n} c_i = 1$

Answer

Clearly $P(E) \ge 0$. $P(\Omega) = c_1 P_1 (\Omega) + c_2 P_2 (\Omega) = 1$. $E = \bigvee_{i = 1}^{\infty} E_i$ implies

$P(E) = c_1 \sum_{i = 1}^{\infty} P_1 (E_i) + c_2 \sum_{i = 1}^{\infty} P_2 (E_i) = \sum_{i =1}^{\infty} P(E_i)$

The pattern is the same for the general case, except that the sum of two terms is replaced by the sum of $n$ terms $c_i P_i (E)$.

Exercise $22$

Suppose $\{A_1, A_2, \cdot\cdot\cdot, A_n\}$ is a partition and $\{c_1, c_2, \cdot\cdot\cdot, c_n\}$ is a class of positive constants. For each event $E$, let

$Q(E) = \sum_{i = 1}^{n} c_i P(EA_i) / \sum_{i = 1}^{n} c_i P(A_i)$

Show that $Q(\cdot)$ is a probability measure.

Answer

Clearly $Q(E) \ge 0$ and since $A_i \Omega = A_i$ we have $Q(\Omega) = 1$. If $E = \bigvee_{k = 1}^{\infty} E_k$, then

$P(EA_i) = \sum_{k = 1}^{\infty} P(E_k A_i)$ $\forall i$

Interchanging the order of summation shows that $Q$ is countably additive.
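The construction in the last exercise is easy to check numerically. The following is a minimal MATLAB sketch; the basic space, partition, weights, and elementary probabilities used here are made up purely for illustration and are not part of the exercise. It verifies that $Q(\Omega) = 1$ and that $Q$ adds over disjoint events.

```
% Numerical check of the weighted construction Q(E) in Exercise 22.
% Omega = {1,...,6}; A1 = {1,2}, A2 = {3,4}, A3 = {5,6} form a partition.
p   = [0.10 0.20 0.15 0.25 0.05 0.25];           % P({omega}), omega = 1..6
A   = [1 1 0 0 0 0; 0 0 1 1 0 0; 0 0 0 0 1 1];   % indicator rows for A1, A2, A3
c   = [2 1 3];                                   % positive constants c_i
den = c*(A*p');                                  % sum_i c_i P(A_i)
Q   = @(E) c*((A.*repmat(E,3,1))*p')/den;        % E is a 0-1 indicator row
E1  = [1 0 1 0 0 0];  E2 = [0 1 0 0 1 0];        % disjoint events
disp(Q(ones(1,6)))                  % Q(Omega) = 1
disp(Q(E1 + E2) - (Q(E1) + Q(E2)))  % additivity: 0 (up to roundoff)
```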
A fundamental problem in elementary probability is to find the probability of a logical (Boolean) combination of a finite class of events, when the probabilities of certain other combinations are known. If we partition an event \(F\) into component events whose probabilities can be determined, then the additivity property implies the probability of \(F\) is the sum of these component probabilities. Frequently, the event \(F\) is a Boolean combination of members of a finite class, say \(\{A, B, C\}\) or \(\{A, B, C, D\}\). For each such finite class, there is a fundamental partition determined by the class. The members of this partition are called minterms. Any Boolean combination of members of the class can be expressed as the disjoint union of a unique subclass of the minterms. If the probability of every minterm in this subclass can be determined, then by additivity the probability of the Boolean combination is determined. We examine these ideas in more detail.

• 2.1: Minterms If we partition an event F into component events whose probabilities can be determined, then the additivity property implies the probability of F is the sum of these component probabilities. Frequently, the event F is a Boolean combination of members of a finite class. For each such finite class, there is a fundamental partition determined by the class. The members of this partition are called minterms.
• 2.2: Minterms and MATLAB Calculations
• 2.3: Problems on Minterm Analysis

02: Minterm Analysis

Partitions and minterms

To see how the fundamental partition arises naturally, consider first the partition of the basic space produced by a single event $A$.

$\Omega = A \bigvee A^c$

Now if $B$ is a second event, then

$A = AB \bigvee AB^c$ and $A^c = A^c B \bigvee A^c B^c$ so that $\Omega = A^c B^c \bigvee A^c B \bigvee AB^c \bigvee AB$

The pair $\{A, B\}$ has partitioned $\Omega$ into $\{A^c B^c, A^c B, AB^c, AB\}$. Continuation in this way leads systematically to a partition by three events $\{A, B, C\}$, four events $\{A, B, C, D\}$, etc.

We illustrate the fundamental patterns in the case of four events $\{A, B, C, D\}$. We form the minterms as intersections of members of the class, with various patterns of complementation. For a class of four events, there are $2^4 = 16$ such patterns, hence 16 minterms. These are, in a systematic arrangement,

$A^c B^c C^c D^c$   $A^c B C^c D^c$   $A B^c C^c D^c$   $A B C^c D^c$
$A^c B^c C^c D$     $A^c B C^c D$     $A B^c C^c D$     $A B C^c D$
$A^c B^c C D^c$     $A^c B C D^c$     $A B^c C D^c$     $A B C D^c$
$A^c B^c C D$       $A^c B C D$       $A B^c C D$       $A B C D$

No element can be in more than one minterm, because each differs from the others by complementation of at least one member event. Each element $\omega$ is assigned to exactly one of the minterms by determining the answers to four questions: Is it in $A$? Is it in $B$? Is it in $C$? Is it in $D$?

Suppose, for example, the answers are: Yes, No, No, Yes. Then $\omega$ is in the minterm $A B^c C^c D$. In a similar way, we can determine the membership of each $\omega$ in the basic space. Thus, the minterms form a partition. That is, the minterms represent mutually exclusive events, one of which is sure to occur on each trial. The membership of any minterm depends upon the membership of each generating set $A, B, C$ or $D$, and the relationships between them. For some classes, one or more of the minterms are empty (impossible events). As we see below, this causes no problems.
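The partition can be seen concretely by generating it for a small finite basic space. The following minimal MATLAB sketch is illustrative only; the particular $\Omega$, $A$, and $B$ are chosen here for the illustration and are not from the text. Each element of $\Omega = \{1, \cdots, 12\}$ falls in exactly one of the four minterms generated by $A =$ the even integers and $B =$ the multiples of three.

```
% Partition of a small basic space by two events A and B (illustrative only)
Omega = 1:12;
A = mod(Omega,2) == 0;        % indicator of A (even integers)
B = mod(Omega,3) == 0;        % indicator of B (multiples of 3)
disp(Omega(~A & ~B))          % minterm Ac Bc :  1  5  7 11
disp(Omega(~A &  B))          % minterm Ac B  :  3  9
disp(Omega( A & ~B))          % minterm A Bc  :  2  4  8 10
disp(Omega( A &  B))          % minterm A B   :  6 12
```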
An examination of the development above shows that if we begin with a class of n events, there are $2^n$ minterms. To aid in systematic handling, we introduce a simple numbering system for the minterms, which we illustrate by considering again the four events $A, B, C, D$, in that order. The answers to the four questions above can be represented numerically by the scheme

No $\sim 0$ and Yes $\sim 1$

Thus, if $\omega$ is in $A^c B^c C^c D^c$, the answers are tabulated as 0 0 0 0. If $\omega$ is in $A B^c C^c D$, then this is designated 1 0 0 1. With this scheme, the minterm arrangement above becomes

0000 $\sim$ 0    0100 $\sim$ 4    1000 $\sim$ 8     1100 $\sim$ 12
0001 $\sim$ 1    0101 $\sim$ 5    1001 $\sim$ 9     1101 $\sim$ 13
0010 $\sim$ 2    0110 $\sim$ 6    1010 $\sim$ 10    1110 $\sim$ 14
0011 $\sim$ 3    0111 $\sim$ 7    1011 $\sim$ 11    1111 $\sim$ 15

We may view these quadruples of zeros and ones as binary representations of integers, which may also be represented by their decimal equivalents, as shown in the table. Frequently, it is useful to refer to the minterms by number. If the members of the generating class are treated in a fixed order, then each minterm number arrived at in the manner above specifies a minterm uniquely. Thus, for the generating class $\{A, B, C, D\}$, in that order, we may designate

$A^c B^c C^c D^c = M_0$ (minterm 0), $AB^cC^c D = M_9$ (minterm 9), etc.

We utilize this numbering scheme on special Venn diagrams called minterm maps. These are illustrated in Figure 2.1.1, for the cases of three, four, and five generating events. Since the actual content of any minterm depends upon the sets $A, B, C$, and $D$ in the generating class, it is customary to refer to these sets as variables. In the three-variable case, set $A$ is the right half of the diagram and set $C$ is the lower half; but set $B$ is split, so that it is the union of the second and fourth columns. Similar splits occur in the other cases.

Remark. Other useful arrangements of minterm maps are employed in the analysis of switching circuits.

Figure 2.1.1. Minterm maps for three, four, or five variables.

Minterm maps and the minterm expansion

The significance of the minterm partition of the basic space rests in large measure on the following fact.

Minterm expansion. Each Boolean combination of the elements in a generating class may be expressed as the disjoint union of an appropriate subclass of the minterms. This representation is known as the minterm expansion for the combination.

In deriving an expression for a given Boolean combination which holds for any class $\{A, B, C, D\}$ of four events, we include all possible minterms, whether empty or not. If a minterm is empty for a given class, its presence does not modify the set content or probability assignment for the Boolean combination.

The existence and uniqueness of the expansion is made plausible by simple examples utilizing minterm maps to determine graphically the minterm content of various Boolean combinations. Using the arrangement and numbering system introduced above, we let $M_i$ represent the $i$th minterm (numbering from zero) and let $p(i)$ represent the probability of that minterm. When we deal with a union of minterms in a minterm expansion, it is convenient to utilize the shorthand illustrated in the following.

$M(1, 3, 7) = M_1 \bigvee M_3 \bigvee M_7$ and $p(1, 3, 7) = p(1) + p(3) + p(7)$

Figure 2.1.2. $E = AB \cup A^c (B \cup C^c)^c = M(1, 6, 7)$. Minterm expansion for Example 2.1.1.

Consider the following simple example.
Example $1$ Minterm expansion

Suppose $E = AB \cup A^c (B \cup C^c)^c$. Examination of the minterm map in Figure 2.1.2 shows that $AB$ consists of the union of minterms $M_6$, $M_7$, which we designate $M(6,7)$. The combination $B \cup C^c = M(0, 2, 3, 4, 6, 7)$, so that its complement $(B \cup C^c)^c = M(1, 5)$. This leaves the common part $A^c (B \cup C^c)^c = M_1$. Hence, $E = M(1, 6, 7)$. Similarly, $F = A \cup B^c C = M(1, 4, 5, 6, 7)$.

A key to establishing the expansion is to note that each minterm is either a subset of the combination or is disjoint from it. The expansion is thus the union of those minterms included in the combination. A general verification using indicator functions is sketched in the last section of this module.

Use of minterm maps

A typical problem seeks the probability of certain Boolean combinations of a class of events when the probabilities of various other combinations are given. We consider several simple examples and illustrate the use of minterm maps in formulation and solution.

Example $2$ Survey on software

Statistical data are taken for a certain student population with personal computers. An individual is selected at random. Let $A =$ the event the person selected has word processing, $B =$ the event he or she has a spread sheet program, and $C =$ the event the person has a data base program. The data imply

• The probability is 0.80 that the person has a word processing program: $P(A) = 0.8$
• The probability is 0.65 that the person has a spread sheet program: $P(B) = 0.65$
• The probability is 0.30 that the person has a data base program: $P(C) = 0.3$
• The probability is 0.10 that the person has all three: $P(ABC) = 0.1$
• The probability is 0.05 that the person has neither word processing nor spread sheet: $P(A^c B^c) = 0.05$
• The probability is 0.65 that the person has at least two: $P(AB \cup AC \cup BC) = 0.65$
• The probability of word processor and data base, but no spread sheet is twice the probability of spread sheet and data base, but no word processor: $P(AB^cC) = 2P(A^cBC)$

a. What is the probability that the person has exactly two of the programs?
b. What is the probability that the person has only the data base program?

Several questions arise:

• Are these data consistent?
• Are the data sufficient to answer the questions?
• How may the data be utilized to answer the questions?

Solution

The data, expressed in terms of minterm probabilities, are:

$P(A) = p(4, 5, 6, 7) = 0.80$; hence $P(A^c) = p(0, 1, 2, 3) = 0.20$
$P(B) = p(2, 3, 6, 7) = 0.65$; hence $P(B^c) = p(0, 1, 4, 5) = 0.35$
$P(C) = p(1, 3, 5, 7) = 0.30$; hence $P(C^c) = p(0, 2, 4, 6) = 0.70$
$P(ABC) = p(7) = 0.10$
$P(A^c B^c) = p(0, 1) = 0.05$
$P(AB \cup AC \cup BC) = p(3, 5, 6, 7) = 0.65$
$P(AB^c C) = p(5) = 2p(3) = 2P(A^c BC)$

These data are shown on the minterm map in Figure 2.1.3 a. We use the patterns displayed in the minterm map to aid in an algebraic solution for the various minterm probabilities.

$p(2, 3) = p(0, 1, 2, 3) - p(0, 1) = 0.20 - 0.05 = 0.15$
$p(6,7) = p(2, 3, 6, 7) - p(2, 3) = 0.65 - 0.15 = 0.50$
$p(6) = p(6,7) - p(7) = 0.50 - 0.10 = 0.40$
$p(3,5) = p(3, 5, 6, 7) - p(6,7) = 0.65 - 0.50 = 0.15 \Rightarrow p(3) = 0.05$, $p(5) = 0.10 \Rightarrow p(2) = 0.10$
$p(1) = p(1, 3, 5, 7) - p(3, 5) - p(7) = 0.30 - 0.15 - 0.10 = 0.05 \Rightarrow p(0) = 0$
$p(4) = p(4, 5, 6, 7) - p(5) - p(6, 7) = 0.80 - 0.10 - 0.50 = 0.20$

Thus, all minterm probabilities are determined. They are displayed in Figure 2.1.3 b.
From these we get

$P(A^c BC \bigvee AB^cC \bigvee ABC^c) = p(3, 5, 6) = 0.05 + 0.10 + 0.40 = 0.55$ and $P(A^c B^c C) = p(1) = 0.05$

Figure 2.1.3. Minterm maps for the software survey: (a) data; (b) minterm probabilities.

Example $3$ Survey on personal computers

A survey of 1000 students shows that 565 have PC compatible desktop computers, 515 have Macintosh desktop computers, and 151 have laptop computers. 51 have all three, 124 have both PC and laptop computers, 212 have at least two of the three, and twice as many own both PC and laptop as those who have both Macintosh desktop and laptop. A person is selected at random from this population. What is the probability he or she has at least one of these types of computer? What is the probability the person selected has only a laptop?

Figure 2.1.4. Minterm probabilities for computer survey. Example 2.1.3

Solution

Let $A =$ the event of owning a PC desktop, $B =$ the event of owning a Macintosh desktop, and $C =$ the event of owning a laptop. We utilize a minterm map for three variables to help determine minterm patterns. For example, the event $AC = M_5 \bigvee M_7$ so that $P(AC) = p(5) + p(7) = p(5, 7)$.

The data, expressed in terms of minterm probabilities, are:

$P(A) = p(4, 5, 6, 7) = 0.565$, hence $P(A^c) = p(0, 1, 2, 3) = 0.435$
$P(B) = p(2, 3, 6, 7) = 0.515$, hence $P(B^c) = p(0, 1, 4, 5) = 0.485$
$P(C) = p(1, 3, 5, 7) = 0.151$, hence $P(C^c) = p(0, 2, 4, 6) = 0.849$
$P(ABC) = p(7) = 0.051$
$P(AC) = p(5, 7) = 0.124$
$P(AB \cup AC \cup BC) = p(3, 5, 6, 7) = 0.212$
$P(AC) = p(5, 7) = 2p(3, 7) = 2 P(BC)$

We use the patterns displayed in the minterm map to aid in an algebraic solution for the various minterm probabilities.

$p(5) = p(5, 7) - p(7) = 0.124 - 0.051 = 0.073$
$p(1, 3) = P(A^c C) = 0.151 - 0.124 = 0.027$
$P(AC^c) = p(4, 6) = 0.565 - 0.124 = 0.441$
$p(3, 7) = P(BC) = 0.124/2 = 0.062$
$p(3) = 0.062 - 0.051 = 0.011$
$p(6) = p(3, 5, 6, 7) - p(3) - p(5, 7) = 0.212 - 0.011 - 0.124 = 0.077$
$p(4) = P(A) - p(6) - p(5, 7) = 0.565 - 0.077 - 0.124 = 0.364$
$p(1) = p(1, 3) - p(3) = 0.027 - 0.011 = 0.016$
$p(2) = P(B) - p(3, 7) - p(6) = 0.515 - 0.062 - 0.077 = 0.376$
$p(0) = P(C^c) - p(4, 6) - p(2) = 0.849 - 0.441 - 0.376 = 0.032$

We have determined the minterm probabilities, which are displayed on the minterm map Figure 2.1.4. We may now compute the probability of any Boolean combination of the generating events $A, B, C$. Thus,

$P(A \cup B \cup C) = 1 - P(A^c B^c C^c) = 1 - p(0) = 0.968$ and $P(A^c B^c C) = p(1) = 0.016$

Figure 2.1.5. Minterm probabilities for opinion survey. Example 2.1.4

Example $4$ Opinion survey

A survey of 1000 persons is made to determine their opinions on four propositions. Let $A, B, C, D$ be the events a person selected agrees with the respective propositions.
Survey results show the following probabilities for various combinations:

$P(A) = 0.200$, $P(B) = 0.500$, $P(C) = 0.300$, $P(D) = 0.700$, $P(A(B \cup C^c) D^c) = 0.055$
$P(A \cup BC \cup D^c) = 0.520$, $P(A^cBC^c D) = 0.120$, $P(ABCD) = 0.015$, $P(AB^c C) = 0.030$
$P(A^c B^c C^c D) = 0.195$, $P(A^c BC) = 0.120$, $P(A^c B^c D^c) = 0.120$, $P(AC^c) = 0.140$
$P(ACD^c) = 0.025$, $P(ABC^cD^c) = 0.020$

Determine the probabilities for each minterm and for each of the following combinations:

$A^c (BC^c \cup B^c C)$ - that is, not $A$ and ($B$ or $C$, but not both)
$A \cup BC^c$ - that is, $A$ or ($B$ and not $C$)

Solution

At the outset, it is not clear that the data are consistent or sufficient to determine the minterm probabilities. However, an examination of the data shows that there are sixteen items (including the fact that the sum of all minterm probabilities is one). Thus, there is hope, but no assurance, that a solution exists. A step elimination procedure, as in the previous examples, shows that all minterms can in fact be calculated. The results are displayed on the minterm map in Figure 2.1.5. It would be desirable to be able to analyze the problem systematically. The formulation above suggests a more systematic algebraic formulation which should make possible machine aided solution.

Systematic formulation

Use of a minterm map has the advantage of visualizing the minterm expansion in direct relation to the Boolean combination. The algebraic solutions of the previous problems involved ad hoc manipulations of the data minterm probability combinations to find the probability of the desired target combination. We seek a systematic formulation of the data as a set of linear algebraic equations with the minterm probabilities as unknowns, so that standard methods of solution may be employed. Consider again the software survey of Example 2.1.2.

Example $5$ The software survey problem reformulated

The data, expressed in terms of minterm probabilities, are:

$P(A) = p(4, 5, 6, 7) = 0.80$
$P(B) = p(2, 3, 6, 7) = 0.65$
$P(C) = p(1, 3, 5, 7) = 0.30$
$P(ABC) = p(7) = 0.10$
$P(A^cB^c) = p(0,1) = 0.05$
$P(AB \cup AC \cup BC) = p(3, 5, 6, 7) = 0.65$
$P(AB^cC) = p(5) = 2p(3) = 2P(A^cBC)$, so that $p(5) - 2p(3) = 0$

We also have in any case

$P(\Omega) = P(A \cup A^c) = p(0,1, 2, 3, 4, 5, 6, 7) = 1$

to complete the eight items of data needed for determining all eight minterm probabilities. The first datum can be expressed as an equation in minterm probabilities:

$0 \cdot p(0) + 0 \cdot p(1) + 0 \cdot p(2) + 0 \cdot p(3) + 1 \cdot p(4) + 1 \cdot p(5) + 1 \cdot p(6) + 1 \cdot p(7) = 0.80$

This is an algebraic equation in $p(0), \cdot\cdot\cdot, p(7)$ with a matrix of coefficients

[0 0 0 0 1 1 1 1]

The others may be written out accordingly, giving eight linear algebraic equations in eight variables $p(0)$ through $p(7)$. Each equation has a matrix or vector of zero-one coefficients indicating which minterms are included.
These may be written in matrix form as follows:

$\begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & -2 & 0 & 1 & 0 & 0 \end{bmatrix} \begin{bmatrix} p(0) \\ p(1) \\ p(2) \\ p(3) \\ p(4) \\ p(5) \\ p(6) \\ p(7) \end{bmatrix} = \begin{bmatrix} 1 \\ 0.80 \\ 0.65 \\ 0.30 \\ 0.10 \\ 0.05 \\ 0.65 \\ 0 \end{bmatrix} = \begin{bmatrix} P(\Omega) \\ P(A) \\ P(B) \\ P(C) \\ P(ABC) \\ P(A^c B^c) \\ P(AB \cup AC \cup BC) \\ P(AB^cC) - 2P(A^c BC) \end{bmatrix}$

• The patterns in the coefficient matrix are determined by logical operations. We obtained these with the aid of a minterm map.
• The solution utilizes an algebraic procedure, which could be carried out in a variety of ways, including several standard computer packages for matrix operations. We show in the module Minterm Vectors and MATLAB how we may use MATLAB for both aspects.

Indicator functions and the minterm expansion

Previous discussion of the indicator function shows that the indicator function for a Boolean combination of sets is a numerical valued function of the indicator functions for the individual sets.

• As an indicator function, it takes on only the values zero and one.
• The value of the indicator function for any Boolean combination must be constant on each minterm. For example, for each $\omega$ in the minterm $AB^cCD^c$, we must have $I_A(\omega) = 1$, $I_B(\omega) = 0$, $I_C(\omega) = 1$, and $I_D(\omega) = 0$. Thus, any function of $I_A$, $I_B$, $I_C$, $I_D$ must be constant over the minterm.
• Consider a Boolean combination $E$ of the generating sets. If $\omega$ is in $E \cap M_i$, then, since $I_E$ is constant on $M_i$, we have $I_E(\omega') = 1$ for all $\omega' \in M_i$, so that $M_i \subset E$. Since each $\omega \in E$ lies in some $M_i$, $E$ must be the union of those minterms sharing an $\omega$ with $E$.
• Let $\{M_i: i \in J_E\}$ be the subclass of those minterms on which $I_E$ has the value one. Then

$E = \bigvee_{J_E} M_i$

which is the minterm expansion of $E$.
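Anticipating the MATLAB treatment in the next module, the system above can already be checked with nothing more than the standard backslash operator. The following minimal sketch simply copies the coefficient matrix and data vector from the matrix equation of Example 5; the computed minterm probabilities agree with those found by the step elimination procedure.

```
% Solve the Example 5 system M*p = b for the minterm probabilities
M = [1  1  1  1  1  1  1  1;    % P(Omega)
     0  0  0  0  1  1  1  1;    % P(A)
     0  0  1  1  0  0  1  1;    % P(B)
     0  1  0  1  0  1  0  1;    % P(C)
     0  0  0  0  0  0  0  1;    % P(ABC)
     1  1  0  0  0  0  0  0;    % P(Ac Bc)
     0  0  0  1  0  1  1  1;    % P(AB u AC u BC)
     0  0  0 -2  0  1  0  0];   % P(A Bc C) - 2 P(Ac B C)
b = [1 0.80 0.65 0.30 0.10 0.05 0.65 0]';
p = M\b;                        % minterm probabilities p(0),...,p(7)
disp(p')                        % 0  0.05  0.10  0.05  0.20  0.10  0.40  0.10
```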
The concepts and procedures in this unit play a significant role in many aspects of the analysis of probability topics and in the use of MATLAB throughout this work.

Minterm vectors and MATLAB

The systematic formulation in the previous module Minterms shows that each Boolean combination, as a union of minterms, can be designated by a vector of zero-one coefficients. A coefficient one in the $i$th position (numbering from zero) indicates the inclusion of minterm $M_i$ in the union. We formulate this pattern carefully below and show how MATLAB logical operations may be utilized in problem setup and solution.

Suppose $E$ is a Boolean combination of $A, B, C$. Then, by the minterm expansion,

$E = \bigvee_{J_E} M_i$

where $M_i$ is the $i$th minterm and $J_E$ is the set of indices for those $M_i$ included in $E$. For example, consider

$E = A(B \cup C^c) \cup A^c (B \cup C^c)^c = M_1 \bigvee M_4 \bigvee M_6 \bigvee M_7 = M(1, 4, 6, 7)$

$F = A^c B^c \cup AC = M_0 \bigvee M_1 \bigvee M_5 \bigvee M_7 = M(0, 1, 5, 7)$

We may designate each set by a pattern of zeros and ones ($e_0, e_1, \cdot\cdot\cdot, e_7$). The ones indicate which minterms are present in the set. In the pattern for set $E$, minterm $M_i$ is included in $E$ iff $e_i = 1$. This is, in effect, another arrangement of the minterm map. In this form, it is convenient to view the pattern as a minterm vector, which may be represented by a row matrix or row vector [$e_0, e_1, \cdot\cdot\cdot, e_7$]. We find it convenient to use the same symbol for the name of the event and for the minterm vector or matrix representing it. Thus, for the examples above,

$E \sim$ [0 1 0 0 1 0 1 1] and $F \sim$ [1 1 0 0 0 1 0 1]

It should be apparent that this formalization can be extended to sets generated by any finite class.

Minterm vectors for Boolean combinations

If $E$ and $F$ are combinations of $n$ generating sets, then each is represented by a unique minterm vector of length $2^n$. In the treatment in the module Minterms, we determine the minterm vector with the aid of a minterm map. We wish to develop a systematic way to determine these vectors.

As a first step, we suppose we have minterm vectors for $E$ and $F$ and want to obtain the minterm vector of Boolean combinations of these.

1. The minterm expansion for $E \cup F$ has all the minterms in either set. This means the $j$th element of the vector for $E \cup F$ is the maximum of the $j$th elements for the two vectors.
2. The minterm expansion for $E \cap F$ has only those minterms in both sets. This means the $j$th element of the vector for $E \cap F$ is the minimum of the $j$th elements for the two vectors.
3. The minterm expansion for $E^c$ has only those minterms not in the expansion for $E$. This means the vector for $E^c$ has zeros and ones interchanged. The $j$th element of $E^c$ is one iff the corresponding element of $E$ is zero.

We illustrate for the case of the two combinations $E$ and $F$ of three generating sets, considered above

$E = A(B \cup C^c) \cup A^c (B \cup C^c)^c \sim$ [0 1 0 0 1 0 1 1] and $F = A^c B^c \cup AC \sim$ [1 1 0 0 0 1 0 1]

Then

$E \cup F \sim$ [1 1 0 0 1 1 1 1], $E \cap F \sim$ [0 1 0 0 0 0 0 1], and $E^c \sim$ [1 0 1 1 0 1 0 0]

MATLAB logical operations

MATLAB logical operations on zero-one matrices provide a convenient way of handling Boolean combinations of minterm vectors represented as matrices. For two zero-one matrices $E, F$ of the same size

• $E|F$ is the matrix obtained by taking the maximum element in each place.
• $E\text{&}F$ is the matrix obtained by taking the minimum element in each place.
• $\sim E$ is the matrix obtained by interchanging one and zero in each place in $E$.

Thus, if $E, F$ are minterm vectors for sets by the same name, then $E|F$ is the minterm vector for $E \cup F$, $E\text{&}F$ is the minterm vector for $E \cap F$, and $\sim E$ (equivalently, $1 - E$ for a zero-one numerical matrix) is the minterm vector for $E^c$.

This suggests a general approach to determining minterm vectors for Boolean combinations. Start with minterm vectors for the generating sets. Use MATLAB logical operations to obtain the minterm vector for any Boolean combination. Suppose, for example, the class of generating sets is $\{A, B, C\}$. Then the minterm vectors for $A$, $B$, and $C$, respectively, are

$A =$ [0 0 0 0 1 1 1 1]
$B =$ [0 0 1 1 0 0 1 1]
$C =$ [0 1 0 1 0 1 0 1]

If $E = AB \cup C^c$, then the logical combination $E = (A \text{&} B)\ |\ \sim C$ of the matrices yields $E =$ [1 0 1 0 1 0 1 1].

MATLAB implementation

A key step in the procedure just outlined is to obtain the minterm vectors for the generating elements $\{A, B, C\}$. We have an m-function to provide such fundamental vectors. For example, to produce the second minterm vector for the family (i.e., the minterm vector for $B$), the basic zero-one pattern 0 0 1 1 is replicated twice to give

0 0 1 1 0 0 1 1

The function minterm(n,k) generates the kth minterm vector for a class of n generating sets.

minterms for the class {A, B, C}

>> A = minterm(3,1)
A =  0   0   0   0   1   1   1   1
>> B = minterm(3,2)
B =  0   0   1   1   0   0   1   1
>> C = minterm(3,3)
C =  0   1   0   1   0   1   0   1

minterm patterns for the Boolean combinations $F = AB \cup B^c C$ and $G = A \cup A^c C$

>> F = (A&B)|(~B&C)
F =  0   1   0   0   0   1   1   1
>> G = A|(~A&C)
G =  0   1   0   1   1   1   1   1
>> JF = find(F)-1    % Use of find to determine index set for F
JF = 1   5   6   7   % Shows F = M(1, 5, 6, 7)

These basic minterm patterns are useful not only for Boolean combinations of events but also for many aspects of the analysis of those random variables which take on only a finite number of values.

Zero-one arrays in MATLAB

The treatment above hides the fact that a rectangular array of zeros and ones can have two quite different meanings and functions in MATLAB:

1. A numerical matrix (or vector) subject to the usual operations on matrices.
2. A logical array whose elements are combined by
   a. Logical operators to give new logical arrays;
   b. Array operations (element by element) to give numerical matrices;
   c. Array operations with numerical matrices to give numerical results.

Some simple examples will illustrate the principal properties.

>> A = minterm(3,1);
>> B = minterm(3,2);
>> C = minterm(3,3);
>> F = (A&B)|(~B&C)
F =  0   1   0   0   0   1   1   1
>> G = A|(~A&C)
G =  0   1   0   1   1   1   1   1
>> islogical(A)      % Test for logical array
ans = 0
>> islogical(F)
ans = 1
>> m = max(A,B)      % A matrix operation
m =  0   0   1   1   1   1   1   1
>> islogical(m)
ans = 0
>> m1 = A|B          % A logical operation
m1 = 0   0   1   1   1   1   1   1
>> islogical(m1)
ans = 1
>> a = logical(A)    % Converts 0-1 matrix into logical array
a =  0   0   0   0   1   1   1   1
>> b = logical(B)
>> m2 = a|b
m2 = 0   0   1   1   1   1   1   1
>> p = dot(A,B)      % Equivalently, p = A*B'
p = 2
>> p1 = total(A.*b)
p1 = 2
>> p3 = total(A.*B)
p3 = 2
>> p4 = a*b'         % Cannot use matrix operations on logical arrays
??? Error using ==> mtimes    % MATLAB error signal
Logical inputs must be scalar.

Often it is desirable to have a table of the generating minterm vectors. Use of the function minterm in a simple “for loop” yields the following m-function.

The function mintable(n) generates a table of minterm vectors for $n$ generating sets.
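To make the replication pattern explicit, here is a minimal sketch in base MATLAB of how such a table of generating minterm vectors could be constructed. It is illustrative only and is not necessarily how the minterm and mintable m-functions supplied with the text are coded.

```
% Build a table whose k-th row is the k-th generating minterm vector:
% a block of 2^(n-k) zeros followed by 2^(n-k) ones, repeated 2^(k-1) times.
n = 3;                               % number of generating sets
T = zeros(n, 2^n);
for k = 1:n
    block  = [zeros(1,2^(n-k)) ones(1,2^(n-k))];
    T(k,:) = repmat(block, 1, 2^(k-1));
end
disp(T)
% 0 0 0 0 1 1 1 1
% 0 0 1 1 0 0 1 1
% 0 1 0 1 0 1 0 1
```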
mintable for three variables

>> M3 = mintable(3)
M3 = 0   0   0   0   1   1   1   1
     0   0   1   1   0   0   1   1
     0   1   0   1   0   1   0   1

As an application of mintable, consider the problem of determining the probability of $k$ of $n$ events. If $\{A_i: 1 \le i \le n\}$ is any finite class of events, the event that exactly $k$ of these events occur on a trial can be characterized simply in terms of the minterm expansion. The event $A_{kn}$ that exactly $k$ occur is given by

$A_{kn} =$ the disjoint union of those minterms with exactly $k$ positions uncomplemented

In the matrix these are the minterms corresponding to columns with exactly $k$ ones. The event $B_{kn}$ that $k$ or more occur is given by

$B_{kn} = \bigvee_{r = k}^{n} A_{rn}$

If we have the minterm probabilities, it is easy to pick out the appropriate minterms and combine the probabilities. The following example in the case of three variables illustrates the procedure.

the software survey (continued)

In the software survey problem, the minterm probabilities are

$pm =$ [0 0.05 0.10 0.05 0.20 0.10 0.40 0.10]

where $A =$ event has word processor, $B =$ event has spread sheet, $C =$ event has a data base program. It is desired to get the probability an individual selected has $k$ of these, $k = 0, 1, 2, 3$.

Solution

We form a mintable for three variables. We count the number of “successes” corresponding to each minterm by using the MATLAB function sum, which gives the sum of each column. In this case, it would be easy to determine each distinct value and add the probabilities on the minterms which yield this value. For more complicated cases, we have an m-function called csort (for sort and consolidate) to perform this operation.

>> pm = 0.01*[0 5 10 5 20 10 40 10];
>> M = mintable(3)
M =  0   0   0   0   1   1   1   1
     0   0   1   1   0   0   1   1
     0   1   0   1   0   1   0   1
>> T = sum(M)               % Column sums give number of successes
T =  0   1   1   2   1   2   2   3   % on each minterm
>> [k,pk] = csort(T,pm);    % Determines distinct values in T and
>> disp([k;pk]')            % consolidates probabilities
         0         0
    1.0000    0.3500
    2.0000    0.5500
    3.0000    0.1000

For three variables, it is easy enough to identify the various combinations “by eye” and make the combinations. For a larger number of variables, however, this may become tedious. The approach is much more useful in the case of Independent Events, because of the ease of determining the minterm probabilities.

Minvec procedures

Use of the tilde $\sim$ to indicate the complement of an event is often awkward. It is customary to indicate the complement of an event $E$ by $E^c$. In MATLAB, we cannot indicate the superscript, so we indicate the complement of $E$ by Ec rather than $\sim E$. To facilitate writing combinations, we have a family of minvec procedures (minvec3, minvec4, ..., minvec10) to expedite expressing Boolean combinations of $n = 3, 4, 5, \cdot\cdot\cdot, 10$ sets. These generate and name the minterm vector for each generating set and its complement.

boolean combinations using minvec3

We wish to generate a matrix whose rows are the minterm vectors for $\Omega = A \cup A^c, A, AB, ABC, C,$ and $A^c C^c$, respectively.
>> minvec3                    % Call for the setup procedure
Variables are A, B, C, Ac, Bc, Cc
They may be renamed, if desired
>> V = [A|Ac; A; A&B; A&B&C; C; Ac&Cc];   % Logical combinations (one per
                                          % row) yield logical vectors
>> disp(V)
     1     1     1     1     1     1     1     1   % Mixed logical and
     0     0     0     0     1     1     1     1   % numerical vectors
     0     0     0     0     0     0     1     1
     0     0     0     0     0     0     0     1
     0     1     0     1     0     1     0     1
     1     0     1     0     0     0     0     0

Minterm probabilities and Boolean combination

If we have the probability of every minterm generated by a finite class, we can determine the probability of any Boolean combination of the members of the class. When we know the minterm expansion or, equivalently, the minterm vector, we simply pick out the probabilities corresponding to the minterms in the expansion and add them. In the following example, we do this “by hand” then show how to do it with MATLAB.

Consider $E = A (B \cup C^c) \cup A^c (B \cup C^c)^c$ and $F = A^c B^c \cup AC$ of the example above, and suppose the respective minterm probabilities are

$p_0 = 0.21$, $p_1 = 0.06$, $p_2 = 0.29$, $p_3 = 0.11$, $p_4 = 0.09$, $p_5 = 0.03$, $p_6 = 0.14$, $p_7 = 0.07$

Use of a minterm map shows $E = M(1, 4, 6, 7)$ and $F = M(0, 1, 5, 7)$, so that

$P(E) = p_1 + p_4 + p_6 + p_7 = p(1, 4, 6, 7) = 0.36$ and $P(F) = p(0, 1, 5, 7) = 0.37$

This is easily handled in MATLAB.

• Use minvec3 to set the generating minterm vectors.
• Use logical matrix operations $E = (A \text{&} (B|Cc))|(Ac\text{&} \sim(B|Cc))$ and $F = (Ac \text{&} Bc)|(A\text{&}C)$ to obtain the (logical) minterm vectors for $E$ and $F$.
• If $pm$ is the matrix of minterm probabilities, perform the algebraic dot product or scalar product of the $pm$ matrix and the minterm vector for the combination. This can be called for by the MATLAB commands PE = E*pm' and PF = F*pm'.

The following is a transcript of the MATLAB operations.

>> minvec3                    % Call for the setup procedure
Variables are A, B, C, Ac, Bc, Cc
They may be renamed, if desired.
>> E = (A&(B|Cc))|(Ac&~(B|Cc));
>> F = (Ac&Bc)|(A&C);
>> pm = 0.01*[21 6 29 11 9 3 14 7];
>> PE = E*pm'                 % Picks out and adds the minterm probabilities
PE = 0.3600
>> PF = F*pm'
PF = 0.3700

solution of the software survey problem

We set up the matrix equations with the use of MATLAB and solve for the minterm probabilities. From these, we may solve for the desired “target” probabilities.

>> minvec3
Variables are A, B, C, Ac, Bc, Cc
They may be renamed, if desired.
Data vector combinations are:
>> DV = [A|Ac; A; B; C; A&B&C; Ac&Bc; (A&B)|(A&C)|(B&C); (A&Bc&C) - 2*(Ac&B&C)]
DV =
     1     1     1     1     1     1     1     1   % Data mixed numerical
     0     0     0     0     1     1     1     1   % and logical vectors
     0     0     1     1     0     0     1     1
     0     1     0     1     0     1     0     1
     0     0     0     0     0     0     0     1
     1     1     0     0     0     0     0     0
     0     0     0     1     0     1     1     1
     0     0     0    -2     0     1     0     0
>> DP = [1 0.8 0.65 0.3 0.1 0.05 0.65 0];   % Corresponding data probabilities
>> pm = DV\DP'                % Solution for minterm probabilities
pm = -0.0000                  % Roundoff -3.5 x 10-17
      0.0500
      0.1000
      0.0500
      0.2000
      0.1000
      0.4000
      0.1000
>> TV = [(A&B&Cc)|(A&Bc&C)|(Ac&B&C); Ac&Bc&C]   % Target combinations
TV =
     0     0     0     1     0     1     1     0   % Target vectors
     0     1     0     0     0     0     0     0
>> PV = TV*pm                 % Solution for target probabilities
PV = 0.5500                   % Target probabilities
     0.0500

An alternate approach

The previous procedure first obtained all minterm probabilities, then used these to determine probabilities for the target combinations. The following procedure does not require calculation of the minterm probabilities. Sometimes the data are not sufficient to calculate all minterm probabilities, yet are sufficient to allow determination of the target probabilities.
Suppose the data minterm vectors are linearly independent, and the target minterm vectors are linearly dependent upon the data vectors (i.e., the target vectors can be expressed as linear combinations of the data vectors). Now each target probability is the same linear combination of the data probabilities. To determine the linear combinations, solve the matrix equation

$TV = CT * DV$, which has the MATLAB solution CT = TV/DV

Then the matrix $tp$ of target probabilities is given by $tp = CT * DP'$. Continuing the MATLAB procedures above, we have:

>> CT = TV/DV;
>> tp = CT*DP'
tp = 0.5500
     0.0500

The procedure mincalc

The procedure mincalc performs calculations as in the preceding examples. The refinements consist of determining consistency and computability of various individual minterm probabilities and target probabilities. The consistency check is principally for negative minterm probabilities. The computability tests are tests for linear independence by means of calculation of ranks of various matrices. The procedure picks out the computable minterm probabilities and the computable target probabilities and calculates them.

To utilize the procedure, the problem must be formulated appropriately and precisely, as follows:

Use the MATLAB program minvecq to set minterm vectors for each of q basic events. Data consist of Boolean combinations of the basic events and the respective probabilities of these combinations. These are organized into two matrices:

• The data vector matrix $DV$ has the data Boolean combinations, one on each row. MATLAB translates each row into the minterm vector for the corresponding Boolean combination. The first entry (on the first row) is A | Ac (for $A \bigvee A^c$), which is the whole space. Its minterm vector consists of a row of ones.
• The data probability matrix $DP$ is a row matrix of the data probabilities. The first entry is one, the probability of the whole space.

The objective is to determine the probability of various target Boolean combinations. These are put into the target vector matrix $TV$, one on each row. MATLAB produces the minterm vector for each corresponding target Boolean combination.

Computational note. In mincalc, it is necessary to turn the arrays DV and TV consisting of zero-one patterns into zero-one matrices. This is accomplished for DV by an operation which converts the logical array into a numerical zero-one matrix, and similarly for TV. Both the original and the transformed matrices have the same zero-one pattern, but MATLAB interprets them differently.

Usual case. Suppose the data minterm vectors are linearly independent and the target vectors are each linearly dependent on the data minterm vectors. Then each target minterm vector is expressible as a linear combination of data minterm vectors. Thus, there is a matrix $CT$ such that $TV = CT * DV$. MATLAB solves this with the command CT = TV/DV. The target probabilities are the same linear combinations of the data probabilities. These are obtained by the MATLAB operation $tp = DP * CT'$.

Cautionary notes

The program mincalc depends upon the provision in MATLAB for solving equations when less than full data are available (based on the singular value decomposition). There are several situations which should be dealt with as special cases. It is usually a good idea to check results by hand to determine whether they are consistent with data. The checking by hand is usually much easier than obtaining the solution unaided, so that use of MATLAB is advantageous even in questionable cases.

The Zero Problem.
If the total probability of a group of minterms is zero, then it follows that the probability of each minterm in the group is zero. However, if mincalc does not have enough information to calculate the separate minterm probabilities in the case they are not zero, it will not pick up in the zero case the fact that the separate minterm probabilities are zero. It simply considers these minterm probabilities not computable. Linear dependence. In the case of linear dependence, the operation called for by the command CT = TV/DV may not be able to solve the equations. The matrix may be singular, or it may not be able to decide which of the redundant data equations to use. Should it provide a solution, the result should be checked with the aid of a minterm map. Consistency check. Since the consistency check is for negative minterms, if there are not enough data to calculate the minterm probabilities, there is no simple check on the consistency. Sometimes the probability of a target vector included in another vector will actually exceed what should be the larger probability. Without considerable checking, it may be difficult to determine consistency. In a few unusual cases, the command CT = TV/DV does not operate appropriately, even though the data should be adequate for the problem at hand. Apparently the approximation process does not converge. MATLAB Solutions for examples using mincalc software survey % file mcalc01 Data for software survey minvec3; DV = [A|Ac; A; B; C; A&B&C; Ac&Bc; (A&B)|(A&C)|(B&C); (A&Bc&C) - 2*(Ac&B&C)]; DP = [1 0.8 0.65 0.3 0.1 0.05 0.65 0]; TV = [(A&B&Cc)|(A&Bc&C)|(Ac&B&C); Ac&Bc&C]; disp('Call for mincalc') >> mcalc01 % Call for data Call for mincalc % Prompt supplied in the data file >> mincalc Data vectors are linearly independent Computable target probabilities 1.0000 0.5500 2.0000 0.0500 The number of minterms is 8 The number of available minterms is 8 Available minterm probabilities are in vector pma To view available minterm probabilities, call for PMA >> disp(PMA) % Optional call for minterm probabilities 0 0 1.0000 0.0500 2.0000 0.1000 3.0000 0.0500 4.0000 0.2000 5.0000 0.1000 6.0000 0.4000 7.0000 0.1000 computer survey % file mcalc02.m Data for computer survey minvec3 DV = [A|Ac; A; B; C; A&B&C; A&C; (A&B)|(A&C)|(B&C); ... 2*(B&C) - (A&C)]; DP = 0.001*[1000 565 515 151 51 124 212 0]; TV = [A|B|C; Ac&Bc&C]; disp('Call for mincalc') >> mcalc02 Call for mincalc >> mincalc Data vectors are linearly independent Computable target probabilities 1.0000 0.9680 2.0000 0.0160 The number of minterms is 8 The number of available minterms is 8 Available minterm probabilities are in vector pma To view available minterm probabilities, call for PMA >> disp(PMA) 0 0.0320 1.0000 0.0160 2.0000 0.3760 3.0000 0.0110 4.0000 0.3640 5.0000 0.0730 6.0000 0.0770 7.0000 0.0510 % file mcalc03.m Data for opinion survey minvec4 DV = [A|Ac; A; B; C; D; A&(B|Cc)&Dc; A|((B&C)|Dc) ; Ac&B&Cc&D; ... A&B&C&D; A&Bc&C; Ac&Bc&Cc&D; Ac&B&C; Ac&Bc&Dc; A&Cc; A&C&Dc; A&B&Cc&Dc]; DP = 0.001*[1000 200 500 300 700 55 520 200 15 30 195 120 120 ... 
140 25 20];
TV = [Ac&((B&Cc)|(Bc&C)); A|(B&Cc)];
disp('Call for mincalc')

>> mincalc03
Call for mincalc
>> mincalc
Data vectors are linearly independent
 Computable target probabilities
    1.0000    0.4000
    2.0000    0.4800
The number of minterms is 16
The number of available minterms is 16
Available minterm probabilities are in vector pma
To view available minterm probabilities, call for PMA
>> disp(minmap(pma))    % Display arranged as on minterm map
    0.0850    0.0800    0.0200    0.0200
    0.1950    0.2000    0.0500    0.0500
    0.0350    0.0350    0.0100    0.0150
    0.0850    0.0850    0.0200    0.0150

The procedure mincalct

A useful modification, which we call mincalct, computes the available target probabilities, without checking and computing the minterm probabilities. This procedure assumes a data file similar to that for mincalc, except that it does not need the target matrix $TV$, since it prompts for target Boolean combination inputs. The procedure mincalct may be used after mincalc has performed its operations to calculate probabilities for additional target combinations.

(continued) Additional target datum for the opinion survey

Suppose mincalc has been applied to the data for the opinion survey and that it is desired to determine $P(AD \cup BD^c)$. It is not necessary to recalculate all the other quantities. We may simply use the procedure mincalct and input the desired Boolean combination at the prompt.

>> mincalct
Enter matrix of target Boolean combinations  (A&D)|(B&Dc)
 Computable target probabilities
   1.0000    0.2850
Repeated calls for mincalct may be used to compute other target probabilities.
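As a final illustration of the linear algebra these procedures automate, the following minimal sketch applies the alternate approach described above to the computer survey of Example 2.1.3, using only base MATLAB. The rows of DV and TV are the minterm vectors for the data and target combinations in the mcalc02 file above, and the computed target probabilities agree with the mcalc02 output shown earlier.

```
% "Alternate approach" (CT = TV/DV) for the computer survey, in base MATLAB
DV = [1 1 1 1 1 1 1 1;        % P(Omega)
      0 0 0 0 1 1 1 1;        % P(A)
      0 0 1 1 0 0 1 1;        % P(B)
      0 1 0 1 0 1 0 1;        % P(C)
      0 0 0 0 0 0 0 1;        % P(ABC)
      0 0 0 0 0 1 0 1;        % P(AC)
      0 0 0 1 0 1 1 1;        % P(AB u AC u BC)
      0 0 0 2 0 -1 0 1];      % 2P(BC) - P(AC)
DP = 0.001*[1000 565 515 151 51 124 212 0];
TV = [0 1 1 1 1 1 1 1;        % A u B u C
      0 1 0 0 0 0 0 0];       % Ac Bc C  (laptop only)
CT = TV/DV;                   % target vectors as combinations of data vectors
tp = CT*DP'                   % target probabilities: 0.9680 and 0.0160
```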
Exercise \(1\) Consider the class \(\{A, B, C, D\}\) of events. Suppose the probability that at least one of the events \(A\) or \(C\) occurs is 0.75 and the probability that at least one of the four events occurs is 0.90. Determine the probability that neither of the events \(A\) or \(C\) but at least one of the events \(B\) or \(D\) occurs. Answer Use the pattern \(P(E \cup F) = P(E) + P(E^c F)\) and \((A \cup C)^c = A^c C^c\). \(P(A \cup C \cup B \cup D) = P(A \cup C) + P(A^c C^c (B \cup D))\), so that \(P(A^c C^c (B \cup D)) = 0.90 - 0.75 = 0.15\) Exercise \(2\) 1. Use minterm maps to show which of the following statements are true for any class \(\{A, B, C\}\): a. \(A \cup (BC)^c = A \cup B \cup B^c C^c\) b. \((A \cup B)^c = A^c C \cup B^c C\) c. \(A \subset AB \cup AC \cup BC\) 2. Repeat part (1) using indicator functions (evaluated on minterms). 3. Repeat part (1) using the m-procedure minvec3 and MATLAB logical operations. Answer We use the MATLAB procedure, which displays the essential patterns. ```minvec3 Variables are A, B, C, Ac, Bc, Cc They may be renamed, if desired. E = A|~(B&C); F = A|B|(Bc&Cc); disp([E;F]) 1 1 1 0 1 1 1 1 % Not equal 1 0 1 1 1 1 1 1 G = ~(A|B); H = (Ac&C)|(Bc&C); disp([G;H]) 1 1 0 0 0 0 0 0 % Not equal 0 1 0 1 0 1 0 0 K = (A&B)|(A&C)|(B&C); disp([A;K]) 0 0 0 0 1 1 1 1 % A not contained in K 0 0 0 1 0 1 1 1``` Exercise \(3\) Use (1) minterm maps, (2) indicator functions (evaluated on minterms), (3) the m-procedure minvec3 and MATLAB logical operations to show that a. \(A(B \cup C^c) \cup A^c BC \subset A (BC \cup C^c) \cup A^c B\) b. \(A \cup A^c BC = AB \cup BC \cup AC \cup AB^c C^c\) Answer We use the MATLAB procedure, which displays the essential patterns. ```minvec3 Variables are A, B, C, Ac, Bc, Cc They may be renamed, if desired. E = (A&(B|Cc))|(Ac&B&C); F = (A&((B&C)|Cc))|(Ac&B); disp([E;F]) 0 0 0 1 1 0 1 1 % E subset of F 0 0 1 1 1 0 1 1 G = A|(Ac&B&C); H = (A&B)|(B&C)|(A&C)|(A&Bc&Cc); disp([G;H]) 0 0 0 1 1 1 1 1 % G = H 0 0 0 1 1 1 1 1``` Exercise \(4\) Minterms for the events \(\{A, B, C, D\}\), arranged as on a minterm map are ``` 0.0168 0.0072 0.0252 0.0108 0.0392 0.0168 0.0588 0.0252 0.0672 0.0288 0.1008 0.0432 0.1568 0.0672 0.2352 0.1008``` What is the probability that three or more of the events occur on a trial? Of exactly two? Of two or fewer? Answer We use mintable(4) and determine positions with correct number(s) of ones (number of occurrences). An alternate is to use minvec4 and express the Boolean combinations which give the correct number(s) of ones. ```npr02_04 Minterm probabilities are in pm. Use mintable(4) a = mintable(4); s = sum(a); % Number of ones in each minterm position P1 = (s>=3)*pm' % Select and add minterm probabilities P1 = 0.4716 P2 = (s==2)*pm' P2 = 0.3728 P3 = (s<=2)*pm' P3 = 0.5284``` Exercise \(5\) Minterms for the events \(\{A, B, C, D, E\}\), arranged as on a minterm map are ``` 0.0216 0.0324 0.0216 0.0324 0.0144 0.0216 0.0144 0.0216 0.0144 0.0216 0.0144 0.0216 0.0096 0.0144 0.0096 0.0144 0.0504 0.0756 0.0504 0.0756 0.0336 0.0504 0.0336 0.0504 0.0336 0.0504 0.0336 0.0504 0.0224 0.0336 0.0224 0.0336``` What is the probability that three or more of the events occur on a trial? Of exactly four? Of three or fewer? Of either two or four? Answer We use mintable(5) and determine positions with correct number(s) of ones (number of occurrences). ```npr02_05 Minterm probabilities are in pm. 
Use mintable(5) a = mintable(5); s = sum(a); % Number of ones in each minterm position P1 = (s>=3)*pm' % Select and add minterm probabilities P1 = 0.5380 P2 = (s==4)*pm' P2 = 0.1712 P3 = (s<=3)*pm' P3 = 0.7952 P4 = ((s==2)|(s==4))*pm' P4 = 0.4784``` Exercise \(6\) Suppose \(P(A \cup B^c C) = 0.65\), \(P(AC) = 0.2\), \(P(A^c B) = 0.25\) \(P(A^c C^c) = 0.25\), \(P(BC) = 0.30\). Determine \(P((AC^c \cup A^c C) B^c)\). Then determine \(P((AB^c \cup A^c) C^c)\) and \(P(A^c(B \cup C^c))\), if possible. Answer ```% file npr02_06.m % Data file % Data for Exercise 2.3.6. minvec3 DV = [A|Ac; A|(Bc&C); A&C; Ac&B; Ac&Cc; B&Cc]; DP = [1 0.65 0.20 0.25 0.25 0.30]; TV = [((A&Cc)|(Ac&C))&Bc; ((A&Bc)|Ac)&Cc; Ac&(B|Cc)]; disp('Call for mincalc') npr02_06 % Call for data Variables are A, B, C, Ac, Bc, Cc They may be renamed, if desired. Call for mincalc mincalc Data vectors are linearly independent Computable target probabilities 1.0000 0.3000 % The first and third target probability 3.0000 0.3500 % is calculated. Check with minterm map. The number of minterms is 8 The number of available minterms is 4 Available minterm probabilities are in vector pma To view available minterm probabilities, call for PMA``` Exercise \(7\) Suppose \(P((AB^c \cup A^cB)C) = 0.4\), \(P(AB) = 0.2\), \(P(A^cC^c) = 0.3\), \(P(A) = 0.6\), \(P(C) = 0.5\), and \(P(AB^cC^c) = 0.1\). Determine \(P(A^c C^c \cup AC)\), \(P(AB^c \cup A^c)C^c)\), and \(P(A^c(B \cup C^c))\), if possible. Answer ```% file npr02_07.m % Data for Exercise 2.3.7. minvec3 DV = [A|Ac; ((A&Bc)|(Ac&B))&C; A&B; Ac&Cc; A; C; A&Bc&Cc]; DP = [ 1 0.4 0.2 0.3 0.6 0.5 0.1]; TV = [(Ac&Cc)|(A&C); ((A&Bc)|Ac)&Cc; Ac&(B|Cc)]; disp('Call for mincalc') npr02_07 % Call for data Variables are A, B, C, Ac, Bc, Cc They may be renamed, if desired. Call for mincalc mincalc Data vectors are linearly independent Computable target probabilities 1.0000 0.7000 % All target probabilities calculable 2.0000 0.4000 % even though not all minterms are available 3.0000 0.4000 The number of minterms is 8 The number of available minterms is 6 Available minterm probabilities are in vector pma To view available minterm probabilities, call for PMA``` Exercise \(8\) Suppose \(P(A) = 0.6\), \(P(C) = 0.4\), \(P(AC) = 0.3\), \(P(A^cB) = 0.2\) and \(P(A^cB^cC^c) = 0.1\). Determine \(P((A \cup B)C^c\), \(P(AC^c \cup A^c C)\), and \(P(AC^c \cup A^cB)\), if possible. Answer ```% file npr02_08.m % Data for Exercise 2.3.8. minvec3 DV = [A|Ac; A; C; A&C; Ac&B; Ac&Bc&Cc]; DP = [ 1 0.6 0.4 0.3 0.2 0.1]; TV = [(A|B)&Cc; (A&Cc)|(Ac&C); (A&Cc)|(Ac&B)]; disp('Call for mincalc') npr02_08 % Call for data Variables are A, B, C, Ac, Bc, Cc They may be renamed, if desired. Call for mincalc mincalc Data vectors are linearly independent Computable target probabilities 1.0000 0.5000 % All target probabilities calculable 2.0000 0.4000 % even though not all minterms are available 3.0000 0.5000 The number of minterms is 8 The number of available minterms is 4 Available minterm probabilities are in vector pma To view available minterm probabilities, call for PMA``` Exercise \(9\) Suppose \(P(A) = 0.5\), \(P(AB) = P(AC) = 0.3\), and \(P(ABC^c) = 0.1\). Determine \(P(A(BC^c)^c\) and \(P(AB \cup AC \cup BC)\). Then repeat with additional data \(P(A^cB^cC^c) = 0.1\) and \(P(A^c BC) = 0.05\) Answer ```% file npr02_09.m % Data for Exercise 2.3.9. 
minvec3 DV = [A|Ac; A; A&B; A&C; A&B&Cc]; DP = [ 1 0.5 0.3 0.3 0.1]; TV = [A&(~(B&Cc)); (A&B)|(A&C)|(B&C)]; disp('Call for mincalc') % Modification for part 2 % DV = [DV; Ac&Bc&Cc; Ac&B&C]; % DP = [DP 0.1 0.05]; npr02_09 % Call for data Variables are A, B, C, Ac, Bc, Cc They may be renamed, if desired. Call for mincalc mincalc Data vectors are linearly independent Computable target probabilities 1.0000 0.4000 % Only the first target probability calculable The number of minterms is 8 The number of available minterms is 4 Available minterm probabilities are in vector pma To view available minterm probabilities, call for PMA DV = [DV; Ac&Bc&Cc; Ac&B&C]; % Modification of data DP = [DP 0.1 0.05]; mincalc Data vectors are linearly independent Computable target probabilities 1.0000 0.4000 % Both target probabilities calculable 2.0000 0.4500 % even though not all minterms are available The number of minterms is 8 The number of available minterms is 6 Available minterm probabilities are in vector pma To view available minterm probabilities, call for PMA``` Exercise \(10\) Given \(P(A) = 0.6\), \(P(A^c B^c) = 0.2\), \(P(AC^c) = 0.4\), and \(P(ACD^c) = 0.1\). Determine \(P(A^c B \cup A(C^c \cup D))\). Answer ```% file npr02_10.m % Data for Exercise 2.3.10. minvec4 DV = [A|Ac; A; Ac&Bc; A&Cc; A&C&Dc]; DP = [1 0.6 0.2 0.4 0.1]; TV = [(Ac&B)|(A&(Cc|D))]; disp('Call for mincalc') npr02_10 Variables are A, B, C, D, Ac, Bc, Cc, Dc They may be renamed, if desired. Call for mincalc mincalc Data vectors are linearly independent Computable target probabilities 1.0000 0.7000 % Checks with minterm map solution The number of minterms is 16 The number of available minterms is 0 Available minterm probabilities are in vector pma To view available minterm probabilities, call for PMA``` Exercise \(11\) A survey of a represenative group of students yields the following information: • 52 percent are male • 85 percent live on campus • 78 percent are male or are active in intramural sports (or both) • 30 percent live on campus but are not active in sports • 32 percent are male, live on campus, and are active in sports • 8 percent are male and live off campus • 17 percent are male students inactive in sports 1. What is the probability that a randomly chosen student is male and lives on campus? 2. What is the probability of a male, on campus student who is not active in sports? 3. What is the probability of a female student active in sports? Answer ```% file npr02_11.m % Data for Exercise 2.3.11. % A = male; B = on campus; C = active in sports minvec3 DV = [A|Ac; A; B; A|C; B&Cc; A&B&C; A&Bc; A&Cc]; DP = [ 1 0.52 0.85 0.78 0.30 0.32 0.08 0.17]; TV = [A&B; A&B&Cc; Ac&C]; disp('Call for mincalc') npr02_11 Variables are A, B, C, Ac, Bc, Cc They may be renamed, if desired. Call for mincalc mincalc Data vectors are linearly independent Computable target probabilities 1.0000 0.4400 2.0000 0.1200 3.0000 0.2600 The number of minterms is 8 The number of available minterms is 8 Available minterm probabilities are in vector pma To view available minterm probabilities, call for PMA``` Exercise \(12\) A survey of 100 persons of voting age reveals that 60 are male, 30 of whom do not identify with a political party; 50 are members of a political party; 20 nonmembers of a party voted in the last election, 10 of whom are female. How many nonmembers of a political party did not vote? Suggestion Express the numbers as a fraction, and treat as probabilities. Answer ```% file npr02_12.m % Data for Exercise 2.3.12. 
% A = male; B = party member; C = voted last election minvec3 DV = [A|Ac; A; A&Bc; B; Bc&C; Ac&Bc&C]; DP = [ 1 0.60 0.30 0.50 0.20 0.10]; TV = [Bc&Cc]; disp('Call for mincalc') npr02_12 Variables are A, B, C, Ac, Bc, Cc They may be renamed, if desired. Call for mincalc mincalc Data vectors are linearly independent Computable target probabilities 1.0000 0.3000 The number of minterms is 8 The number of available minterms is 4 Available minterm probabilities are in vector pma To view available minterm probabilities, call for PMA``` Exercise \(13\) During a period of unsettled weather, let A be the event of rain in Austin, B be the event of rain in Houston, and C be the event of rain in San Antonio. Suppose: \(P(AB) = 0.35\), \(P(AB^c) = 0.15\), \(P(AC) = 0.20\), \(P(AB^c \cup A^cB) = 0.45\) \(P(BC) = 0.30\) \(P(B^c C) = 0.05\) \(P(A^c B^c C^c) = 0.15\) 1. What is the probability of rain in all three cities? 2. What is the probability of rain in exactly two of the three cities? 3. What is the probability of rain in exactly one of the cities? Answer ```% file npr02_13.m % Data for Exercise 2.3.13. % A = rain in Austin; B = rain in Houston; % C = rain in San Antonio minvec3 DV = [A|Ac; A&B; A&Bc; A&C; (A&Bc)|(Ac&B); B&C; Bc&C; Ac&Bc&Cc]; DP = [ 1 0.35 0.15 0.20 0.45 0.30 0.05 0.15]; TV = [A&B&C; (A&B&Cc)|(A&Bc&C)|(Ac&B&C); (A&Bc&Cc)|(Ac&B&Cc)|(Ac&Bc&C)]; disp('Call for mincalc') npr02_13 Variables are A, B, C, Ac, Bc, Cc They may be renamed, if desired. Call for mincalc mincalc Data vectors are linearly independent Computable target probabilities 1.0000 0.2000 2.0000 0.2500 3.0000 0.4000 The number of minterms is 8 The number of available minterms is 8 Available minterm probabilities are in vector pma To view available minterm probabilities, call for PMA``` Exercise \(14\) One hundred students are questioned about their course of study and plans for graduate study. Let \(A =\) the event the student is male; \(B =\) the event the student is studying engineering; \(C=\) the event the student plans at least one year of foreign language; \(D =\) the event the student is planning graduate study (including professional school). The results of the survey are: There are 55 men students; 23 engineering students, 10 of whom are women; 75 students will take foreign language classes, including all of the women; 26 men and 19 women plan graduate study; 13 male engineering students and 8 women engineering students plan graduate study; 20 engineering students will take a foreign language and plan graduate study; 5 non engineering students plan graduate study but no foreign language courses; 11 non engineering, women students plan foreign language study and graduate study. 1. What is the probability of selecting a student who plans foreign language classes and graduate study? 2. What is the probability of selecting a women engineer who does not plan graduate study? 3. What is the probability of selecting a male student who either studies a foreign language but does not intend graduate study or will not study a foreign language but plans graduate study? Answer ```% file npr02_14.m % Data for Exercise 2.3.14. % A = male; B = engineering; % C = foreign language; D = graduate study minvec4 DV = [A|Ac; A; B; Ac&B; C; Ac&C; A&D; Ac&D; A&B&D; ... Ac&B&D; B&C&D; Bc&Cc&D; Ac&Bc&C&D]; DP = [1 0.55 0.23 0.10 0.75 0.45 0.26 0.19 0.13 0.08 0.20 0.05 0.11]; TV = [C&D; Ac&Dc; A&((C&Dc)|(Cc&D))]; disp('Call for mincalc') npr02_14 Variables are A, B, C, D, Ac, Bc, Cc, Dc They may be renamed, if desired. 
Call for mincalc mincalc Data vectors are linearly independent Computable target probabilities 1.0000 0.3900 2.0000 0.2600 % Third target probability not calculable The number of minterms is 16 The number of available minterms is 4 Available minterm probabilities are in vector pma To view available minterm probabilities, call for PMA``` Exercise \(15\) A survey of 100 students shows that: 60 are men students; 55 students live on campus, 25 of whom are women; 40 read the student newspaper regularly, 25 of whom are women; 70 consider themselves reasonably active in student affairs—50 of these live on campus; 35 of the reasonably active students read the newspaper regularly; All women who live on campus and 5 who live off campus consider themselves to be active; 10 of the on-campus women readers consider themselves active, as do 5 of the off campus women; 5 men are active, off-campus, non readers of the newspaper. 1. How many active men are either not readers or off campus? 2. How many inactive men are not regular readers? Answer ```% file npr02_15.m % Data for Exercise 2.3.15. % A = men; B = on campus; C = readers; D = active minvec4 DV = [A|Ac; A; B; Ac&B; C; Ac&C; D; B&D; C&D; ... Ac&B&D; Ac&Bc&D; Ac&B&C&D; Ac&Bc&C&D; A&Bc&Cc&D]; DP = [1 0.6 0.55 0.25 0.40 0.25 0.70 0.50 0.35 0.25 0.05 0.10 0.05 0.05]; TV = [A&D&(Cc|Bc); A&Dc&Cc]; disp('Call for mincalc') npr02_15 Variables are A, B, C, D, Ac, Bc, Cc, Dc They may be renamed, if desired. Call for mincalc mincalc Data vectors are linearly independent Computable target probabilities 1.0000 0.3000 2.0000 0.2500 The number of minterms is 16 The number of available minterms is 8 Available minterm probabilities are in vector pma To view available minterm probabilities, call for PMA``` Exercise \(16\) A television station runs a telephone survey to determine how many persons in its primary viewing area have watched three recent special programs, which we call a, b, and c. Of the 1000 persons surveyed, the results are: 221 have seen at least a; 209 have seen at least b; 112 have seen at least c; 197 have seen at least two of the programs; 45 have seen all three; 62 have seen at least a and c; the number having seen at least a and b is twice as large as the number who have seen at least b and c. • (a) How many have seen at least one special? • (b) How many have seen only one special program? Answer ```% file npr02_16.m % Data for Exercise 2.3.16. minvec3 DV = [A|Ac; A; B; C; (A&B)|(A&C)|(B&C); A&B&C; A&C; (A&B)-2*(B&C)]; DP = [ 1 0.221 0.209 0.112 0.197 0.045 0.062 0]; TV = [A|B|C; (A&Bc&Cc)|(Ac&B&Cc)|(Ac&Bc&C)]; npr02_16 Variables are A, B, C, Ac, Bc, Cc They may be renamed, if desired. Call for mincalc mincalc Data vectors are linearly independent Computable target probabilities 1.0000 0.3000 2.0000 0.1030 The number of minterms is 8 The number of available minterms is 8 Available minterm probabilities are in vector pma To view available minterm probabilities, call for PMA``` Exercise \(17\) An automobile safety inspection station found that in 1000 cars tested: • 100 needed wheel alignment, brake repair, and headlight adjustment • 325 needed at least two of these three items • 125 needed headlight and brake work • 550 needed at wheel alignment 1. How many needed only wheel alignment? 2. How many who do not need wheel alignment need one or none of the other items? Answer ```% file npr02_17.m % Data for Exercise 2.3.17. 
% A = alignment; B = brake work; C = headlight minvec3 DV = [A|Ac; A&B&C; (A&B)|(A&C)|(B&C); B&C; A ]; DP = [ 1 0.100 0.325 0.125 0.550]; TV = [A&Bc&Cc; Ac&(~(B&C))]; disp('Call for mincalc') npr02_17 Variables are A, B, C, Ac, Bc, Cc They may be renamed, if desired. Call for mincalc mincalc Data vectors are linearly independent Computable target probabilities 1.0000 0.2500 2.0000 0.4250 The number of minterms is 8 The number of available minterms is 3 Available minterm probabilities are in vector pma To view available minterm probabilities, call for PMA``` Exercise \(18\) Suppose \(P(A(B \cup C)) = 0.3\), \(P(A^c) = 0.6\), and \(P(A^c B^c C^c) = 0.1\). Determine \(P(B \cup C)\), \(P((AB \cup A^c B^c)C^c \cup AC)\), and \(P(A^c (B \cup C^c))\), if possible. Repeat the problem with he additional data \(P(A^c BC) = 0.2\) and \(P(A^cB) = 0.3\). Answer ```% file npr02_18.m % Date for Exercise 2.3.18. minvec3 DV = [A|Ac; A&(B|C); Ac; Ac&Bc&Cc]; DP = [ 1 0.3 0.6 0.1]; TV = [B|C; (((A&B)|(Ac&Bc))&Cc)|(A&C); Ac&(B|Cc)]; disp('Call for mincalc') % Modification % DV = [DV; Ac&B&C; Ac&B]; % DP = [DP 0.2 0.3]; npr02_18 Variables are A, B, C, Ac, Bc, Cc They may be renamed, if desired. Call for mincalc mincalc Data vectors are linearly independent Computable target probabilities 1.0000 0.8000 2.0000 0.4000 The number of minterms is 8 The number of available minterms is 2 Available minterm probabilities are in vector pma To view available minterm probabilities, call for PMA DV = [DV; Ac&B&C; Ac&B]; % Modified data DP = [DP 0.2 0.3]; mincalc % New calculation Data vectors are linearly independent Computable target probabilities 1.0000 0.8000 2.0000 0.4000 3.0000 0.4000 The number of minterms is 8 The number of available minterms is 5 Available minterm probabilities are in vector pma To view available minterm probabilities, call for PMA``` Exercise \(19\) A computer store sells computers, monitors, printers. A customer enters the store. Let A, B, C be the respective events the customer buys a computer, a monitor, a printer. Assume the following probabilities: • The probability \(P(AB)\) of buying both a computer and a monitor is 0.49. • The probability \(P(ABC^c)\) of buying both a computer and a monitor but not a printer is 0.17. • The probability \(P(AC)\) of buying both a computer and a printer is 0.45. • The probability \(P(BC)\) of buying both a monitor and a printer is 0.39 • The probability \(P(AC^c \bigvee A^cC)\) of buying a computer or a printer, but not both is 0.50. • The probability \(P(AB^c \bigvee A^cB)\) of buying a computer or a monitor, but not both is 0.43. • The probability \(P(BC^c \bigvee B^c C)\) of buying a monitor or a printer, but not both is 0.43. 1. What is the probability \(P(A)\), \(P(B)\), or \(P(C)\) of buying each? 2. What is the probability of buying exactly two of the three items? 3. What is the probability of buying at least two? 4. What is the probability of buying all three? Answer ```% file npr02_19.m % Data for Exercise 2.3.19. % A = computer; B = monitor; C = printer minvec3 DV = [A|Ac; A&B; A&B&Cc; A&C; B&C; (A&Cc)|(Ac&C); ... (A&Bc)|(Ac&B); (B&Cc)|(Bc&C)]; DP = [1 0.49 0.17 0.45 0.39 0.50 0.43 0.43]; TV = [A; B; C; (A&B&Cc)|(A&Bc&C)|(Ac&B&C); (A&B)|(A&C)|(B&C); A&B&C]; disp('Call for mincalc') npr02_19 Variables are A, B, C, Ac, Bc, Cc They may be renamed, if desired. 
Call for mincalc mincalc Data vectors are linearly independent Computable target probabilities 1.0000 0.8000 2.0000 0.6100 3.0000 0.6000 4.0000 0.3700 5.0000 0.6900 6.0000 0.3200 The number of minterms is 8 The number of available minterms is 8 Available minterm probabilities are in vector pma To view available minterm probabilities, call for PMA``` Exercise \(20\) Data are \(P(A) = 0.232\), \(P(B) = 0.228\), \(P(ABC) = 0.045\), \(P(AC) = 0.062\), \(P(AB \cup AC \cup BC) = 0.197\) and \(P(BC0 = 2P(AC)\). Determine \(P(A \cup B \cup C)\) and \(P(A^c B^c C)\), if possible. Repeat, with the additional data \(P(C) = 0.230\). Answer ```% file npr02_20.m % Data for Exercise 2.3.20. minvec3 DV = [A|Ac; A; B; A&B&C; A&C; (A&B)|(A&C)|(B&C); B&C - 2*(A&C)]; DP = [ 1 0.232 0.228 0.045 0.062 0.197 0]; TV = [A|B|C; Ac&Bc&Cc]; disp('Call for mincalc') % Modification % DV = [DV; C]; % DP = [DP 0.230 ]; npr02_20 Variables are A, B, C, Ac, Bc, Cc They may be renamed, if desired. mincalc Data vectors are linearly independent Data probabilities are INCONSISTENT The number of minterms is 8 The number of available minterms is 6 Available minterm probabilities are in vector pma To view available minterm probabilities, call for PMA disp(PMA) 2.0000 0.0480 3.0000 -0.0450 % Negative minterm probabilities indicate 4.0000 -0.0100 % inconsistency of data 5.0000 0.0170 6.0000 0.1800 7.0000 0.0450 DV = [DV; C]; DP = [DP 0.230]; mincalc Data vectors are linearly independent Data probabilities are INCONSISTENT The number of minterms is 8 The number of available minterms is 8 Available minterm probabilities are in vector pma To view available minterm probabilities, call for PMA``` Exercise \(21\) Data are: \(P(A) = 0.4\), \(P(AB) = 0.3\), \(P(ABC) = 0.25\), \(P(C) = 0.65\), \(P(A^cC^c) = 0.3\). Determine available minterm probabilities and the following, if computable: \(P(AC^c \cup A^c C)\), \(P(A^cB^c)\), \(P(A \cup B)\), \(P(AB^c)\) With only six items of data (including \(P(\Omega) = P(A \bigvee A^c) = 1\), not all minterms are available. Try the additional data \(P(A^cB C^c) = 0.1\) and \(P(A^cB^c) = 0.3\). Are these consistent and linearly independent? Are all minterm probabilities available? Answer ```% file npr02_21.m % Data for Exercise 2.3.21. minvec3 DV = [A|Ac; A; A&B; A&B&C; C; Ac&Cc]; DP = [ 1 0.4 0.3 0.25 0.65 0.3 ]; TV = [(A&Cc)|(Ac&C); Ac&Bc; A|B; A&Bc]; disp('Call for mincalc') % Modification % DV = [DV; Ac&B&Cc; Ac&Bc]; % DP = [DP 0.1 0.3 ]; ``` ```npr02_21 Variables are A, B, C, Ac, Bc, Cc They may be renamed, if desired. Call for mincalc mincalc Data vectors are linearly independent Computable target probabilities 1.0000 0.3500 4.0000 0.1000 The number of minterms is 8 The number of available minterms is 4 Available minterm probabilities are in vector pma To view available minterm probabilities, call for PMA DV = [DV; Ac&B&Cc; Ac&Bc]; DP = [DP 0.1 0.3 ]; mincalc Data vectors are linearly independent Computable target probabilities 1.0000 0.3500 2.0000 0.3000 3.0000 0.7000 4.0000 0.1000 The number of minterms is 8 The number of available minterms is 8 Available minterm probabilities are in vector pma To view available minterm probabilities, call for PMA``` Exercise \(22\) Repeat Exercise with \(P(AB)\) changed from 0.3 to 0.5. What is the result? Explain the reason for this result. Answer ```% file npr02_22.m % Data for Exercise 2.3.22. 
minvec3 DV = [A|Ac; A; A&B; A&B&C; C; Ac&Cc]; DP = [ 1 0.4 0.5 0.25 0.65 0.3 ]; TV = [(A&Cc)|(Ac&C); Ac&Bc; A|B; A&Bc]; disp('Call for mincalc') % Modification % DV = [DV; Ac&B&Cc; Ac&Bc]; % DP = [DP 0.1 0.3 ]; ``` ```npr02_22 Variables are A, B, C, Ac, Bc, Cc They may be renamed, if desired. Call for mincalc mincalc Data vectors are linearly independent Data probabilities are INCONSISTENT The number of minterms is 8 The number of available minterms is 4 Available minterm probabilities are in vector pma To view available minterm probabilities, call for PMA disp(PMA) 4.0000 -0.2000 5.0000 0.1000 6.0000 0.2500 7.0000 0.2500 DV = [DV; Ac&B&Cc; Ac&Bc]; DP = [DP 0.1 0.3 ]; mincalc Data vectors are linearly independent Data probabilities are INCONSISTENT The number of minterms is 8 The number of available minterms is 8 Available minterm probabilities are in vector pma To view available minterm probabilities, call for PMA disp(PMA) 0 0.2000 1.0000 0.1000 2.0000 0.1000 3.0000 0.2000 4.0000 -0.2000 5.0000 0.1000 6.0000 0.2500 7.0000 0.2500``` Exercise \(23\) Repeat Exercise with the original data probability matrix, but with \(AB\) replaced by \(AC\) in the data vector matrix. What is the result? Does mincalc work in this case? Check results on a minterm map. Answer ```% file npr02_23.m % Data for Exercise 2.3.23. minvec3 DV = [A|Ac; A; A&C; A&B&C; C; Ac&Cc]; DP = [ 1 0.4 0.3 0.25 0.65 0.3 ]; TV = [(A&Cc)|(Ac&C); Ac&Bc; A|B; A&Bc]; disp('Call for mincalc') % Modification % DV = [DV; Ac&B&Cc; Ac&Bc]; % DP = [DP 0.1 0.3 ]; npr02_23 Variables are A, B, C, Ac, Bc, Cc They may be renamed, if desired. Call for mincalc mincalc Data vectors are NOT linearly independent Warning: Rank deficient, rank = 5 tol = 5.0243e-15 Computable target probabilities 1.0000 0.4500 The number of minterms is 8 The number of available minterms is 2 Available minterm probabilities are in vector pma To view available minterm probabilities, call for PMA DV = [DV; Ac&B&Cc; Ac&Bc]; DP = [DP 0.1 0.3 ]; mincalc Data vectors are NOT linearly independent Warning: Matrix is singular to working precision. Computable target probabilities 1 Inf % Note that p(4) and p(7) are given in data 2 Inf 3 Inf The number of minterms is 8 The number of available minterms is 6 Available minterm probabilities are in vector pma To view available minterm probabilities, call for PMA```
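Exercise 23 shows mincalc reporting that the data vectors are not linearly independent. If you want to see the dependence directly, the rank of the matrix of minterm-indicator rows can be checked with built-in MATLAB tools. The sketch below is only illustrative: it assumes that mintable(3), described in the treatment of minterms, returns the 3 x 8 zero-one table of minterm membership for A, B, C, and the variable names are choices made here, not part of the exercise files.

```
% Sketch: check linear dependence of the Exercise 23 data vectors directly.
% Assumes mintable(3) gives the 3 x 8 zero-one minterm table for A, B, C.
T = mintable(3);
A = T(1,:);  B = T(2,:);  C = T(3,:);
DV = double([ones(1,8); A; A&C; A&B&C; C; ~A & ~C]);  % indicator rows for the data events
rank(DV)     % 5, although there are 6 rows, so the data vectors are dependent
```

The sixth row is a linear combination of the others (the indicator of $A^cC^c$ equals that of $\Omega$ minus $A$ minus $C$ plus $AC$), which is why mincalc reports rank 5.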
The probability P(A) of an event A is a measure of the likelihood that the event will occur on any trial. Sometimes partial information determines that an event C has occurred. Given this information, it may be necessary to reassign the likelihood for each event A. This leads to the notion of conditional probability. For a fixed conditioning event C, this assignment to all events constitutes a new probability measure which has all the properties of the original probability measure. In addition, because of the way it is derived from the original, the conditional probability measure has a number of special properties which are important in applications.

03: Conditional Probability

The original or prior probability measure utilizes all available information to make probability assignments $P(A)$, $P(B)$, etc., subject to the defining conditions (P1), (P2), and (P3). The probability $P(A)$ indicates the likelihood that event A will occur on any trial. Frequently, new information is received which leads to a reassessment of the likelihood of event A. For example:

• An applicant for a job as a manager of a service department is being interviewed. His résumé shows adequate experience and other qualifications. He conducts himself with ease and is quite articulate in his interview. He is considered a prospect highly likely to succeed. The interview is followed by an extensive background check. His credit rating, because of bad debts, is found to be quite low. With this information, the likelihood that he is a satisfactory candidate changes radically.
• A young woman is seeking to purchase a used car. She finds one that appears to be an excellent buy. It looks “clean,” has reasonable mileage, and is a dependable model of a well known make. Before buying, she has a mechanic friend look at it. He finds evidence that the car has been wrecked with possible frame damage that has been repaired. The likelihood the car will be satisfactory is thus reduced considerably.
• A physician is conducting a routine physical examination on a patient in her seventies. She is somewhat overweight. He suspects that she may be prone to heart problems. Then he discovers that she exercises regularly, eats a low fat, high fiber, varied diet, and comes from a family in which survival well into their nineties is common. On the basis of this new information, he reassesses the likelihood of heart problems.

New, but partial, information determines a conditioning event $C$, which may call for reassessing the likelihood of event $A$. For one thing, this means that $A$ occurs iff the event $AC$ occurs. Effectively, this makes $C$ a new basic space. The new unit of probability mass is $P(C)$. How should the new probability assignments be made? One possibility is to make the new assignment to $A$ proportional to the probability $P(AC)$. These considerations and experience with the classical case suggest the following procedure for reassignment. Although such a reassignment is not logically necessary, subsequent developments give substantial evidence that this is the appropriate procedure.

Definition

If $C$ is an event having positive probability, the conditional probability of $A$, given $C$, is

$P(A|C) = \dfrac{P(AC)}{P(C)}$

For a fixed conditioning event $C$, we have a new likelihood assignment to the event $A$.
Now $P(A|C) \ge 0$, $P(\Omega |C) = 1$, and $P(\bigvee_j A_j | C) = \dfrac{P(\bigvee_j A_j C}{P(C)} = \sum_j P(A_j C)/P(C) = \sum_j P(A_j | C)$ Thus, the new function $P(\cdot | C)$ satisfies the three defining properties (P1), (P2), and (P3) for probability, so that for fixed C, we have a new probability measure, with all the properties of an ordinary probability measure. Remark. When we write $P(A|C)$ we are evaluating the likelihood of event $A$ when it is known that event $C$ has occurred. This is not the probability of a conditional event $A|C$. Conditional events have no meaning in the model we are developing. Example $1$ Conditional probabilities from joint frequency data A survey of student opinion on a proposed national health care program included 250 students, of whom 150 were undergraduates and 100 were graduate students. Their responses were categorized Y (affirmative), N (negative), and D (uncertain or no opinion). Results are tabulated below. Y N D U 60 40 50 G 70 20 10 Suppose the sample is representative, so the results can be taken as typical of the student body. A student is picked at random. Let Y be the event he or she is favorable to the plan, N be the event he or she is unfavorable, and D is the event of no opinion (or uncertain). Let U be the event the student is an undergraduate and G be the event he or she is a graduate student. The data may reasonably be interpreted $P(G) = 100/250$, $P(U) = 150/250$, $P(Y) = (60 + 70)/250$, $P(YU) = 60/250$, etc. Then $P(Y|U) = \dfrac{P(YU)}{P(U)} = \dfrac{60/250}{150/250} = \dfrac{60}{150}$ Similarly, we can calculate $P(N|U) = 40/150$, $P(D|U) = 50/150$, $P(Y|G) = 70/100$, $P(N|G) = 20/100$, $P(D|G) = 10/100$ We may also calculate directly $P(U|Y) = 60/130$, $P(G|N) = 20/60$, etc. Conditional probability often provides a natural way to deal with compound trials carried out in several steps. Example $2$ Jet aircraft with two engines An aircraft has two jet engines. It will fly with only one engine operating. Let $F_1$ be the event one engine fails on a long distance flight, and $F_2$ the event the second fails. Experience indicates that $P(F_1) = 0.0003$. Once the first engine fails, added load is placed on the second, so that $P(F_2|F_1) = 0.001$. Now the second engine can fail only if the other has already failed. Thus $F_2 \subset F_1$ so that $P(F_2) = P(F_1 F_2) = P(F_1) P(F_2|F_1) = 3 \times 10^{-7}$ Thus reliability of any one engine may be less than satisfactory, yet the overall reliability may be quite high. The following example is taken from the UMAP Module 576, by Paul Mullenix, reprinted in UMAP Journal, vol 2, no. 4. More extensive treatment of the problem is given there. Example $3$ Responses to a sensitive question on a survey In a survey, if answering “yes” to a question may tend to incriminate or otherwise embarrass the subject, the response given may be incorrect or misleading. Nonetheless, it may be desirable to obtain correct responses for purposes of social analysis. The following device for dealing with this problem is attributed to B. G. Greenberg. By a chance process, each subject is instructed to do one of three things: 1. Respond with an honest answer to the question. 2. Respond “yes” to the question, regardless of the truth in the matter. 3. Respond “no” regardless of the true answer. 
Let A be the event the subject is told to reply honestly, B be the event the subject is instructed to reply “yes,” and C be the event the answer is to be “no.” The probabilities $P(A)$, $P(B)$, and $P(C)$ are determined by a chance mechanism (i.e., a fraction $P(A)$ selected randomly are told to answer honestly, etc.). Let $E$ be the event the reply is “yes.” We wish to calculate $P(E|A)$, the probability the answer is “yes” given the response is honest.

Solution

Since $E = EA \bigvee B$, we have

$P(E) = P(EA) + P(B) = P(E|A) P(A) + P(B)$

which may be solved algebraically to give

$P(E|A) = \dfrac{P(E) - P(B)}{P(A)}$

Suppose there are 250 subjects. The chance mechanism is such that $P(A) = 0.7$, $P(B) = 0.14$, and $P(C) = 0.16$. There are 62 responses “yes,” which we take to mean $P(E) = 62/250$. According to the pattern above

$P(E|A) = \dfrac{62/250 - 14/100}{70/100} = \dfrac{27}{175} \approx 0.154$

The formulation of conditional probability assumes the conditioning event C is well defined. Sometimes there are subtle difficulties. It may not be entirely clear from the problem description what the conditioning event is. This is usually due to some ambiguity or misunderstanding of the information provided.

Example $4$ What is the conditioning event?

Five equally qualified candidates for a job, Jim, Paul, Richard, Barry, and Evan, are identified on the basis of interviews and told that they are finalists. Three of these are to be selected at random, with results to be posted the next day. One of them, Jim, has a friend in the personnel office. Jim asks the friend to tell him the name of one of those selected (other than himself). The friend tells Jim that Richard has been selected. Jim analyzes the problem as follows.

Analysis

Let $A_i$, $1 \le i \le 5$ be the event the $i$th of these is hired ($A_1$ is the event Jim is hired, $A_3$ is the event Richard is hired, etc.). Now $P(A_i)$ (for each $i$) is the probability that finalist $i$ is in one of the combinations of three from five. Thus, Jim's probability of being hired, before receiving the information about Richard, is

$P(A_1) = \dfrac{1 \times C(4,2)}{C(5,3)} = \dfrac{6}{10} = P(A_i)$, $1 \le i \le 5$

The information that Richard is one of those hired is information that the event $A_3$ has occurred. Also, for any pair $i \ne j$ the number of combinations of three from five including these two is just the number of ways of picking one from the remaining three. Hence,

$P(A_1 A_3) = \dfrac{C(3,1)}{C(5,3)} = \dfrac{3}{10} = P(A_i A_j), i \ne j$

The conditional probability

$P(A_1 | A_3) = \dfrac{P(A_1A_3)}{P(A_3)} = \dfrac{3/10}{6/10} = 1/2$

This is consistent with the fact that if Jim knows that Richard is hired, then there are two to be selected from the four remaining finalists, so that

$P(A_1 | A_3) = \dfrac{1 \times C(3,1)}{C(4,2)} = \dfrac{3}{6} = 1/2$

Discussion

Although this solution seems straightforward, it has been challenged as being incomplete. Many feel that there must be information about how the friend chose to name Richard. Many would make an assumption somewhat as follows. The friend took the three names selected: if Jim was one of them, Jim's name was removed and an equally likely choice among the other two was made; otherwise, the friend selected on an equally likely basis one of the three to be hired. Under this assumption, the information assumed is an event $B_3$ which is not the same as $A_3$.
In fact, computation (see Example 5, below) shows $P(A_1|B_3) = \dfrac{6}{10} = P(A_1) \ne P(A_1|A_3)$ Both results are mathematically correct. The difference is in the conditioning event, which corresponds to the difference in the information given (or assumed). Some properties In addition to its properties as a probability measure, conditional probability has special properties which are consequences of the way it is related to the original probability measure $P(\cdot)$. The following are easily derived from the definition of conditional probability and basic properties of the prior probability measure, and prove useful in a variety of problem situations. (CP1) Product rule If $P(ABCD) > 0$, then $P(ABCD) = P(A) P(B|A) P(C|AB) P(D|ABC).$ Derivation The defining expression may be written in product form: $P(AB) = P(A) P(B|A)$. Likewise $P(ABC) = P(A) \dfrac{P(AB)}{P(A)} \cdot \dfrac{P(ABC)}{P(AB)} = P(A) P(B|A) P(C|AB)$ and $P(ABCD) = P(A) \dfrac{P(AB)}{P(A)} \cdot \dfrac{P(ABC)}{P(AB)} \cdot \dfrac{P(ABCD)}{P(ABC)} = P(A) P(B|A) P(C|AB) P(D|ABC)$ This pattern may be extended to the intersection of any finite number of events. Also, the events may be taken in any order. — □ Example $5$ Selection of items from a lot An electronics store has ten items of a given type in stock. One is defective. Four successive customers purchase one of the items. Each time, the selection is on an equally likely basis from those remaining. What is the probability that all four customes get good items? Solution Let $E_i$ be the event the $i$th customer receives a good item. Then the first chooses one of the nine out of ten good ones, the second chooses one of the eight out of nine goood ones, etc., so that $P(E_1E_2E_3E_4) = P(E_1)P(E_2|E_1)P(E_3|E_1E_2)P(E_4|E_1E_2E_3) = \dfrac{9}{10} \cdot \dfrac{8}{9} \cdot \dfrac{7}{8} \cdot \dfrac{6}{7} = \dfrac{6}{10}$ Note that this result could be determined by a combinatorial argument: under the assumptions, each combination of four of ten is equally likely; the number of combinations of four good ones is the number of combinations of four of the nine. Hence $P(E_1E_2E_3E_4) = \dfrac{C(9,4)}{C(10,4)} = \dfrac{126}{210} = 3/5$ Example $6$ A selection problem Three items are to be selected (on an equally likely basis at each step) from ten, two of which are defective. Determine the probability that the first and third selected are good. Solution Let $G_i$, $1 \le i \le 3$ be the even the $i$th unit selected is good. Then $G_1 G_3 = G_1 G_2 G_3 \bigvee G_1 G_2^c G_3$. By the product rule $P(G_1 G_3) = P(G_1) P(G_2|G_1) P(G_3|G_1 G_2) + P(G_1) P(G_2^c | G_1) P(G_3|G_1 G_2^c) = \dfrac{8}{10} \cdot \dfrac{7}{9} \cdot \dfrac{6}{8} + \dfrac{8}{10} \cdot \dfrac{2}{9} \cdot \dfrac{7}{8} = \dfrac{28}{45} \approx 0.6$ (CP2) Law of total probability Suppose the class $\{A_i: 1 \le i \le n\}$ of events is mutually exclusive and every outcome in E is in one of these events. Thus, $E = A_1 E \bigvee A_2 E \bigvee \cdot \cdot \cdot \bigvee A_n E$, a disjoint union. Then $P(E) = P(E|A_1) P(A_1) + P(E|A_2) P(A_2) + \cdot \cdot \cdot + P(E|A_n) P(A_n)$ Example $7$ a compound experiment Five cards are numbered one through five. A two-step selection procedure is carried out as follows. 1. Three cards are selected without replacement, on an equally likely basis. • If card 1 is drawn, the other two are put in a box • If card 1 is not drawn, all three are put in a box 2. 
One of the cards in the box is drawn on an equally likely basis (from either two or three).

Let $A_i$ be the event the $i$th card is drawn on the first selection and let $B_i$ be the event the card numbered $i$ is drawn on the second selection (from the box). Determine $P(B_5)$, $P(A_1B_5)$, and $P(A_1|B_5)$.

Solution

From Example 4, we have $P(A_i) = 6/10$ and $P(A_iA_j) = 3/10$. This implies

$P(A_i A_j^c) = P(A_i) - P(A_i A_j) = 3/10$

Now we can draw card five on the second selection only if it is selected on the first drawing, so that $B_5 \subset A_5$. Also $A_5 = A_1 A_5 \bigvee A_1^c A_5$. We therefore have $B_5 = B_5 A_5 = B_5 A_1 A_5 \bigvee B_5 A_1^c A_5$. By the law of total probability (CP2),

$P(B_5) = P(B_5|A_1A_5) P(A_1A_5) + P(B_5|A_1^cA_5) P(A_1^c A_5) = \dfrac{1}{2} \cdot \dfrac{3}{10} + \dfrac{1}{3} \cdot \dfrac{3}{10} = \dfrac{1}{4}$

Also, since $A_1B_5 = A_1A_5B_5$,

$P(A_1B_5) = P(A_1A_5B_5) = P(A_1A_5)P(B_5|A_1A_5) = \dfrac{3}{10} \cdot \dfrac{1}{2} = \dfrac{3}{20}$

We thus have

$P(A_1|B_5) = \dfrac{3/20}{5/20} = \dfrac{6}{10} = P(A_1)$

Occurrence of event $B_5$ has no effect on the likelihood of the occurrence of $A_1$. This condition is examined more thoroughly in the chapter on "Independence of Events".

Often in applications data lead to conditioning with respect to an event but the problem calls for “conditioning in the opposite direction.”

Example $8$ Reversal of conditioning

Students in a freshman mathematics class come from three different high schools. Their mathematical preparation varies. In order to group them appropriately in class sections, they are given a diagnostic test. Let $H_i$ be the event that a student tested is from high school $i$, $1 \le i \le 3$. Let F be the event the student fails the test. Suppose data indicate

$P(H_1) = 0.2$, $P(H_2) = 0.5$, $P(H_3) = 0.3$, $P(F|H_1) = 0.10$, $P(F|H_2) = 0.02$, $P(F|H_3) = 0.06$

A student passes the exam. Determine for each $i$ the conditional probability $P(H_i|F^c)$ that the student is from high school $i$.

Solution

$P(F^c) = P(F^c|H_1) P(H_1) + P(F^c|H_2) P(H_2) + P(F^c|H_3) P(H_3) = 0.90 \cdot 0.2 + 0.98 \cdot 0.5 + 0.94 \cdot 0.3 = 0.952$

Then

$P(H_1|F^c) = \dfrac{P(F^c H_1)}{P(F^c)} = \dfrac{P(F^c|H_1) P(H_1)}{P(F^c)} = \dfrac{180}{952} = 0.1891$

Similarly,

$P(H_2|F^c) = \dfrac{P(F^c|H_2)P(H_2)}{P(F^c)} = \dfrac{590}{952} = 0.5147$ and $P(H_3|F^c) = \dfrac{P(F^c|H_3) P(H_3)}{P(F^c)} = \dfrac{282}{952} = 0.2962$

The basic pattern utilized in the reversal is the following.

(CP3) Bayes' rule If $E \subset \bigvee_{i = 1}^{n} A_i$ (as in the law of total probability), then

$P(A_i |E) = \dfrac{P(A_i E)}{P(E)} = \dfrac{P(E|A_i) P(A_i)}{P(E)}$ $1 \le i \le n$

The law of total probability yields $P(E)$.

Such reversals are desirable in a variety of practical situations.

Example $9$ A compound selection and reversal

Begin with items in two lots:

1. Three items, one defective.
2. Four items, one defective.

One item is selected from lot 1 (on an equally likely basis); this item is added to lot 2; a selection is then made from lot 2 (also on an equally likely basis). This second item is good. What is the probability the item selected from lot 1 was good?

Solution

Let $G_1$ be the event the first item (from lot 1) was good, and $G_2$ be the event the second item (from the augmented lot 2) is good. We want to determine $P(G_1|G_2)$.
Now the data are interpreted as $P(G_1) = 2/3$, $P(G_2|G_1) = 4/5$, $P(G_2|G_1^c) = 3/5$ By the law of total probability (CP2), $P(G_2) = P(G_1) P(G_2|G_1) + P(G_1^c)P(G_2|G_1^c) = \dfrac{2}{3} \cdot \dfrac{4}{5} + \dfrac{1}{3} \cdot \dfrac{3}{5} = \dfrac{11}{15}$ By Bayes' rule (CP3), $P(G_1|G_2) = \dfrac{P(G_2|G_1) P(G_1)}{P(G_2)} = \dfrac{4/5 \times 2/3}{11/15} = \dfrac{8}{11} \approx 0.73$ Example $10$ Additional problems requiring reversals • Medical tests. Suppose D is the event a patient has a certain disease and T is the event a test for the disease is positive. Data are usually of the form: prior probability $P(D)$ (or prior odds $P(D)/P(D^c)$), probability $P(T|D^c)$ of a false positive, and probability $P(T^c|D)$ of a false negative. The desired probabilities are $P(D|T)$ and $P(D^c|T^c)$. • Safety alarm. If D is the event a dangerous condition exists (say a steam pressure is too high) and T is the event the safety alarm operates, then data are usually of the form $P(D)$, $P(T|D^c)$, and $P(T^c|D)$, or equivalently (e.g., $P(T^c|D^c)$ and $P(T|D)$). Again, the desired probabilities are that the safety alarms signals correctly, $P(D|T)$ and $P(D^c|T^c)$. • Job success. If H is the event of success on a job, and E is the event that an individual interviewed has certain desirable characteristics, the data are usually prior $P(H)$ and reliability of the characteristics as predictors in the form $P(H)$ and $P(E|H^c)$. The desired probability is $P(H|E)$. • Presence of oil. If H is the event of the presence of oil at a proposed well site, and E is the event of certain geological structure (salt dome or fault), the data are usually $P(H)$ (or the odds), $P(E|H)$, and $P(E|H^c)$. The desired probability is $P(H|E)$. • Market condition. Before launching a new product on the national market, a firm usually examines the condition of a test market as an indicator of the national market. If H is the event the national market is favorable and E is the event the test market is favorable, data are a prior estimate $P(H)$ of the likelihood the national market is sound, and data $P(E|H)$ and $P(E|H^c)$ indicating the reliability of the test market. What is desired is $P(H|E)$, the likelihood the national market is favorable, given the test market is favorable. The calculations, as in Example 3.8, are simple but can be tedious. We have an m-procedure called bayes to perform the calculations easily. The probabilities $P(A_i)$ are put into a matrix PA and the conditional probabilities $P(E|A_i)$ are put into matrix PEA. The desired probabilities $P(A_i|E)$ and $PA_i|E^c)$ are calculated and displayed Example $11$ matlab calculations for >> PEA = [0.10 0.02 0.06]; >> PA = [0.2 0.5 0.3]; >> bayes Requires input PEA = [P(E|A1) P(E|A2) ... P(E|An)] and PA = [P(A1) P(A2) ... P(An)] Determines PAE = [P(A1|E) P(A2|E) ... P(An|E)] and PAEc = [P(A1|Ec) P(A2|Ec) ... P(An|Ec)] Enter matrix PEA of conditional probabilities PEA Enter matrix PA of probabilities PA P(E) = 0.048 P(E|Ai) P(Ai) P(Ai|E) P(Ai|Ec) 0.1000 0.2000 0.4167 0.1891 0.0200 0.5000 0.2083 0.5147 0.0600 0.3000 0.3750 0.2962 Various quantities are in the matrices PEA, PA, PAE, PAEc, named above The procedure displays the results in tabular form, as shown. In addition, the various quantities are in the workspace in the matrices named, so that they may be used in further calculations without recopying. The following variation of Bayes' rule is applicable in many practical situations. 
(CP3*) Ratio form of Bayes' rule

$\dfrac{P(A|C)}{P(B|C)} = \dfrac{P(AC)}{P(BC)} = \dfrac{P(C|A)}{P(C|B)} \cdot \dfrac{P(A)}{P(B)}$

The left hand member is called the posterior odds, which is the odds after knowledge of the occurrence of the conditioning event. The second fraction in the right hand member is the prior odds, which is the odds before knowledge of the occurrence of the conditioning event $C$. The first fraction in the right hand member is known as the likelihood ratio. It is the ratio of the probabilities (or likelihoods) of $C$ for the two different probability measures $P(\cdot |A)$ and $P(\cdot |B)$.

Example $12$ A performance test

As a part of a routine maintenance procedure, a computer is given a performance test. The machine seems to be operating so well that the prior odds it is satisfactory are taken to be ten to one. The test has probability 0.05 of a false positive and 0.01 of a false negative. A test is performed. The result is positive. What are the posterior odds the device is operating properly?

Solution

Let $S$ be the event the computer is operating satisfactorily and let $T$ be the event the test is favorable. The data are $P(S)/P(S^c) = 10$, $P(T|S^c) = 0.05$, and $P(T^c|S) = 0.01$. Then by the ratio form of Bayes' rule

$\dfrac{P(S|T)}{P(S^c|T)} = \dfrac{P(T|S)}{P(T|S^c)} \cdot \dfrac{P(S)}{P(S^c)} = \dfrac{0.99}{0.05} \cdot 10 = 198$ so that $P(S|T) = \dfrac{198}{199} = 0.9950$

The following property serves to establish in the chapters on "Independence of Events" and "Conditional Independence" a number of important properties for the concept of independence and of conditional independence of events.

(CP4) Some equivalent conditions If $0 < P(A) < 1$ and $0 < P(B) < 1$, then

$P(A|B) * P(A)$ iff $P(B|A) * P(B)$ iff $P(AB) * P(A) P(B)$

and

$P(AB) * P(A) P(B)$ iff $P(A^cB^c) * P(A^c) P(B^c)$ iff $P(AB^c) \diamond P(A) P(B^c)$

where * is $<, \le, =, \ge,$ or $>$ and $\diamond$ is $>, \ge, =, \le,$ or $<$, respectively.

Because of the role of this property in the theory of independence and conditional independence, we examine the derivation of these results.

VERIFICATION of (CP4)

$P(AB) * P(A) P(B)$ iff $P(A|B) * P(A)$ (divide by $P(B)$; may exchange $A$ and $A^c$)

$P(AB) * P(A) P(B)$ iff $P(B|A) * P(B)$ (divide by $P(A)$; may exchange $B$ and $B^c$)

$P(AB) * P(A) P(B)$ iff $[P(A) - P(AB^c)] * P(A)[1 - P(B^c)]$ iff $-P(AB^c) * - P(A)P(B^c)$ iff $P(AB^c) \diamond P(A) P(B^c)$

We may use the third of these to get $P(AB) * P(A) P(B)$ iff $P(AB^c) \diamond P(A)P(B^c)$ iff $P(A^cB^c) * P(A^c) P(B^c)$ — □

A number of important and useful propositions may be derived from these.

$P(A|B) + P(A^c|B) = 1$, but, in general, $P(A|B) + P(A|B^c) \ne 1$.
$P(A|B) > P(A)$ iff $P(A|B^c) < P(A)$.
$P(A^c|B) > P(A^c)$ iff $P(A|B) < P(A)$.
$P(A|B) > P(A)$ iff $P(A^c|B^c) > P(A^c)$.

VERIFICATION — Exercises (see problem set) — □

Repeated conditioning

Suppose conditioning by the event $C$ has occurred. Additional information is then received that event D has occurred. We have a new conditioning event $CD$. There are two possibilities:

Reassign the conditional probabilities. $P_C(A)$ becomes $P_C(A|D) = \dfrac{P_C(AD)}{P_C(D)} = \dfrac{P(ACD)}{P(CD)}$

Reassign the total probabilities: $P(A)$ becomes $P_{CD}(A) = P(A|CD) = \dfrac{P(ACD)}{P(CD)}$

Basic result: $P_C(A|D) = P(A|CD) = P_D(A|C)$. Thus repeated conditioning by two events may be done in any order, or may be done in one step. This result extends easily to repeated conditioning by any finite number of events.
This result is important in extending the concept of "Independence of Events" to "Conditional Independence". These conditions are important for many problems of probable inference.
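Before turning to the problems, the ratio-form calculation of Example 12 can be checked in a couple of MATLAB lines. This is only a sketch of the arithmetic already carried out above, not a new procedure; the variable names are chosen here for readability.

```
% Posterior odds for Example 12, straight from the ratio form (CP3*)
prior_odds = 10;                 % P(S)/P(S^c)
PT_S  = 1 - 0.01;                % P(T|S), since P(T^c|S) = 0.01
PT_Sc = 0.05;                    % P(T|S^c), the false positive probability
post_odds = (PT_S/PT_Sc)*prior_odds     % = 198
PS_T = post_odds/(1 + post_odds)        % = 0.9950
```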
Exercise $1$ Given the following data: $P(A) = 0.55$, $P(AB) = 0.30$, $P(BC) = 0.20$, $P(A^c \cup BC) = 0.55$, $P(A^c BC^c) = 0.15$ Determine, if possible, the conditional probability $P(A^c|B) = P(A^cB)/P(B)$. Answer % file npr03_01.m % Data for Exercise 3.2.1. minvec3 DV = [A|Ac; A; A&B; B&C; Ac|(B&C); Ac&B&Cc]; DP = [ 1 0.55 0.30 0.20 0.55 0.15 ]; TV = [Ac&B; B]; disp('Call for mincalc') npr03_01 Variables are A, B, C, Ac, Bc, Cc They may be renamed, if desired. Call for mincalc mincalc Data vectors are linearly independent Computable target probabilities 1.0000 0.2500 2.0000 0.5500 The number of minterms is 8 The number of available minterms is 4 - - - - - - - - - - - - P = 0.25/0.55 P = 0.4545 Exercise $2$ In Exercise 11 from "Problems on Minterm Analysis," we have the following data: A survey of a represenative group of students yields the following information: • 52 percent are male • 85 percent live on campus • 78 percent are male or are active in intramural sports (or both) • 30 percent live on campus but are not active in sports • 32 percent are male, live on campus, and are active in sports • 8 percent are male and live off campus • 17 percent are male students inactive in sports Let A = male, B = on campus, C = active in sports. 1. A student is selected at random. He is male and lives on campus. What is the (conditional) probability that he is active in sports? 2. A student selected is active in sports. What is the(conditional) probability that she is a female who lives on campus? Answer npr02_11 - - - - - - - - - - - - mincalc - - - - - - - - - - - - mincalct Enter matrix of target Boolean combinations [A&B&C; A&B; Ac&B&C; C] Computable target probabilities 1.0000 0.3200 2.0000 0.4400 3.0000 0.2300 4.0000 0.6100 PC_AB = 0.32/0.44 PC_AB = 0.7273 PAcB_C = 0.23/0.61 PAcB_C = 0.3770 Exercise $3$ In a certain population, the probability a woman lives to at least seventy years is 0.70 and is 0.55 that she will live to at least eighty years. If a woman is seventy years old, what is the conditional probability she will survive to eighty years? Note that if $A \subset B$ then $P(AB) = P(A)$. Answer Let $A=$ event she lives to seventy and $B=$ event she lives to eighty. Since $B \subset A$, $P(B|A) = P(AB)/P(A) = P(B)/P(A) = 55/70$. Exercise $4$ From 100 cards numbered 00, 01, 02, $\cdot\cdot\cdot$, 99, one card is drawn. Suppose Ai is the event the sum of the two digits on a card is $i$, $0 \le i \le 18$, and $B_j$ is the event the product of the two digits is $j$. Determine $P(A_i|B_0)$ for each possible $i$. Answer $B_0$ is the event one of the first ten is draw. $A_i B_0$ is the event that the card with numbers $0i$ is drawn. $P(a_i|B_0) = (1/100)/(1/10) = 1/10$ for each $i$, 0 through 9. Exercise $5$ Two fair dice are rolled. 1. What is the (conditional) probability that one turns up two spots, given they show different numbers? 2. What is the (conditional) probability that the first turns up six, given that the sum is $k$, for each $k$ from two through 12? 3. What is the (conditional) probability that at least one turns up six, given that the sum is $k$, for each $k$ from two through 12? Answer a. There are $6 \times 5$ ways to choose all different. There are $2 \times 5$ ways that they are different and one turns up two spots. The conditional probability is 2/6. b. Let $A_6$ = event first is a six and $S_k =$ event the sum is $k$. Now $A_6S_k = \emptyset$ for $k \le 6$. 
A table of sums shows $P(A_6S_k) = 1/36$ and $P(S_k) = 6/36, 5/36, 4/36, 3/36, 2/36, 1/36$ for $k = 7$ through 12, respectively. Hence $P(A_6|S_k) = 1/6, 1/5. 1/4, 1/3. 1/2, 1$, respectively. c. If $AB_6$ is the event at least one is a six, then $AB_6S_k) = 2/36$ for $k = 7$ through 11 and $P(AB_6S_12) = 1/36$. Thus, the conditional probabilities are 2/6, 2/5, 2/4, 2/3, 1, 1, respectively. Exercise $6$ Four persons are to be selected from a group of 12 people, 7 of whom are women. 1. What is the probability that the first and third selected are women? 2. What is the probability that three of those selected are women? 3. What is the (conditional) probability that the first and third selected are women, given that three of those selected are women? Answer $P(W_1W_3) = P(W_1W_2W_3) + P(W_1W_2^c W_3) = \dfrac{7}{12} \cdot \dfrac{6}{11} \cdot \dfrac{5}{10} + \dfrac{7}{12} \cdot \dfrac{5}{11} \cdot \dfrac{6}{10} = \dfrac{7}{22}$ Exercise $7$ Twenty percent of the paintings in a gallery are not originals. A collector buys a painting. He has probability 0.10 of buying a fake for an original but never rejects an original as a fake, What is the (conditional) probability the painting he purchases is an original? Answer Let $B=$ the event the collector buys, and $G=$ the event the painting is original. Assume $P(B|G) = 1$ and $P(B|G^c) = 0.1$. If $P(G) = 0.8$, then $P(G|B) = \dfrac{P(GB)}{P(B)} = \dfrac{P(B|G) P(G)}{P(B|G)P(G) + P(B|G^c)P(G^c)} = \dfrac{0.8}{0.8 + 0.1 \cdot 0.2} = \dfrac{40}{41}$ Exercise $8$ Five percent of the units of a certain type of equipment brought in for service have a common defect. Experience shows that 93 percent of the units with this defect exhibit a certain behavioral characteristic, while only two percent of the units which do not have this defect exhibit that characteristic. A unit is examined and found to have the characteristic symptom. What is the conditional probability that the unit has the defect, given this behavior? Answer Let $D=$ the event the unit is defective and $C=$ the event it has the characteristic. Then $P(D) = 0.05$, $P(C|D) = 0.93$, and $P(C|D^c) = 0.02$. $P(D|C) = \dfrac{P(C|D) P(D)}{P(C|D) P(D) + P(C|D^c) P(D^c)} = \dfrac{0.93 \cdot 0.05}{0.93 \cdot 0.05 + 0.02 \cdot 0.95} = \dfrac{93}{131}$ Exercise $9$ A shipment of 1000 electronic units is received. There is an equally likely probability that there are 0, 1, 2, or 3 defective units in the lot. If one is selected at random and found to be good, what is the probability of no defective units in the lot? Answer Let $D_k =$ the event of $k$ defective and $G$ be the event a good one is chosen. $P(D_0|G) = \dfrac{P(G|D_0) P(D_0)}{P(G|D_0) P(D_0) + P(G|D_1) P(D_1) + P(G|D_2) P(D_2) + P(G|D_3) P(D_3)}$ $= \dfrac{1 \cdot 1/4}{(1/4)(1 + 999/1000 + 998/1000 + 997/1000)} = \dfrac{1000}{3994}$ Exercise $10$ Data on incomes and salary ranges for a certain population are analyzed as follows. $S_1$= event annual income is less than $25,000; $S_2$= event annual income is between$25,000 and $100,000; $S_3$= event annual income is greater than$100,000. $E_1$= event did not complete college education; $E_2$= event of completion of bachelor's degree; $E_3$= event of completion of graduate or professional degree program. Data may be tabulated as follows: $P(E_1) = 0.65$, $P(E_2) = 0.30$ and $P(E_3) = 0.05$. $P(S_i|E_j)$ $S_1$ $S_2$ $S_3$ $E_1$ 0.85 0.10 0.05 $E_2$ 0.10 0.80 0.10 $E_3$ 0.05 0.50 0.45 $P(S_i)$ 0.50 0.40 0.10 1. Determine $P(E_3 S_3)$. 2. Suppose a person has a university education (no graduate study). 
What is the (conditional) probability that he or she will make \$25,000 or more? 3. Find the total probability that a person's income category is at least as high as his or her educational level. Answer a. $P(E_3S_3) = P(S_3|E_3)P(E_3) = 0.45 \cdot 0.05 = 0.0225$ b. $P(S_2 \vee S_3|E_2) = 0.80 + 0.10 = 0.90$ c. $p = (0.85 + 0.10 + 0.05) \cdot 0.65 + (0.80 + 0.10) \cdot 0.30 + 0.45 \cdot 0.05 = 0.9425$ Exercise $11$ In a survey, 85 percent of the employees say they favor a certain company policy. Previous experience indicates that 20 percent of those who do not favor the policy say that they do, out of fear of reprisal. What is the probability that an employee picked at random really does favor the company policy? It is reasonable to assume that all who favor say so. Answer $P(S) = 0.85$, $P(S|F^c) = 0.20$. Also, reasonable to assume $P(S|F) = 1$. $P(S) = P(S|F) P(F) + P(S|F^c) [1 - P(F)]$ implies $P(F) = \dfrac{P(S) - P(S|F^c)}{1 - P(S|F^c)} = \dfrac{13}{16}$ Exercise $12$ A quality control group is designing an automatic test procedure for compact disk players coming from a production line. Experience shows that one percent of the units produced are defective. The automatic test procedure has probability 0.05 of giving a false positive indication and probability 0.02 of giving a false negative. That is, if $D$ is the event a unit tested is defective, and $T$ is the event that it tests satisfactory, then $P(T|D) = 0.05$ and $P(T^c|D^c) = 0.02$. Determine the probability $P(D^c|T)$ that a unit which tests good is, in fact, free of defects. Answer $\dfrac{D^c|T}{P(D|T)} = \dfrac{P(T|D^c)P(D^c)}{P(T|D)P(D)} = \dfrac{0.98 \cdot 0.99}{0.05 \cdot 0.01} = \dfrac{9702}{5}$ $P(D^c|T) = \dfrac{9702}{9707} = 1 - \dfrac{5}{9707}$ Exercise $13$ Five boxes of random access memory chips have 100 units per box. They have respectively one, two, three, four, and five defective units. A box is selected at random, on an equally likely basis, and a unit is selected at random therefrom. It is defective. What are the (conditional) probabilities the unit was selected from each of the boxes? Answer $H_i =$ the event from box $i$. $P(H_i) = 1/5$ and $P(D|H_i) = i/100$. $P(H_i|D) = \dfrac{P(D|H_i) P(H_i)}{\sum P(D|H_i) P(H_j)} = i/15$, $1 \le i \le 5$ Exercise $14$ Two percent of the units received at a warehouse are defective. A nondestructive test procedure gives two percent false positive indications and five percent false negative. Units which fail to pass the inspection are sold to a salvage firm. This firm applies a corrective procedure which does not affect any good unit and which corrects 90 percent of the defective units. A customer buys a unit from the salvage firm. It is good. What is the (conditional) probability the unit was originally defective? Answer Let $T$ = event test indicates defective, $D$ = event initially defective, and $G =$ event unit purchased is good. Data are $P(D) = 0.02$, $P(T^c|D) = 0.02$, $P(T|D^c) = 0.05$, $P(GT^c) = 0$, $P(G|DT) = 0.90$, $P(G|D^cT) = 1$ $P(D|G) = \dfrac{P(GD)}{P(G)}$, $P(GD) = P(GTD) = P(D) P(T|D) P(G|TD)$ $P(G) = P(GT) = P(GDT) + P(GD^c T) = P(D) P(T|D) P(G|TD) + P(D^c) P(T|D^c) P(G|TD^c)$ $P(D|G) = \dfrac{0.02 \cdot 0.98 \cdot 0.90}{0.02 \cdot 0.98 \cdot 0.90 + 0.98 \cdot 0.05 \cdot 1.00} = \dfrac{441}{1666}$ Exercise $15$ At a certain stage in a trial, the judge feels the odds are two to one the defendent is guilty. It is determined that the defendent is left handed. 
An investigator convinces the judge this is six times more likely if the defendent is guilty than if he were not. What is the likelihood, given this evidence, that the defendent is guilty? Answer Let $G$ = event the defendent is guilty, $L$ = the event the defendent is left handed. Prior odds: $P(G)/P(G^c) = 2$. Result of testimony: $P(L|G)/P(L|G^c) = 6$. $\dfrac{P(G|L)}{P(G^c|L)} = \dfrac{P(G)}{P(G^c)} \cdot \dfrac{P(L|G)}{P(L|G^c)} = 2 \cdot 6 = 12$ $P(G|L) = 12/13$ Exercise $16$ Show that if $P(A|C) > P(B|C)$ and $P(A|C^c) > P(B|C^c)$, then $P(A) > P(B)$. Is the converse true? Prove or give a counterexample. Answer $P(A) = P(A|C) P(C) + P(A|C^c) P(C^c) > P(B|C) P(C) + P(B|C^c) P(C^c) = P(B)$. The converse is not true. Consider $P(C) = P(C^c) = 0.5$, $P(A|C) = 1/4$. $P(A|C^c) = 3/4$, $P(B|C) = 1/2$, and $P(B|C^c) = 1/4$. Then $1/2 = P(A) = \dfrac{1}{2} (1/4 + 3/4) > \dfrac{1}{2} (1/2 + 1/4) = P(B) = 3/8$ But $P(A|C) < P(B|C)$. Exercise $17$ Since $P(\cdot |B)$ is a probability measure for a given $B$, we must have $P(A|B) + P(A^c|B) = 1$. Construct an example to show that in general $P(A|B) + P(A|B^c) \ne 1$. Answer Suppose $A \subset B$ with $P(A) < P(B)$. Then $P(A|B) = P(A)/P(B) < 1$ and $P(A|B^c) = 0$ so the sum is less than one. Exercise $18$ Use property (CP4) to show a. $P(A|B) > P(A)$ iff $P(A|B^c) < P(A)$ b. $P(A^c|B) > P(A^c)$ iff $P(A|B) < P(A)$ c. $P(A|B) > P(A)$ iff $P(A^c|B^c) > P(A^c)$ Answer a. $P(A|B) > P(A)$ iff $P(AB) > P(A) P(B)$ iff $P(AB^c) < P(A) P(B^c)$ iff $P(A|B^c) < P(A)$ b. $P(A^c|B) > P(A^c)$ iff $P(A^c B) > P(A^c) P(B)$ iff $P(AB) < P(A) P(B)$ iff $P(A|B) < P(A)$ c. $P(A|B) > P(A)$ iff $P(AB) > P(A) P(B)$ iff $P(A^c B^c) > P(A^c) P(B^c)$ iff $P(A^c|B^c) > P(A^c)$ Exercise $19$ Show that $P(A|B) \ge (P(A) + P(B) - 1)/P(B)$. Answer $1 \ge P(A \cup B) = P(A) + P(B) - P(AB) = P(A) + P(B) - P(A|B) P(B)$. Simple algebra gives the desired result. Exercise $20$ Show that $P(A|B) = P(A|BC) P(C|B) + P(A|BC^c) P(C^c|B)$. Answer $P(A|B) = \dfrac{P(AB)}{P(B)} = \dfrac{P(ABC) + P(ABC^c)}{P(B)}$ $= \dfrac{P(A|BC) P(BC) + P(A|BC^c) P(BC^c)}{P(B)} = P(A|BC) P(C|B) + P(A|BC^c) P(C^c|B)$ Exercise $21$ An individual is to select from among $n$ alternatives in an attempt to obtain a particular one. This might be selection from answers on a multiple choice question, when only one is correct. Let $A$ be the event he makes a correct selection, and $B$ be the event he knows which is correct before making the selection. We suppose $P(B) = p$ and $P(A|B^c) = 1/n$. Determine $P(B|A)$; show that $P(B|A) \ge P(B)$ and $P(B|A)$ increases with $n$ for fixed $p$. Answer $P(A|B) = 1$, $P(A|B^c) = 1/n$, $P(B) = p$ $P(B|A) = \dfrac{P(A|B) P(B)}{P(A|B) P(B) +P(A|B^c) P(B^c)} = \dfrac{p}{p + \dfrac{1}{n} (1 - p)} = \dfrac{np}{(n - 1) p + 1}$ $\dfrac{P(B|A)}{P(B)} = \dfrac{n}{np + 1 - p}$ increases from 1 to $1/p$ as $n \to \infty$ Exercise $22$ Polya's urn scheme for a contagious disease. An urn contains initially $b$ black balls and $r$ red balls $(r + b = n)$. A ball is drawn on an equally likely basis from among those in the urn, then replaced along with $c$ additional balls of the same color. The process is repeated. There are $n$ balls on the first choice, $n + c$ balls on the second choice, etc. Let $B_k$ be the event of a black ball on the $k$th draw and $R_k$ be the event of a red ball on the $k$th draw. Determine a. $P(B_2|R_1)$ b. $P(B_1B_2)$ c. $P(R_2)$ d. $P(B_1|R_2)$ Answer a. $P(B_2|R_1) = \dfrac{b}{n + c}$ b. 
$P(B_1B_2) = P(B_1) P(B_2|B_1) = \dfrac{b}{n} \cdot \dfrac{b + c}{n + c}$

c. $P(R_2) = P(R_2|R_1) P(R_1) + P(R_2|B_1) P(B_1) = \dfrac{r + c}{n + c} \cdot \dfrac{r}{n} + \dfrac{r}{n + c} \cdot \dfrac{b}{n} = \dfrac{r(r + c + b)}{n(n + c)}$

d. $P(B_1|R_2) = \dfrac{P(R_2|B_1) P(B_1)}{P(R_2)}$ with $P(R_2|B_1) P(B_1) = \dfrac{r}{n + c} \cdot \dfrac{b}{n}$. Using (c), we have

$P(B_1|R_2) = \dfrac{b}{r + b + c} = \dfrac{b}{n + c}$
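The Polya-urn result in part (d) is easy to check by simulation. The sketch below uses illustrative values for $b$, $r$, and $c$ (none are specified in the exercise) and estimates $P(B_1|R_2)$ by relative frequency; it should agree with $b/(n + c)$ to within sampling error.

```
% Monte Carlo check of P(B_1|R_2) = b/(n+c) for the Polya urn scheme.
% The values of b, r, c below are illustrative choices only.
b = 3; r = 4; c = 2; n = b + r;
N = 100000;
B1 = false(N,1); R2 = false(N,1);
for i = 1:N
    black1 = rand < b/n;                   % first draw
    if black1, b2 = b + c; r2 = r;         % replace, add c of the same color
    else,      b2 = b;     r2 = r + c;
    end
    B1(i) = black1;
    R2(i) = rand < r2/(b2 + r2);           % second draw
end
sum(B1 & R2)/sum(R2)      % estimate of P(B_1|R_2)
b/(n + c)                 % exact value from part (d)
```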
Historically, the notion of independence has played a prominent role in probability. If events form an independent class, much less information is required to determine probabilities of Boolean combinations and calculations are correspondingly easier. In this unit, we give a precise formulation of the concept of independence in the probability sense. As in the case of all concepts which attempt to incorporate intuitive notions, the consequences must be evaluated for evidence that these ideas have been captured successfully. Independence as lack of conditioning There are many situations in which we have an “operational independence.” • Supose a deck of playing cards is shuffled and a card is selected at random then replaced with reshuffling. A second card picked on a repeated try should not be affected by the first choice. • If customers come into a well stocked shop at different times, each unaware of the choice made by the others, the the item purchased by one should not be affected by the choice made by the other. • If two students are taking exams in different courses, the grade one makes should not affect the grade made by the other. The list of examples could be extended indefinitely. In each case, we should expect to model the events as independent in some way. How should we incorporate the concept in our developing model of probability? We take our clue from the examples above. Pairs of events are considered. The “operational independence” described indicates that knowledge that one of the events has occured does not affect the likelihood that the other will occur. For a pair of events {$A$, $B$}, this is the condition $P(A|B) = P(A)$ Occurrence of the event $A$ is not “conditioned by” occurrence of the event $B$. Our basic interpretation is that $P(A)$ indicates of the likelihood of the occurrence of event $A$. The development of conditional probability in the module Conditional Probability, leads to the interpretation of $P(A|B)$ as the likelihood that $A$ will occur on a trial, given knowledge that $B$ as occurred. If such knowledge of the occurrence of $B$ does not affect the likelihood of the occurrence of $A$, we should be inclined to think of the events $A$ and $B$ as being independent in a probability sense. Independent pairs We take our clue from the condition $P(A|B) = P(A)$. Property (CP4) for conditional probability (in the case of equality) yields sixteen equivalent conditions as follows. $P(A|B) = P(A)$ $P(B|A) = P(B)$ $P(AB) = P(A) P(B)$ $P(A|B^c) = P(A)$ $P(B^c|A) = P(B^c)$ $P(AB^c) = P(A) P(B^c)$ $P(A^c|B) = P(A^c)$ $P(B|A^c) = P(B)$ $P(A^c B) = P(A^c)P(B)$ $P(A^c|B^c) = P(A^c)$ $P(B^c|A^c) = P(B^c)$ $P(A^cB^c) = P(A^c) P(B^c)$ $P(A|B) = P(A|B^c)$ $P(A^c|B) = P(A^c|B^c)$ $P(B|A) = P(B|A^c)$ $P(B^c|A) = P(B^c|A^c)$ These conditions are equivalent in the sense that if any one holds, then all hold. We may chose any one of these as the defining condition and consider the others as equivalents for the defining condition. Because of its simplicity and symmetry with respect to the two events, we adopt the product rule in the upper right hand corner of the table. Definition. The pair {$A$, $B$} of events is said to be (stochastically) independent iff the following product rule holds: $P(AB) = P(A) P(B)$ Remark. Although the product rule is adopted as the basis for definition, in many applications the assumptions leading to independence may be formulated more naturally in terms of one or another of the equivalent expressions. 
We are free to do this, for the effect of assuming any one condition is to assume them all. The equivalences in the right-hand column of the upper portion of the table may be expressed as a replacement rule, which we augment and extend below:

If the pair {$A$, $B$} is independent, so is any pair obtained by taking the complement of either or both of the events.

We note two relevant facts:

• Suppose event $N$ has probability zero (is a null event). Then for any event $A$, we have $0 \le P(AN) \le P(N) = 0 = P(A)P(N)$, so that the product rule holds. Thus {$N$, $A$} is an independent pair for any event $A$.
• If event $S$ has probability one (is an almost sure event), then its complement $S^c$ is a null event. By the replacement rule and the fact just established, {$S^c$, $A$} is independent, so {$S$, $A$} is independent.

The replacement rule may thus be extended to:

Replacement Rule

If the pair {$A$, $B$} is independent, so is any pair obtained by replacing either or both of the events by their complements or by a null event or by an almost sure event.

CAUTION

1. Unless at least one of the events has probability one or zero, a pair cannot be both independent and mutually exclusive. Intuitively, if the pair is mutually exclusive, then the occurrence of one requires that the other does not occur. Formally: Suppose $0 < P(A) < 1$ and $0 < P(B) < 1$. {$A$, $B$} mutually exclusive implies $P(AB) = P(\emptyset) = 0 \ne P(A) P(B)$. {$A$, $B$} independent implies $P(AB) = P(A) P(B) > 0 = P(\emptyset)$
2. Independence is not a property of events. Two non mutually exclusive events may be independent under one probability measure, but may not be independent for another. This can be seen by considering various probability distributions on a Venn diagram or minterm map.

Independent classes

Extension of the concept of independence to an arbitrary class of events utilizes the product rule.

Definition. A class of events is said to be (stochastically) independent iff the product rule holds for every finite subclass of two or more events in the class.

A class {$A$, $B$, $C$} is independent iff all four of the following product rules hold

$P(AB) = P(A) P(B)$ $P(AC) = P(A) P(C)$ $P(BC) = P(B) P(C)$ $P(ABC) = P(A) P(B) P(C)$

If any one or more of these product expressions fail, the class is not independent. A similar situation holds for a class of four events: the product rule must hold for every pair, for every triple, and for the whole class. Note that we say “not independent” or “nonindependent” rather than dependent. The reason for this becomes clearer in dealing with independent random variables. We consider some classical examples of nonindependent classes.

SOME NONINDEPENDENT CLASSES

1. Suppose {$A_1$, $A_2$, $A_3$, $A_4$} is a partition, with each $P(A_i) = 1/4$. Let $A = A_1 \bigvee A_2$, $B = A_1 \bigvee A_3$, $C = A_1 \bigvee A_4$. Then the class {$A$, $B$, $C$} has $P(A) = P(B) = P(C) = 1/2$ and is pairwise independent, but not independent, since $P(AB) = P(A_1) = 1/4 = P(A) P(B)$ and similarly for the other pairs, but $P(ABC) = P(A_1) = 1/4 \ne P(A)P(B)P(C)$
2. Consider the class {$A$, $B$, $C$, $D$} with $AD = BD = \emptyset$, $C = AB \bigvee D$, $P(A) = P(B) = 1/4$, $P(AB) = 1/64$, and $P(D) = 15/64$. Use of a minterm map shows these assignments are consistent. Elementary calculations show the product rule applies to the class {$A$, $B$, $C$} but no two of these three events form an independent pair.

As noted above, the replacement rule holds for any pair of events.
It is easy to show, although somewhat cumbersome to write out, that if the rule holds for any finite number $k$ of events in an independent class, it holds for any $k + 1$ of them. By the principle of mathematical induction, the rule must hold for any finite subclass. We may extend the replacement rule as follows. General Replacement Rule If a class is independent, we may replace any of the sets by its complement, by a null event, or by an almost sure event, and the resulting class is also independent. Such replacements may be made for any number of the sets in the class. One immediate and important consequence is the following. Minterm Probabilities If {$A_i: 1 \le i \le n$} is an independent class and the the class {$P(A_i):1 \le i \le n$} of individual probabilities is known, then the probability of every minterm may be calculated. Minterm probabilities for an independent class Suppose the class {$A$, $B$, $C$} is independent with respective probabilities $P(A) = 0.3$, $P(B) = 0.6$, and $P(C) = 0.5$. Then {$A^c$, $B^c$, $C^c$} is independent and $P(M_0) = P(A^c)P(B^c)P(C^c) = 0.14$ {$A^c$, $B^c$, $C$} is independent and $P(M_1) = P(A^c)P(B^c)P(C) = 0.14$ Similarly, the probabilities of the other six minterms, in order, are 0.21, 0.21, 0.06, 0.06, 0.09, and 0.09. With these minterm probabilities, the probability of any Boolean combination of $A$, $B$, and $C$ may be calculated In general, eight appropriate probabilities must be specified to determine the minterm probabilities for a class of three events. In the independent case, three appropriate probabilities are sufficient. Three probabilities yield the minterm probabilities Suppose {$A$, $B$, $C$} is independent with $P(A \cup BC) = 0.51$, $P(AC^c) = 0.15$, and $P(A) = 0.30$. Then $P(C^c) = 0.15/0.3 = 0.5 = P(C)$ and $P(A) + P(A^c) P(B) P(C) = 0.51$ so that $P(B) = \dfrac{0.51 - 0.30}{0.7 \times 0.5} = 0.6$ With each of the basic probabilities determined, we may calculate the minterm probabilities, hence the probability of any Boolean combination of the events. MATLAB and the product rule Frequently we have a large enough independent class {$E_1$, $E_2$, $\cdot \cdot\ cdot$, $E_n$} that it is desirable to use MATLAB (or some other computational aid) to calculate the probabilities of various “and” combinations (intersections) of the events or their complements. Suppose the independent class {$E_1$, $E_2$, $\cdot \cdot\ cdot$, $E_{10}$} has respective probabilities 0.13 0.37 0.12 0.56 0.33 0.71 0.22 0.43 0.57 0.31 It is desired to calculate (a) $P(E_1 E_2 E_3^c E_4 E_5^c E_6^c E_7)$, and (b) $P(E_1^c E_2 E_3^c E_4 E_5^c E_6^c E_7 E_8 E_9^c E_{10})$. We may use the MATLAB function prod and the scheme for indexing a matrix. >> p = 0.01*[13 37 12 56 33 71 22 43 57 31]; >> q = 1-p; >> % First case >> e = [1 2 4 7]; % Uncomplemented positions >> f = [3 5 6]; % Complemented positions >> P = prod(p(e))*prod(q(f)) % p(e) probs of uncomplemented factors P = 0.0010 % q(f) probs of complemented factors >> % Case of uncomplemented in even positions; complemented in odd positions >> g = find(rem(1:10,2) == 0); % The even positions >> h = find(rem(1:10,2) ~= 0); % The odd positions >> P = prod(p(g))*prod(q(h)) P = 0.0034 In the unit on MATLAB and Independent Classes, we extend the use of MATLAB in the calculations for such classes.
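Before turning to the m-functions of the next unit, it may help to see how the minterm probabilities for an independent class can be computed directly with basic MATLAB operations. The following sketch is not the book's minprob function; it simply applies the product rule minterm by minterm, and it reproduces the values of the example above with $P(A) = 0.3$, $P(B) = 0.6$, $P(C) = 0.5$.

>> P = [0.3 0.6 0.5];            % P(A), P(B), P(C) for the independent class
>> n = length(P);
>> pm = zeros(1,2^n);            % one slot per minterm
>> for k = 0:2^n - 1
     b = dec2bin(k,n) - '0';     % 0/1 pattern; 1 means the event is uncomplemented
     pm(k+1) = prod(P.^b .* (1-P).^(1-b));   % product rule for this minterm
   end
>> disp(pm)
    0.1400    0.1400    0.2100    0.2100    0.0600    0.0600    0.0900    0.0900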
MATLAB and Independent Classes In the unit on Minterms, we show how to use minterm probabilities and minterm vectors to calculate probabilities of Boolean combinations of events. In Independence of Events we show that in the independent case, we may calculate all minterm probabilities from the probabilities of the basic events. While these calculations are straightforward, they may be tedious and subject to errors. Fortunately, in this case we have an m-function minprob which calculates all minterm probabilities from the probabilities of the basic or generating sets. This function uses the m-function mintable to set up the patterns of $p$'s and $q$'s for the various minterms and then takes the products to obtain the set of minterm probabilities. Example $1$ >> pm = minprob(0.1*[4 7 6]) pm = 0.0720 0.1080 0.1680 0.2520 0.0480 0.0720 0.1120 0.1680 It may be desirable to arrange these as on a minterm map. For this we have an m-function minmap which reshapes the row matrix $pm$, as follows: >> t = minmap(pm) t = 0.0720 0.1680 0.0480 0.1120 0.1080 0.2520 0.0720 0.1680 Probability of occurrence of k of n independent events In Example 2, we show how to use the m-functions mintable and csort to obtain the probability of the occurrence of $k$ of $n$ events, when minterm probabilities are available. In the case of an independent class, the minterm probabilities are calculated easily by minprob, It is only necessary to specify the probabilities for the $n$ basic events and the numbers $k$ of events. The size of the class, hence the mintable, is determined, and the minterm probabilities are calculated by minprob. We have two useful m-functions. If $P$ is a matrix of the $n$ individual event probabilities, and $k$ is a matrix of integers less than or equal to $n$, then function $y = \text{ikn}(P, k)$ calculates individual probabilities that $k$ of $n$ occur function $y = \text{ckn}(P, k)$ calculates the probabilities that $k$ or more occur Example $2$ >> p = 0.01*[13 37 12 56 33 71 22 43 57 31]; >> k = [2 5 7]; >> P = ikn(p,k) P = 0.1401 0.1845 0.0225 % individual probabilities >> Pc = ckn(p,k) Pc = 0.9516 0.2921 0.0266 % cumulative probabilities Reliability of systems with independent components Suppose a system has $n$ components which fail independently. Let $E_i$ be the event the $i$th component survives the designated time period. Then $R_i = P(E_i)$ is defined to be the reliability of that component. The reliability $R$ of the complete system is a function of the component reliabilities. There are three basic configurations. General systems may be decomposed into subsystems of these types. The subsystems become components in the larger configuration. The three fundamental configurations are: Series. The system operates iff all n components operate: $R = \prod_{i = 1}^n R_i$ Parallel. The system operates iff not all components fail: $R = 1 - \prod_{i = 1}^{n} (1 - R_i)$ k of n. The system operates iff $k$ or more components operate. $R$ may be calculated with the m-function ckn. If the component probabilities are all the same, it is more efficient to use the m-function cbinom (see Bernoulli trials and the binomial distribution, below). MATLAB solution. Put the component reliabilities in matrix $RC = [R_1\ R_2\ \cdot\cdot\cdot \ R_n]$ Series Configuration >> R = prod(RC) % prod is a built in MATLAB function Parallel Configuration >> R = parallel(RC) % parallel is a user defined function k of n Configuration >> R = ckn(RC,k) % ckn is a user defined function (in file ckn.m). 
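The m-functions parallel and ckn are supplied with the book's software. For the parallel case, the formula above translates directly into one line of basic MATLAB; the following is only a sketch of how such a function might be written (not necessarily the book's own parallel.m), together with a quick two-component check.

% parallel.m -- reliability of a parallel combination (a minimal sketch)
function R = parallel(RC)
  % RC is a row vector of component reliabilities
  R = 1 - prod(1 - RC);    % the system fails only if every component fails
end

>> RC = [0.90 0.92];
>> R = parallel(RC)        % 1 - 0.10*0.08
R = 0.9920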
Example $3$

There are eight components, numbered 1 through 8. Component 1 is in series with a parallel combination of components 2 and 3, followed by a 3 of 5 combination of components 4 through 8 (see Figure 4.2.1 for a schematic representation). Probabilities of the components, in order, are

0.95 0.90 0.92 0.80 0.83 0.91 0.85 0.85

The second and third probabilities are for the parallel pair, and the last five probabilities are for the 3 of 5 combination.

>> RC = 0.01*[95 90 92 80 83 91 85 85];          % Component reliabilities
>> Ra = RC(1)*parallel(RC(2:3))*ckn(RC(4:8),3)   % Solution
Ra = 0.9172

Figure 4.2.1. Schematic representation of the system in Example 3

Example $4$

>> RC = 0.01*[95 90 92 80 83 91 85 85];                 % Component reliabilities 1--8
>> Rb = prod(RC(1:2))*parallel([RC(3),ckn(RC(4:8),3)])  % Solution
Rb = 0.8532

Figure 4.2.2. Schematic representation of the system in Example 4

A test for independence

It is difficult to look at a list of minterm probabilities and determine whether or not the generating events form an independent class. The m-function imintest has as argument a vector of minterm probabilities. It checks for feasible size, determines the number of variables, and performs a check for independence.

Example $5$

>> pm = 0.01*[15 5 2 18 25 5 18 12];   % An arbitrary class
>> disp(imintest(pm))
The class is NOT independent
Minterms for which the product rule fails
1 1 1 0
1 1 1 0

Example $6$

>> pm = [0.10 0.15 0.20 0.25 0.30];    % An improper number of probabilities
>> disp(imintest(pm))
The number of minterm probabilities incorrect

Example $7$

>> pm = minprob([0.5 0.3 0.7]);
>> disp(imintest(pm))
The class is independent

Probabilities of Boolean combinations

As in the nonindependent case, we may utilize the minterm expansion and the minterm probabilities to calculate the probabilities of Boolean combinations of events. However, it is frequently more efficient to manipulate the expressions for the Boolean combination to be a disjoint union of intersections.

Example $8$ A simple Boolean combination

Suppose the class {$A$, $B$, $C$} is independent, with respective probabilities 0.4, 0.6, 0.8. Determine $P(A \cup BC)$. The minterm expansion is

$A \cup BC = M(3, 4, 5, 6, 7)$, so that $P(A \cup BC) = p(3, 4, 5, 6, 7)$

It is not difficult to use the product rule and the replacement theorem to calculate the needed minterm probabilities. Thus $p(3) = P(A^c) P(B) P(C) = 0.6 \cdot 0.6 \cdot 0.8 = 0.2880$. Similarly $p(4) = 0.0320$, $p(5) = 0.1280$, $p(6) = 0.0480$, $p(7) = 0.1920$. The desired probability is the sum of these, 0.6880. As an alternate approach, we write

$A \cup BC = A \bigvee A^c BC$, so that $P(A \cup BC) = 0.4 + 0.6 \cdot 0.6 \cdot 0.8 = 0.6880$

Considerably fewer arithmetic operations are required in this calculation. In larger problems, or in situations where probabilities of several Boolean combinations are to be determined, it may be desirable to calculate all minterm probabilities, then use the minterm vector techniques introduced earlier to calculate probabilities for various Boolean combinations. As a larger example for which computational aid is highly desirable, consider again the class and the probabilities utilized in Example 4.2.2, above.

Example $9$

Consider again the independent class {$E_1, E_2, \cdot\cdot\cdot, E_{10}$} with respective probabilities [0.13 0.37 0.12 0.56 0.33 0.71 0.22 0.43 0.57 0.31]. We wish to calculate

$P(F) = P(E_1 \cup E_3 (E_4 \cup E_7^c) \cup E_2 (E_5^c \cup E_6 E_8) \cup E_9 E_{10}^c)$

There are $2^{10} = 1024$ minterm probabilities to be calculated.
Each requires the multiplication of ten numbers. The solution with MATLAB is easy, as follows:

>> P = 0.01*[13 37 12 56 33 71 22 43 57 31];
>> minvec10
Vectors are A1 thru A10 and A1c thru A10c
They may be renamed, if desired.
>> F = (A1|(A3&(A4|A7c)))|(A2&(A5c|(A6&A8)))|(A9&A10c);
>> pm = minprob(P);
>> PF = F*pm'
PF = 0.6636

Writing out the expression for $F$ is tedious and error prone. We could simplify as follows:

>> A = A1|(A3&(A4|A7c));
>> B = A2&(A5c|(A6&A8));
>> C = A9&A10c;
>> F = A|B|C;    % This minterm vector is the same as for F above

This decomposition of the problem indicates that it may be solved as a series of smaller problems. First, we need some central facts about independence of Boolean combinations.

Independent Boolean combinations

Suppose we have a Boolean combination of the events in the class {$A_i: 1 \le i \le n$} and a second combination of the events in the class {$B_j: 1 \le j \le m$}. If the combined class {$A_i, B_j: 1 \le i \le n, 1 \le j \le m$} is independent, we would expect the combinations of the subclasses to be independent. It is important to see that this is in fact a consequence of the product rule, for it is further evidence that the product rule has captured the essence of the intuitive notion of independence. In the following discussion, we exhibit the essential structure which provides the basis for the following general proposition.

Proposition. Consider $n$ distinct subclasses of an independent class of events. If for each $i$ the event $A_i$ is a Boolean (logical) combination of members of the $i$th subclass, then the class {$A_1, A_2, \cdot\cdot\cdot, A_n$} is an independent class.

Verification of this far reaching result rests on the minterm expansion and two elementary facts about the disjoint subclasses of an independent class. We state these facts and consider in each case an example which exhibits the essential structure. Formulation of the general result, in each case, is simply a matter of careful use of notation.

A class each of whose members is a minterm formed by members of a distinct subclass of an independent class is itself an independent class.

Example $10$

Consider the independent class {$A_1, A_2, A_3, B_1, B_2, B_3, B_4$}, with respective probabilities 0.4, 0.7, 0.3, 0.5, 0.8, 0.3, 0.6. Consider $M_3$, minterm three for the class {$A_1, A_2, A_3$}, and $N_5$, minterm five for the class {$B_1$, $B_2$, $B_3$, $B_4$}. Then

$P(M_3) = P(A_1^c A_2 A_3) = 0.6 \cdot 0.7 \cdot 0.3 = 0.126$ and $P(N_5) = P(B_1^c B_2 B_3^c B_4) = 0.5 \cdot 0.8 \cdot 0.7 \cdot 0.6 = 0.168$

Also

$P(M_3 N_5) = P(A_1^c A_2 A_3 B_1^c B_2 B_3^c B_4) = 0.6 \cdot 0.7 \cdot 0.3 \cdot 0.5 \cdot 0.8 \cdot 0.7 \cdot 0.6 = (0.6 \cdot 0.7 \cdot 0.3) \cdot (0.5 \cdot 0.8 \cdot 0.7 \cdot 0.6) = P(M_3)P(N_5) = 0.0212$

The product rule shows the desired independence. Again, it should be apparent that the result holds for any number of $A_i$ and $B_j$; and it can be extended to any number of distinct subclasses of an independent class.

Suppose each member of a class can be expressed as a disjoint union. If each auxiliary class formed by taking one member from each of the disjoint unions is an independent class, then the original class is independent.

Example $11$

Suppose $A = A_1 \bigvee A_2 \bigvee A_3$ and $B = B_1 \bigvee B_2$, with each pair {$A_i$, $B_j$} independent. Suppose

$P(A_1) = 0.3$, $P(A_2) = 0.4$, $P(A_3) = 0.1$, $P(B_1) = 0.2$, $P(B_2) = 0.5$

We wish to show that the pair {$A$, $B$} is independent; i.e., the product rule $P(AB) = P(A)P(B)$ holds.
COMPUTATION $P(A) = P(A_1) + P(A_2) + P(A_3) = 0.3 + 0.4 + 0.1 = 0.8$ and $P(B) = P(B_1) + P(B_2) = 0.2 + 0.5 = 0.7$ Now $AB = (A_1 \bigvee A_2 \bigvee A_3) (B_1 \bigvee B_2) = A_1B_1 \bigvee A_1 B_2 \bigvee A_2 B_1 \bigvee A_2 B_2 \bigvee A_3 B_1 \bigvee A_3 B_2$ By additivity and pairwise independence, we have $P(AB) = P(A_1) P(B_1) + P(A_1) P(B_2) + P(A_2) P(B_1) + P(A_2)P(B_2) + P(A_3) P(B_1) + P(A_3) P(B_2)$ $= 0.3 \cdot 0.2 + 0.3 \cdot 0.5 + 0.4 \cdot 0.2 + 0.4 \cdot 0.5 + 0.1 \cdot 0.2 + 0.1 \cdot 0.5 = 0.56 = P(A) P(B)$ The product rule can also be established algebraically from the expression for $P(AB)$, as follows: $P(AB) = P(A_1)[P(B_1) + P(B_2)] + P(A_2) [P(B_1) + P(B_2)] + P(A_3) [P(B_1) + P(B_2)]$ $= [P(A_1) + P(A_2) + P(A_3)][P(B_1) + P(B_2)] = P(A) P(B)$ It should be clear that the pattern just illustrated can be extended to the general case. If $A = \bigvee_{i = 1}^{n} A_i$ and $B = \bigvee_{j = 1}^{m} B_j$, with each pair {$A_i, B_j$} independent then the pair {$A, B$} is independent. Also, we may extend this rule to the triple {$A, B, C$} $A = \bigvee_{i = 1}^{n} A_i$, $B = \bigvee_{j = 1}^{m} B_j$, and $C = \bigvee_{k = 1}^{r} C_k$, with each class {$A_i, B_j, C_k$} independent and similarly for any finite number of such combinations, so that the second proposition holds. Begin with an independent class of $n$ events. Select $m$ distinct subclasses and form Boolean combinations for each of these. Use of the minterm expansion for each of these Boolean combinations and the two propositions just illustrated shows that the class of Boolean combinations is independent To illustrate, we return to Example 4.2.9, which involves an independent class of ten events. Example $12$ A hybrid approach Consider again the independent class {$E_1, E_2, \cdot\cdot\cdot, E_{10}$} with respective probabilities {0.13 0.37 0.12 0.56 0.33 0.71 0.22 0.43 0.57 0.31}. We wish to calculate $P(F) = P(E_1 \cup E_3 (E_4 \cup E_7^c) \cup E_2 (E_5^c \cup E_6 E_8) \cup E_9 E_10^c)$ In the previous solution, we use minprob to calculate the $2^{10}=1024$ minterms for all ten of the $E_i$ and determine the minterm vector for $F$. As we note in the alternate expansion of $F$, $F= A \cup B \cup C$, when $A = E_1 \cup E_3 (E_4 \cup E_7^c)$ $B = E_2 (E_5^c \cup E_6 E_8)$ $C = E_9 E_{10}^c$ We may calculate directly $P(C) = 0.57 \cdot 0.69 = 0.3933$. Now $A$ is a Boolean combination of {$E_1, E_3, E_4, E_7$} and B is a combination of {$E_2, E_5, E_6 E_8$}. By the result on independence of Boolean combinations, the class {$A, B, C$} is independent. We use the m-procedures to calculate $P(A)$ and $P(B)$. Then we deal with the independent class {$A, B, C$} to obtain the probability of $F$. >> p = 0.01*[13 37 12 56 33 71 22 43 57 31]; >> pa = p([1 3 4 7]); % Selection of probabilities for A >> pb = p([2 5 6 8]); % Selection of probabilities for B >> pma = minprob(pa); % Minterm probabilities for calculating P(A) >> pmb = minprob(pb); % Minterm probabilities for calculating P(B) >> minvec4; >> a = A|(B&(C|Dc)); % A corresponds to E1, B to E3, C to E4, D to E7 >> PA = a*pma' PA = 0.2243 >> b = A&(Bc|(C&D)); % A corresponds to E2, B to E5, C to E6, D to E8 >> PB = b*pmb' PB = 0.2852 >> PC = p(9)*(1 - p(10)) PC = 0.3933 >> pm = minprob([PA PB PC]); >> minvec3 % The problem becomes a three variable problem >> F = A|B|C; % with {A,B,C} an independent class >> PF = F*pm' PF = 0.6636 % Agrees with the result of Example 4.2.7
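As a check on the hybrid calculation, note that once the class {$A, B, C$} is known to be independent, the final step needs no minterm expansion at all: $F = A \cup B \cup C$ fails to occur only if all three fail to occur, so the "parallel" form of the product rule gives $P(F)$ directly from the three subsystem probabilities computed above.

>> PA = 0.2243; PB = 0.2852; PC = 0.3933;   % values obtained in the calculation above
>> PF = 1 - (1 - PA)*(1 - PB)*(1 - PC)
PF = 0.6636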
Composite trials and component events Often a trial is a composite one. That is, the fundamental trial is completed by performing several steps. In some cases, the steps are carried out sequentially in time. In other situations, the order of performance plays no significant role. Some of the examples in the unit on Conditional Probability involve such multistep trials. We examine more systematically how to model composite trials in terms of events determined by the components of the trials. In the subsequent section, we illustrate this approach in the important special case of Bernoulli trials, in which each outcome results in a success or failure to achieve a specified condition. We call the individual steps in the composite trial component trials. For example, in the experiment of flipping a coin ten times, we refer the $i$th toss as the $i$th component trial. In many cases, the component trials will be performed sequentially in time. But we may have an experiment in which ten coins are flipped simultaneously. For purposes of analysis, we impose an ordering— usually by assigning indices. The question is how to model these repetitions. Should they be considered as ten trials of a single simple experiment? It turns out that this is not a useful formulation. We need to consider the composite trial as a single outcome— i.e., represented by a single point in the basic space $\omega$. Some authors give considerable attention the the nature of the basic space, describing it as a Cartesian product space, with each coordinate corresponding to one of the component outcomes. We find that unnecessary, and often confusing, in setting up the basic model. We simply suppose the basic space has enough elements to consider each possible outcome. For the experiment of flipping a coin ten times, there must be at least $2^{10} = 1024$ elements, one for each possible sequence of heads and tails. Of more importance is describing the various events associated with the experiment. We begin by identifying the appropriate component events. A component event is determined by propositions about the outcomes of the corresponding component trial. Example $1$ Component events • In the coin flipping experiment, consider the event $H_3$ that the third toss results in a head. Each outcome $\omega$ of the experiment may be represented by a sequence of $H$'s and $T$'s, representing heads and tails. The event $H_3$ consists of those outcomes represented by sequences with $H$ in the third position. Suppose $A$ is the event of a head on the third toss and a tail on the ninth toss. This consists of those outcomes corresponding to sequences with $H$ in the third position and $T$ in the ninth. Note that this event is the intersection $H_3 H_9^c$. • A somewhat more complex example is as follows. Suppose there are two boxes, each containing some red and some blue balls. The experiment consists of selecting at random a ball from the first box, placing it in the second box, then making a random selection from the modified contents of the second box. The composite trial is made up of two component selections. We may let $R_1$ be the event of selecting a red ball on the first component trial (from the first box), and $R_2$ be the event of selecting a red ball on the second component trial. Clearly $R_1$ and $R_2$ are component events. In the first example, it is reasonable to assume that the class {$H_i: 1 \le i \le 10$} is independent, and each component probability is usually taken to be 0.5. 
In the second case, the assignment of probabilities is somewhat more involved. For one thing, it is necessary to know the numbers of red and blue balls in each box before the composite trial begins. When these are known, the usual assumptions and the properties of conditional probability suffice to assign probabilities. This approach of utilizing component events is used tacitly in some of the examples in the unit on Conditional Probability.

When appropriate component events are determined, various Boolean combinations of these can be expressed as minterm expansions.

Example $2$

Four persons take one shot each at a target. Let $E_i$ be the event the $i$th shooter hits the target center. Let $A_3$ be the event exactly three hit the target. Then $A_3$ is the union of those minterms generated by the $E_i$ which have three places uncomplemented.

$A_3 = E_1 E_2 E_3 E_4^c \bigvee E_1 E_2 E_3^c E_4 \bigvee E_1 E_2^c E_3 E_4 \bigvee E_1^c E_2 E_3 E_4$

Usually we would be able to assume the $E_i$ form an independent class. If each $P(E_i)$ is known, then all minterm probabilities can be calculated easily. The following is a somewhat more complicated example of this type.

Example $3$

Ten race cars are involved in time trials to determine pole positions for an upcoming race. To qualify, they must post an average speed of 125 mph or more on a trial run. Let $E_i$ be the event the $i$th car makes qualifying speed. It seems reasonable to suppose the class {$E_i: 1 \le i \le 10$} is independent. If the respective probabilities for success are 0.90, 0.88, 0.93, 0.77, 0.85, 0.96, 0.72, 0.83, 0.91, 0.84, what is the probability that $k$ or more will qualify ($k = 6, 7, 8, 9, 10$)?

Solution

Let $A_k$ be the event exactly $k$ qualify. The class {$E_i: 1 \le i \le 10$} generates $2^{10} = 1024$ minterms. The event $A_k$ is the union of those minterms which have exactly $k$ places uncomplemented. The event $B_k$ that $k$ or more qualify is given by

$B_k = \bigvee_{r = k}^{n} A_r$

The task of computing and adding the minterm probabilities by hand would be tedious, to say the least. However, we may use the function ckn, introduced in the unit on MATLAB and Independent Classes and illustrated in Example 4.4.2, to determine the desired probabilities quickly and easily.

>> P = [0.90, 0.88, 0.93, 0.77, 0.85, 0.96, 0.72, 0.83, 0.91, 0.84];
>> k = 6:10;
>> PB = ckn(P,k)
PB = 0.9938 0.9628 0.8472 0.5756 0.2114

An alternate approach is considered in the treatment of random variables.

Bernoulli trials and the binomial distribution

Many composite trials may be described as a sequence of success-failure trials. For each component trial in the sequence, the outcome is one of two kinds. One we designate a success and the other a failure. Examples abound: heads or tails in a sequence of coin flips, favor or disapprove of a proposition in a survey sample, and items from a production line meet or fail to meet specifications in a sequence of quality control checks. To represent the situation, we let $E_i$ be the event of a success on the $i$th component trial in the sequence. The event of a failure on the $i$th component trial is thus $E_i^c$.

In many cases, we model the sequence as a Bernoulli sequence, in which the results on the successive component trials are independent and have the same probabilities. Thus, formally, a sequence of success-failure trials is Bernoulli iff

The class {$E_i: 1 \le i$} is independent.
The probability $P(E_i) = p$, invariant with $i$.
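The next subsection describes the book's m-procedures btdata and bt for simulating such sequences. As a quick illustration of the definition itself, a Bernoulli sequence can also be sketched in one line of basic MATLAB by comparing $n$ uniform random numbers with $p$; this is only an illustrative sketch with assumed parameter values, not the book's procedure.

>> n = 10; p = 0.37;          % parameters assumed for illustration
>> SEQ = (rand(1,n) <= p)     % 1 for success, 0 for failure on each component trial
>> relfreq = sum(SEQ)/n       % relative frequency of successes in the run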
Simulation of Bernoulli trials It is frequently desirable to simulate Bernoulli trials. By flipping coins, rolling a die with various numbers of sides (as used in certain games), or using spinners, it is relatively easy to carry this out physically. However, if the number of trials is large—say several hundred—the process may be time consuming. Also, there are limitations on the values of $p$, the probability of success. We have a convenient two-part m-procedure for simulating Bernoulli sequences. The first part, called btdata, sets the parameters. The second, called $bt$, uses the random number generator in MATLAB to produce a sequence of zeros and ones (for failures and successes). Repeated calls for bt produce new sequences. Example $4$ >> btdata Enter n, the number of trials 10 Enter p, the probability of success on each trial 0.37 Call for bt >> bt n = 10 p = 0.37 % n is kept small to save printout space Frequency = 0.4 To view the sequence, call for SEQ >> disp(SEQ) % optional call for the sequence 1 1 2 1 3 0 4 0 5 0 6 0 7 0 8 0 9 1 10 1 Repeated calls for bt yield new sequences with the same parameters. To illustrate the power of the program, it was used to take a run of 100,000 component trials, with probability $p$ of success 0.37, as above. Successive runs gave relative frequencies 0.37001 and 0.36999. Unless the random number generator is “seeded” to make the same starting point each time, successive runs will give different sequences and usually different relative frequencies. The binomial distribution A basic problem in Bernoulli sequences is to determine the probability of $k$ successes in $n$ component trials. We let $S_n$ be the number of successes in $n$ trials. This is a special case of a simple random variable, which we study in more detail in the chapter on "Random Variables and Probabilities". Let us characterize the events $A_{kn} = \{S_n = k\}$, $0 \le k \le n$. As noted above, the event $A_{kn}$ of exactly $k$ successes is the union of the minterms generated by {$E_i: 1 \le i$} in which there are $k$ successes (represented by $k$ uncomplemented $E_i$) and $n - k$ failures (represented by $n - k$ complemented $E_i^c$). Simple combinatorics show there are $C(n,k)$ ways to choose the $k$ places to be uncomplemented. Hence, among the $2^n$ minterms, there are $C(n, k) = \dfrac{n!}{k!(n - k)!}$ which have $k$ places uncomplemented. Each such minterm has probability $p^k (1 - p)^{n -k}$. Since the minterms are mutually exclusive, their probabilities add. We conclude that $P(S_n = k) = C(n, k) p^k (1 - p)^{n - k} = C(n, k) p^k q^{n - k}$ where $q = 1 - p$ for $0 \le k \le n$ These probabilities and the corresponding values form the distribution for $S_n$. This distribution is known as the binomial distribution, with parameters ($n, p$). We shorten this to binomial ($n, p$), and often writ $S_n$ ~ binomial ($n, p$). A related set of probabilities is $P(S_n \ge k) = P(B_{kn})$, $0 \le k \le n$. If the number $n$ of component trials is small, direct computation of the probabilities is easy with hand calculators. Example $5$ A reliability problem A remote device has five similar components which fail independently, with equal probabilities. The system remains operable if three or more of the components are operative. Suppose each unit remains active for one year with probability 0.8. What is the probability the system will remain operative for that long? 
Solution $P = C(5, 3) 0.8^3 \cdot 0.2^2 + C(5, 4) 0.8^4 \cdot 0.2 + C(5, 5) 0.8^5 = 10 \cdot 0.8^3 \cdot 0.2^2 + 5 \cdot 0.8^4 \cdot 0.2 + 0.8^5 = 0.9421$ Because Bernoulli sequences are used in so many practical situations as models for success-failure trials, the probabilities $P(S_n = k)$ and $P(S_n \ge k)$ have been calculated and tabulated for a variety of combinations of the parameters ($n, p$). Such tables are found in most mathematical handbooks. Tables of $P(S_n = k)$ are usually given a title such as binomial distribution, individual terms. Tables of $P(S_n \ge k)$ have a designation such as binomial distribution, cumulative terms. Note, however, some tables for cumulative terms give $P(S_n \le k)$. Care should be taken to note which convention is used. Example $6$ A reliability problem Consider again the system of Example 5, above. Suppose we attempt to enter a table of Cumulative Terms, Binomial Distribution at $n = 5$, $k = 3$, and $p = 0.8$. Most tables will not have probabilities greater than 0.5. In this case, we may work with failures. We just interchange the role of $E_i$ and $E_i^c$. Thus, the number of failures has the binomial ($n, p$) distribution. Now there are three or more successes iff there are not three or more failures. We go the the table of cumulative terms at $n = 5$, $k = 3$, and $p = 0.2$. The probability entry is 0.0579. The desired probability is 1 - 0.0579 = 0.9421. In general, there are $k$ or more successes in $n$ trials iff there are not $n - k + 1$ or more failures. m-functions for binomial probabilities Although tables are convenient for calculation, they impose serious limitations on the available parameter values, and when the values are found in a table, they must still be entered into the problem. Fortunately, we have convenient m-functions for these distributions. When MATLAB is available, it is much easier to generate the needed probabilities than to look them up in a table, and the numbers are entered directly into the MATLAB workspace. And we have great freedom in selection of parameter values. For example we may use $n$ of a thousand or more, while tables are usually limited to $n$ of 20, or at most 30. The two m-functions for calculating $P(A_{kn}$ and $P(B_{kn}$ are $P(A_{kn})$ is calculated by y = ibinom(n,p,k), where $k$ is a row or column vector of integers between 0 and $n$. The result $y$ is a row vector of the same size as $k$. $P(B_{kn})$ is calculated by y = cbinom(n,p,k), where $k$ is a row or column vector of integers between 0 and $n$. The result $y$ is a row vector of the same size as $k$. Example $7$ Use of m-functions ibinom and cbinom If $n = 10$ and $p = 0.39$, determine $P(A_{kn})$ and $P(B_{kn})$ for $k = 3, 5, 6, 8$. >> p = 0.39; >> k = [3 5 6 8]; >> Pi = ibinom(10,p,k) % individual probabilities Pi = 0.2237 0.1920 0.1023 0.0090 >> Pc = cbinom(10,p,k) % cumulative probabilities Pc = 0.8160 0.3420 0.1500 0.0103 Note that we have used probability $p = 0.39$. It is quite unlikely that a table will have this probability. Although we use only $n = 10$, frequently it is desirable to use values of several hundred. The m-functions work well for $n$ up to 1000 (and even higher for small values of p or for values very near to one). Hence, there is great freedom from the limitations of tables. If a table with a specific range of values is desired, an m-procedure called binomial produces such a table. The use of large $n$ raises the question of cumulation of errors in sums or products. 
The level of precision in MATLAB calculations is sufficient that such roundoff errors are well below pratical concerns. Example $8$ >> binomial % call for procedure Enter n, the number of trials 13 Enter p, the probability of success 0.413 Enter row vector k of success numbers 0:4 n p 13.0000 0.4130 k P(X=k) P(X>=k) 0 0.0010 1.0000 1.0000 0.0090 0.9990 2.0000 0.0379 0.9900 3.0000 0.0979 0.9521 4.0000 0.1721 0.8542 Remark. While the m-procedure binomial is useful for constructing a table, it is usually not as convenient for problems as the m-functions ibinom or cbinom. The latter calculate the desired values and put them directly into the MATLAB workspace. Joint Bernoulli trials Bernoulli trials may be used to model a variety of practical problems. One such is to compare the results of two sequences of Bernoulli trials carried out independently. The following simple example illustrates the use of MATLAB for this. Example $9$ A joint Bernoulli trial Bill and Mary take ten basketball free throws each. We assume the two seqences of trials are independent of each other, and each is a Bernoulli sequence. Mary: Has probability 0.80 of success on each trial. Bill: Has probability 0.85 of success on each trial. What is the probability Mary makes more free throws than Bill? Solution We have two Bernoulli sequences, operating independently. Mary: $n = 10$, $p = 0.80$ Bill: $n = 10$, $p = 0.85$ Let $M$ be the event Mary wins $M_k$ be the event Mary makes $k$ or more freethrows. $B_j$ be the event Bill makes exactly $j$ reethrows Then Mary wins if Bill makes none and Mary makes one or more, or Bill makes one and Mary makes two or more, etc. Thus $M = B_0 M_1 \bigvee B_1 M_2 \bigvee \cdot \cdot \cdot \bigvee B_9 M_{10}$ and $P(M) = P(B_0) P(M_1) + P(B_1) P(M_2) + \cdot \cdot \cdot + P(B_9) P(M_{10})$ We use cbinom to calculate the cumulative probabilities for Mary and ibinom to obtain the individual probabilities for Bill. >> pm = cbinom(10,0.8,1:10); % cumulative probabilities for Mary >> pb = ibinom(10,0.85,0:9); % individual probabilities for Bill >> D = [pm; pb]' % display: pm in the first column D = % pb in the second column 1.0000 0.0000 1.0000 0.0000 0.9999 0.0000 0.9991 0.0001 0.9936 0.0012 0.9672 0.0085 0.8791 0.0401 0.6778 0.1298 0.3758 0.2759 0.1074 0.3474 To find the probability $P(M)$ that Mary wins, we need to multiply each of these pairs together, then sum. This is just the dot or scalar product, which MATLAB calculates with the command $pm * pb'$. We may combine the generation of the probabilities and the multiplication in one command: >> P = cbinom(10,0.8,1:10)*ibinom(10,0.85,0:9)' P = 0.273 The ease and simplicity of calculation with MATLAB make it feasible to consider the effect of different values of n. Is there an optimum number of throws for Mary? Why should there be an optimum? An alternate treatment of this problem in the unit on Independent Random Variables utilizes techniques for independent simple random variables. Alternate MATLAB implementations Alternate implementations of the functions for probability calculations are found in the Statistical Package available as a supplementary package. We have utilized our formulation, so that only the basic MATLAB package is needed.
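For readers working without the book's m-functions, the individual and cumulative binomial probabilities used throughout this section can also be computed with basic MATLAB alone. The following sketch (not the code of ibinom or cbinom) reproduces the values of Example 7, above, where $n = 10$ and $p = 0.39$.

>> n = 10; p = 0.39; q = 1 - p;
>> Pi = zeros(1,n+1);
>> for j = 0:n, Pi(j+1) = nchoosek(n,j)*p^j*q^(n-j); end   % P(S_n = j), j = 0,...,n
>> Pi([3 5 6 8]+1)                    % individual terms, cf. ibinom(10,0.39,[3 5 6 8])
ans = 0.2237 0.1920 0.1023 0.0090
>> Pc = fliplr(cumsum(fliplr(Pi)));   % P(S_n >= j)
>> Pc([3 5 6 8]+1)                    % cumulative terms, cf. cbinom(10,0.39,[3 5 6 8])
ans = 0.8160 0.3420 0.1500 0.0103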
Exercise $1$ The minterms generated by the class $\{A, B, C\}$ have minterm probabilities $pm = [0.15\ 0.05\ 0.02\ 0.18\ 0.25\ 0.05\ 0.18\ 0.12]$ Show that the product rule holds for all three, but the class is not independent. Answer pm = [0.15 0.05 0.02 0.18 0.25 0.05 0.18 0.12]; y = imintest(pm) The class is NOT independent Minterms for which the product rule fails y = 1 1 1 0 1 1 1 0 % The product rule hold for M7 = ABC Exercise $2$ The class {$A, B, C, D$}is independent, with respective probabilities 0.65, 0.37, 0.48, 0.63. Use the m-function minprob to obtain the minterm probabilities. Use the m-function minmap to put them in a 4 by 4 table corresponding to the minterm map convention we use. Answer P = [0.65 0.37 0.48 0.63]; p = minmap(minprob(P)) p = 0.0424 0.0249 0.0788 0.0463 0.0722 0.0424 0.1342 0.0788 0.0392 0.0230 0.0727 0.0427 0.0667 0.0392 0.1238 0.0727 Exercise $3$ The minterm probabilities for the software survey in Example 2 from "Minterms" are $pm = [0\ 0.05\ 0.10\ 0.05\ 0.20\ 0.10\ 0.40\ 0.10]$ Show whether or not the class {$A, B, C$} is independent: (1) by hand calculation, and (2) by use of the m-function imintest. Answer pm = [0 0.05 0.10 0.05 0.20 0.10 0.40 0.10]; y = imintest(pm) The class is NOT independent Minterms for which the product rule fails y = 1 1 1 1 % By hand check product rule for any minterm 1 1 1 1 Exercise $4$ The minterm probabilities for the computer survey in Example 3 from "Minterms" are $pm = [0.032\ 0.016\ 0.376\ 0.011\ 0.364\ 0.073\ 0.077\ 0.051]$ Show whether or not the class {$A, B, C$} is independent: (1) by hand calculation, and (2) by use of the m-function imintest. Answer npr04_04 Minterm probabilities for Exercise 4.4.4. are in pm y = imintest(pm) The class is NOT independent Minterms for which the product rule fails y = 1 1 1 1 1 1 1 1 Exercise $5$ Minterm probabilities $p(0)$ through $p(15)$ for the class {$A, B, C, D$} are, in order, $pm = [0.084\ 0.196\ 0.036\ 0.084\ 0.085\ 0.196\ 0.035\ 0.084\ 0.021\ 0.049\ 0.009\ 0.021\ 0.020\ 0.049\ 0.010\ 0.021]$ Use the m-function imintest to show whether or not the class {$A, B, C, D$} is independent. Answer npr04_05 Minterm probabilities for Exercise 4.4.5. are in pm imintest(pm) The class is NOT independent Minterms for which the product rule fails ans = 0 1 0 1 0 0 0 0 0 1 0 1 0 0 0 0 Exercise $6$ Minterm probabilities $p(0)$ through $p(15)$ for the opinion survey in Example 4 from "Minterms" are $pm = [0.085\ 0.195\ 0.035\ 0.085\ 0.080\ 0.200\ 0.035\ 0.085\ 0.020\ 0.050\ 0.010\ 0.020\ 0.020\ 0.050\ 0.015\ 0.015]$ show whether or not the class {$A, B, C, D$} is independent. Answer npr04_06 Minterm probabilities for Exercise 4.4.6. are in pm y = imintest(pm) The class is NOT independent Minterms for which the product rule fails y = 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 Exercise $7$ The class {$A, B, C$} is independent, with $P(A) = 0.30$, $P(B^c C) = 0.32$, and $P(AC) = 0.12$. Determine the minterm probabilities. Answer $P(C) = P(AC)/P(A) = 0.40$ AND $P(B) = 1 - P(B^c C)/P(C) = 0.20$. pm = minprob([0.3 0.2 0.4]) pm = 0.3360 0.2240 0.0840 0.0560 0.1440 0.0960 0.0360 0.0240 Exercise $8$ The class {$A, B, C$} is independent, with $P(A \cup B) = 0.6$, $P(A \cup C) = 0.7$, and $P(C) = 0.4$. Determine the probability of each minterm. Answer $P(A^c C^c) = P(A^c) P(C^c) = 0.3$ implies $P(A^c) =0.3/0.6 = 0.5 = P(A)$. 
$P(A^c B^c) = P(A^c) P(B^c) = 0.4$ implies $P(B^c) = 0.4/0.5 = 0.8$ implies $P(B) = 0.2$ P = [0.5 0.2 0.4]; pm = minprob(P) pm = 0.2400 0.1600 0.0600 0.0400 0.2400 0.1600 0.0600 0.0400 Exercise $9$ A pair of dice is rolled five times. What is the probability the first two results are “sevens” and the others are not? Answer $P = (1/6)^2 (5/6)^3 = 0.0161.$ Exercise $10$ David, Mary, Joan, Hal, Sharon, and Wayne take an exam in their probability course. Their probabilities of making 90 percent or more are 0.72 0.83 0.75 0.92 0.65 0.79 respectively. Assume these are independent events. What is the probability three or more, four or more, five or more make grades of at least 90 percent? Answer P = 0.01*[72 83 75 92 65 79]; y = ckn(P,[3 4 5]) y = 0.9780 0.8756 0.5967 Exercise $11$ Two independent random numbers between 0 and 1 are selected (say by a random number generator on a calculator). What is the probability the first is no greater than 0.33 and the other is at least 57? Answer $P = 0.33 \cdot (1 - 0.57) = 0.1419$ Exercise $12$ Helen is wondering how to plan for the weekend. She will get a letter from home (with money) with probability 0.05. There is a probability of 0.85 that she will get a call from Jim at SMU in Dallas. There is also a probability of 0.5 that William will ask for a date. What is the probability she will get money and Jim will not call or that both Jim will call and William will ask for a date? Answer $A$ ~ letter with money, $B$ ~ call from Jim, $C$ ~ William ask for date P = 0.01*[5 85 50]; minvec3 Variables are A, B, C, Ac, Bc, Cc They may be renamed, if desired. pm = minprob(P); p = ((A&Bc)|(B&C))*pm' p = 0.4325 Exercise $13$ A basketball player takes ten free throws in a contest. On her first shot she is nervous and has probability 0.3 of making the shot. She begins to settle down and probabilities on the next seven shots are 0.5, 0.6 0.7 0.8 0.8, 0.8 and 0.85, respectively. Then she realizes her opponent is doing well, and becomes tense as she takes the last two shots, with probabilities reduced to 0.75, 0.65. Assuming independence between the shots, what is the probability she will make $k$ or more for $k = 2,3, \cdot \cdot \cdot 10$? Answer P = 0.01*[30 50 60 70 80 80 80 85 75 65]; k = 2:10; p = ckn(P,k) p = Columns 1 through 7 0.9999 0.9984 0.9882 0.9441 0.8192 0.5859 0.3043 Columns 8 through 9 0.0966 0.0134 Exercise $14$ In a group there are $M$ men and $W$ women; m of the men and $w$ of the women are college graduates. An individual is picked at random. Let $A$ be the event the individual is a woman and $B$ be the event he or she is a college graduate. Under what condition is the pair {$A, B$} independent? Answer $P(A|B) = w/(m + w) = W/(W + M) = P(A)$ Exercise $15$ Consider the pair {$A, B$} of events. Let $P(A) = p$, $P(A^c) = q = 1 - p$, $P(B|A) = p_1$, and $P(B|A^c) = p_2$. Under what condition is the pair {$A, B$} independent? Answer $p_1 = P(B|A) = P(B|A^c) = p_2$ (see table of equivalent conditions). Exercise $16$ Show that if event $A$ is independent of itself, then $P(A) = 0$ or $P(A) = 1$. (This fact is key to an important "zero-one law".) Answer $P(A) = P(A \cap A) = P(A) P(A)$. $x^2 = x$ iff $x = 0$ or $x = 1$. Exercise $17$ Does {$A, B$} independent and {$B, C$} independent imply {$A, C$} is independent? Justify your answer. Answer % No. Consider for example the following minterm probabilities: pm = [0.2 0.05 0.125 0.125 0.05 0.2 0.125 0.125]; minvec3 Variables are A, B, C, Ac, Bc, Cc They may be renamed, if desired. 
PA = A*pm' PA = 0.5000 PB = B*pm' PB = 0.5000 PC = C*pm' PC = 0.5000 PAB = (A&B)*pm' % Product rule holds PAB = 0.2500 PBC = (B&C)*pm' % Product rule holds PBC = 0.2500 PAC = (A&C)*pm' % Product rule fails PAC = 0.3250 Exercise $18$ Suppose event $A$ implies $B$ (i.e. $A \subset B$). Show that if the pair {$A, B$} is independent, then either $P(A) = 0$ or $P(B) = 1$. Answer $A \subset B$ implies $P(AB) = P(A)$; independence implies $P(AB) = P(A) P(B)$. $P(A) = P(A) P(B)$ only if $P(B) = 1$ or $P(A) = 0$. Exercise $19$ A company has three task forces trying to meet a deadline for a new device. The groups work independently, with respective probabilities 0.8, 0.9, 0.75 of completing on time. What is the probability at least one group completes on time? (Think. Then solve “by hand.”) Answer At least one completes iff not all fail. $P = 1 - 0.2 \cdot 0.1 \cdot 0.25 = 0.9950$ Exercise $20$ Two salesmen work differently. Roland spends more time with his customers than does Betty, hence tends to see fewer customers. On a given day Roland sees five customers and Betty sees six. The customers make decisions independently. If the probabilities for success on Roland's customers are 0.7, 0.8, 0.8, 0.6, 0.7 and for Betty's customers are 0.6, 0.5, 0.4, 0.6, 0.6, 0.4, what is the probability Roland makes more sales than Betty? What is the probability that Roland will make three or more sales? What is the probability that Betty will make three or more sales? Answer PR = 0.1*[7 8 8 6 7]; PB = 0.1*[6 5 4 6 6 4]; PR3 = ckn(PR,3) PR3 = 0.8662 PB3 = ckn(PB,3) PB3 = 0.6906 PRgB = ikn(PB,0:4)*ckn(PR,1:5)' PRgB = 0.5065 Exercise $21$ Two teams of students take a probability exam. The entire group performs individually and independently. Team 1 has five members and Team 2 has six members. They have the following indivudal probabilities of making an ”A” on the exam. Team 1: 0.83 0.87 0.92 0.77 0.86 Team 2: 0.68 0.91 0.74 0.68 0.73 0.83 1. What is the probability team 1 will make at least as many A's as team 2? 2. What is the probability team 1 will make more A's than team 2? Answer P1 = 0.01*[83 87 92 77 86]; P2 = 0.01*[68 91 74 68 73 83]; P1geq = ikn(P2,0:5)*ckn(P1,0:5)' P1geq = 0.5527 P1g = ikn(P2,0:4)*ckn(P1,1:5)' P1g = 0.2561 Exercise $22$ A system has five components which fail independently. Their respective reliabilities are 0.93, 0.91, 0.78, 0.88, 0.92. Units 1 and 2 operate as a “series” combination. Units 3, 4, 5 operate as a two of three subsytem. The two subsystems operate as a parallel combination to make the complete system. What is reliability of the complete system? Answer R = 0.01*[93 91 78 88 92]; Ra = prod(R(1:2)) Ra = 0.8463 Rb = ckn(R(3:5),2) Rb = 0.9506 Rs = parallel([Ra Rb]) Rs = 0.9924 Exercise $23$ A system has eight components with respective probabilities 0.96 0.90 0.93 0.82 0.85 0.97 0.88 0.80 Units 1 and 2 form a parallel subsytem in series with unit 3 and a three of five combination of units 4 through 8. What is the reliability of the complete system? Answer R = 0.01*[96 90 93 82 85 97 88 80]; Ra = parallel(R(1:2)) Ra = 0.9960 Rb = ckn(R(4:8),3) Rb = 0.9821 Rs = prod([Ra R(3) Rb]) Rs = 0.9097 Exercise $24$ How would the reliability of the system in Exercise 4.4.23. change if units 1, 2, and 3 formed a parallel combination in series with the three of five combination? Answer Rc = parallel(R(1:3)) Rc = 0.9997 Rss = prod([Rb Rc]) Rss = 0.9818 Exercise $25$ How would the reliability of the system in Exercise 4.4.23. change if the reliability of unit 3 were changed from 0.93 to 0.96? 
What change if the reliability of unit 2 were changed from 0.90 to 0.95 (with unit 3 unchanged)? Answer R1 = R; R1(3) =0.96; Ra = parallel(R1(1:2)) Ra = 0.9960 Rb = ckn(R1(4:8),3) Rb = 0.9821 Rs3 = prod([Ra R1(3) Rb]) Rs3 = 0.9390 R2 = R; R2(2) = 0.95; Ra = parallel(R2(1:2)) Ra = 0.9980 Rb = ckn(R2(4:8),3) Rb = 0.9821 Rs4 = prod([Ra R2(3) Rb]) Rs4 = 0.9115 Exercise $26$ Three fair dice are rolled. What is the probability at least one will show a six? Answer $P = 1 - (5/6)^3 = 0.4213$ Exercise $27$ A hobby shop finds that 35 percent of its customers buy an electronic game. If customers buy independently, what is the probability that at least one of the next five customers will buy an electronic game? Answer $P = 1 - 0.65^5 = 0.8840$ Exercise $28$ Under extreme noise conditions, the probability that a certain message will be transmitted correctly is 0.1. Successive messages are acted upon independently by the noise. Suppose the message is transmitted ten times. What is the probability it is transmitted correctly at least once? Answer $P = 1 - 0.9^{10} = 0.6513$ Exercise $29$ Suppose the class $\{A_i: 1 \le i \le n\}$ is independent, with $P(A_i) = p_i$, $1 \le i \le n$. What is the probability that at least one of the events occurs? What is the probability that none occurs? Answer $P1 = 1 -P0$, $P0 = \prod_{i = 1}^{n} (1 - p_i)$ Exercise $30$ In one hundred random digits, 0 through 9, with each possible digit equally likely on each choice, what is the probility 8 or more are sevens? Answer $P$ = cbinom(100, 0.1, 8) = 0.7939 Exercise $31$ Ten customers come into a store. If the probability is 0.15 that each customer will buy a television set, what is the probability the store will sell three or more? Answer $P$ = cbinom(10, 0.15, 3) = 0.1798 Exercise $32$ Seven similar units are put into service at time $t = 0$. The units fail independently. The probability of failure of any unit in the first 400 hours is 0.18. What is the probability that three or more units are still in operation at the end of 400 hours? Answer $P$ = cbinom(7, 0.82, 3) = 0.9971 Exercise $33$ A computer system has ten similar modules. The circuit has redundancy which ensures the system operates if any eight or more of the units are operative. Units fail independently, and the probability is 0.93 that any unit will survive between maintenance periods. What is the probability of no system failure due to these units? Answer $P$ = cbinom(10,0.93,8) = 0.9717 Exercise $34$ Only thirty percent of the items from a production line meet stringent requirements for a special job. Units from the line are tested in succession. Under the usual assumptions for Bernoulli trials, what is the probability that three satisfactory units will be found in eight or fewer trials? Answer $P$ = cbinom(8, 0.3, 3) = 0.4482 Exercise $35$ The probability is 0.02 that a virus will survive application of a certain vaccine. What is the probability that in a batch of 500 viruses, fifteen or more will survive treatment? Answer $P$ = cbinom(500, 0.02, 15) = 0.0814 Exercise $36$ In a shipment of 20,000 items, 400 are defective. These are scattered randomly throughout the entire lot. Assume the probability of a defective is the same on each choice. What is the probability that 1. Two or more will appear in a random sample of 35? 2. At most five will appear in a random sample of 50? Answer $P$1 = cbinom(35, 0.02, 2) = 0.1547. $P$2 = 1 – cbinom(35, 0.02, 6) = 0.9999 Exercise $37$ A device has probability $p$ of operating successfully on any trial in a sequence. 
What probability $p$ is necessary to ensure the probability of successes on all of the first four trials is 0.85? With that value of $p$, what is the probability of four or more successes in five trials? Answer $p = 0.85^{1/4}\0, \(P$ cbinom(5, $p$, 4) = 0.9854. Exercise $38$ A survey form is sent to 100 persons. If they decide independently whether or not to reply, and each has probability 1/4 of replying, what is the probability of $k$ or more replies, where $k = 15, 20, 25, 30, 35, 40$? Answer P = cbinom(100,1/4,15:5:40) P = 0.9946 0.9005 0.5383 0.1495 0.0164 0.0007 Exercise $39$ Ten numbers are produced by a random number generator. What is the probability four or more are less than or equal to 0.63? Answer $P$1 = cbinom(10, 0.63, 4) = 0.9644 Exercise $40$ A player rolls a pair of dice five times. She scores a “hit” on any throw if she gets a 6 or 7. She wins iff she scores an odd number of hits in the five throws. What is the probability a player wins on any sequence of five throws? Suppose she plays the game 20 successive times. What is the probability she wins at least 10 times? What is the probability she wins more than half the time? Answer Each roll yields a hit with probability $p = \dfrac{6}{36} + \dfrac{5}{36} = \dfrac{11}{36}$. PW = sum(ibinom(5,11/36,[1 3 5])) PW = 0.4956 P2 = cbinom(20,PW,10) P2 = 0.5724 P3 = cbinom(20,PW,11) P3 = 0.3963` Exercise $41$ Erica and John spin a wheel which turns up the integers 0 through 9 with equal probability. Results on various trials are independent. Each spins the wheel 10 times. What is the probability Erica turns up a seven more times than does John? Answer $P$ = ibinom(10, 0.1, 0:9) * cbinom(10, 0.1, 1:10)' = 0.3437 Exercise $42$ Erica and John play a different game with the wheel, above. Erica scores a point each time she gets an integer 0, 2, 4, 6, or 8. John scores a point each time he turns up a 1, 2, 5, or 7. If Erica spins eight times; John spins 10 times. What is the probability John makes more points than Erica? Answer $P$ = ibinom(8, 0.5, 0:8) * cbinom(10, 0.4, 1:9)' = 0.4030 Exercise $43$ A box contains 100 balls; 30 are red, 40 are blue, and 30 are green. Martha and Alex select at random, with replacement and mixing after each selection. Alex has a success if he selects a red ball; Martha has a success if she selects a blue ball. Alex selects seven times and Martha selects five times. What is the probability Martha has more successes than Alex? Answer $P$ = ibinom(7, 0.3, 0:4) * cbinom(5, 0.4, 1:5)' = 0.3613 Exercise $44$ Two players roll a fair die 30 times each. What is the probability that each rolls the same number of sixes? Answer $P$ = sum(ibinom(30, 1/6, 0:30).^2) = 0.1386 Exercise $45$ A warehouse has a stock of $n$ items of a certain kind, $r$ of which are defective. Two of the items are chosen at random, without replacement. What is the probability that at least one is defective? Show that for large $n$ the number is very close to that for selection with replacement, which corresponds to two Bernoulli trials with pobability $p = r/n$ of success on any trial. Answer $P1 = \dfrac{r}{n} \cdot \dfrac{r - 1}{n - 1} + \dfrac{r}{n} \cdot \dfrac{n - r}{n - 1} + \dfrac{n - r}{n} \cdot \dfrac{r}{n - 1} = \dfrac{(2n - 1)r - r^2}{n(n - 1)}$ $P2 = 1 - (\dfrac{r}{n})^2 = \dfrac{2nr - r^2}{n^2}$ Exercise $46$ A coin is flipped repeatedly, until a head appears. Show that with probability one the game will terminate. tip: The probability of not terminating in $n$ trials is $q^n$. 
Answer Let $N =$ event never terminates and $N_k =$ event does not terminate in $k$ plays. Then $N \subset N_k$ for all $k$ implies $0 \le P(N) \le P(N_k) = 1/2^k$ for all $k$, we conclude $P(N) = 0$. Exercise $47$ Two persons play a game consecutively until one of them is successful or there are ten unsuccesful plays. Let $E_i$ be the event of a success on the $i$th play of the game. Suppose {$E_i: 1 \le i$} is an independent class with $P(E_i) = p_1$ for i odd and $P(E_i) = p_2$ for $i$ even. Let $A$ be the event the first player wins, $B$ be the event the second player wins, and $C$ be the event that neither wins. 1. Express $A$, $B$, and $C$ in terms of the $E_i$. 2. Determine $P(A)$, $P(B)$, and $P(C)$ in terms of $p_1$, $p_2$, $q_1 = 1 - p_1$, and $q_2 = 1 - p_2$. Obtain numerical values for the case $p_1 = 1/4$ and $p_2 = 1/3$. 3. Use appropriate facts about the geometric series to show that $P(A) = P(B)$ iff $p_1 = p_2 / (1 + p_2)$. 4. Suppose $p_2 = 0.5$. Use the result of part (c) to find the value of $p_1$ to make $P(A) = P(B)$ and then determine $P(A)$, $P(B)$, and $P(C)$. Answer a. $C = \bigcap_{i = 1}^{10} E_i^c$. $A = E_1 \bigvee E_1^c E_2^c E_3 \bigvee E_1^c E_2^c E_3^c E_4^c E_5 \bigvee E_1^c E_2^c E_3^c E_4^c E_5^c E_6^c E_7 \bigvee E_1^c E_2^c E_3^c E_4^c E_5^c E_6^c E_7^c E_8^c E_9$ $B = E_1^c E_2 \bigvee E_1^c E_2^c E_3^c E_4 \bigvee E_1^c E_2^c E_3^c E_4^c E_5^c E_6 \bigvee E_1^c E_2^c E_3^c E_4^c E_5^c E_6^c E_7^c E_8 \bigvee E_1^c E_2^c E_3^c E_4^c E_5^c E_6^c E_7^c E_8^c E_9^c E_{10}$ $P(A) = p_1 [1 + q_1q_2 + (q_1q_2)^2 + (q_1 q_2)^3 + (q_1 q_2)^4] = p1 \dfrac{1 - (q_1 q_2)^5}{1 - q_1 q_2}$ $P(B) = q_1 p_2 \dfrac{1 - (q_1 q_2)^5}{1 - q_1 q_2} P(C) = (q_1q_2)^5$ For $p_1 = 1/4$, $p_2 = 1/3$, we have $q_1 q_2 = 1/2$ and $q_1 p_2 = 1/4$. In this case $P(A) = \dfrac{1}{4} \cdot \dfrac{31}{16} = 31/64 = 0.4844 = P(B), P(C) = 1/32$ Note that $P(A) + P(B) + P(C) = 1$. c. $P(A) = P(B)$ iff $p_1 = q_1p_2 = (1 - p_1)p_2$ iff $p_1 = p_2/(1 + p_2)$. d. $p_1 = 0.5/1.5 = 1/3$ Exercise $48$ Three persons play a game consecutively until one achieves his objective. Let $E_i$ be the event of a success on the $i$th trial, and suppose $\{E_i: 1 \le i\}$ is an independent class, with $P(E_i) = p_1$ for $i = 1, 4, 7, \cdot \cdot \cdot, P(E_i) = p_2$ for $i = 2, 5, 8, \cdot\cdot\cdot$, and $P(E_i) = p_3$ for $i = 3, 6, 9, \cdot\cdot\cdot$. Let $A, B, C$ be the respective events the first, second, and third player wins. a. Express $A, B$, and $C$ in terms of the $E_i$. b. Determine the probabilities in terms of $p_1, p_2, p_3$, then obtain numerical values in the case $p_1 = 1/4$, $p_2 = 1/3$, and $p_3 = 1/2$. Answer a. $A = E_1 \bigvee \bigvee_{k = 1}^{\infty} \bigcap_{i = 1}^{3k} E_i^c E_{3k + 1}$ $B = E_1^c E_2 \bigvee \bigvee_{k = 1}^{\infty} \bigcap_{i = 1}^{3k + 1} E_i^c E_{3k + 2}$ $C = E_1^c E_2^c E_3\bigvee \bigvee_{k = 1}^{\infty} \bigcap_{i = 1}^{3k + 2} E_i^c E_{3k + 3}$ b. $P(A) = p_1 \sum_{k = 0}^{\infty} (q_1 q_2 q_3)^k = \dfrac{p_1}{1 - q_1q_2q_3}$ $P(B) = \dfrac{q_1 p_2}{1 - q_1 q_2 q_3}$ $P(C) = \dfrac{q_1q_2p_3}{1 - q_1 q_2 q_3}$ For $p_1 = 1/4$, $p_2 = 1/3$. $p_3 = 1/2$, $P(A) = P(B) = P(C) = 1/3$. Exercise $49$ What is the probability of a success on the $i$th trial in a Bernoulli sequence of $n$ component trials, given there are $r$ successes? Answer $P(A_{rn} E_i = pC(n - 1, r - 1) p^{r - 1} q^{n - r}$ and $P(A_{rn}) = C(n, r) p^r q^{n - r}$. Hence $P(E_i| A_A rn) = C(n - 1, r - 1) / C(n, r) = r/n$. 
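A quick numerical check of this last result is easy with basic MATLAB; the values of $n$, $r$, and $p$ below are chosen arbitrarily for illustration.

>> n = 5; r = 2; p = 0.3; q = 1 - p;
>> PEiArn = p*nchoosek(n-1,r-1)*p^(r-1)*q^(n-r);   % P(A_rn E_i)
>> PArn = nchoosek(n,r)*p^r*q^(n-r);               % P(A_rn)
>> PEiArn/PArn                                     % equals r/n
ans = 0.4000
>> r/n
ans = 0.4000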
Exercise $50$

A device has $N$ similar components which may fail independently, with probability $p$ of failure of any component. The device fails if one or more of the components fails. In the event of failure of the device, the components are tested sequentially.

1. What is the probability the first defective unit tested is the $n$th, given one or more components have failed?
2. What is the probability the defective unit is the $n$th, given that exactly one has failed?
3. What is the probability that more than one unit has failed, given that the first defective unit is the $n$th?

Answer

Let $A_1$ = event of exactly one failure, $B_1$ = event of one or more failures, $B_2$ = event of two or more failures, and $F_n$ = the event the first defective unit found is the $n$th. Write $q = 1 - p$.

a. $F_n \subset B_1$ implies $P(F_n|B_1) = P(F_n)/P(B_1) = \dfrac{q^{n - 1}p}{1 - q^N}$

b. $P(F_n|A_1) = \dfrac{P(F_n A_1)}{P(A_1)} = \dfrac{q^{n - 1} p q^{N - n}}{Npq^{N -1}} = \dfrac{1}{N}$

c. The probability that not all of the units after the $n$th are good is $1 - q^{N - n}$, so that

$P(B_2|F_n) = \dfrac{P(B_2 F_n)}{P(F_n)} = \dfrac{q^{n - 1} p (1 - q^{N - n})}{q^{n - 1}p} = 1 - q^{N-n}$
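As with the preceding exercise, the three formulas can be checked numerically; the values of $N$, $p$, and $n$ below are assumed only for illustration.

>> N = 5; p = 0.2; q = 1 - p; n = 2;
>> P1 = q^(n-1)*p/(1 - q^N)                  % part a: P(F_n|B_1)
P1 = 0.2380
>> P2 = (q^(n-1)*p*q^(N-n))/(N*p*q^(N-1))    % part b: equals 1/N = 0.2
P2 = 0.2000
>> P3 = 1 - q^(N-n)                          % part c: P(B_2|F_n)
P3 = 0.4880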
5.1. Conditional Independence* The idea of stochastic (probabilistic) independence is explored in the unit Independence of Events. The concept is approached as lack of conditioning: $P(A|B) = P(A)$. This is equivalent to the product rule $P(AB) = P(A) P(B)$. We consider an extension to conditional independence. The concept Examination of the independence concept reveals two important mathematical facts: • Independence of a class of non mutually exclusive events depends upon the probability measure, and not on the relationship between the events. Independence cannot be displayed on a Venn diagram, unless probabilities are indicated. For one probability measure a pair may be independent while for another probability measure the pair may not be independent. • Conditional probability is a probability measure, since it has the three defining properties and all those properties derived therefrom. This raises the question: is there a useful conditional independence—i.e., independence with respect to a conditional probability measure? In this chapter we explore that question in a fruitful way. Among the simple examples of “operational independence” in the unit on independence of events, which lead naturally to an assumption of “probabilistic independence,” are the following: • If customers come into a well stocked shop at different times, each unaware of the choice made by the other, then the item purchased by one should not be affected by the choice made by the other. • If two students are taking exams in different courses, the grade one makes should not affect the grade made by the other. Example $1$ Buying umbrellas and the weather A department store has a nice stock of umbrellas. Two customers come into the store “independently.” Let A be the event the first buys an umbrella and B the event the second buys an umbrella. Normally, we should think the events {$A, B$} form an independent pair. But consider the effect of weather on the purchases. Let C be the event the weather is rainy (i.e., is raining or threatening to rain). Now we should think $P(A|C) > P(A|C^c)$ and $P(B|C) > P(B|C^c)$. The weather has a decided effect on the likelihood of buying an umbrella. But given the fact the weather is rainy (event C has occurred), it would seem reasonable that purchase of an umbrella by one should not affect the likelihood of such a purchase by the other. Thus, it may be reasonable to suppose $P(A|C) = P(A|BC)$ or, in another notation, $P_C(A) = P_C(A|B)$ An examination of the sixteen equivalent conditions for independence, with probability measure $P$ replaced by probability measure $P_C$, shows that we have independence of the pair {$A, B$} with respect to the conditional probability measure $P_C(\cdot) = P(\cdot |C)$. Thus, $P(AB|C) = P(A|C) P(B|C)$. For this example, we should also expect that $P(A|C^c) = P(A|BC^c)$, so that there is independence with respect to the conditional probability measure $P(\cdot |C^c)$. Does this make the pair {$A, B$} independent (with respect to the prior probability measure $P$)? Some numerical examples make it plain that only in the most unusual cases would the pair be independent. Without calculations, we can see why this should be so. If the first customer buys an umbrella, this indicates a higher than normal likelihood that the weather is rainy, in which case the second customer is likely to buy. The condition leads to $P(B|A) > P(B)$. Consider the following numerical case.
Suppose $P(AB|C) = P(A|C)P(B|C)$ and $P(AB|C^c) = P(A|C^c) P(B|C^c)$ and $P(A|C) = 0.60$, $P(A|C^c) = 0.20$, $P(B|C) = 0.50$, $P(B|C^c) = 0.15$, with $P(C) = 0.30$. Then $P(A) = P(A|C) P(C) + P(A|C^c) P(C^c) = 0.3200$ $P(B) = P(B|C) P(C) + P(B|C^c) P(C^c) = 0.2550$ $P(AB) = P(AB|C) P(C) + P(AB|C^c) P(C^c) = P(A|C) P(B|C) P(C) + P(A|C^c) P(B|C^c) P(C^c) = 0.1110$ As a result, $P(A) P(B) = 0.0816 \ne 0.1110 = P(AB)$ The product rule fails, so that the pair is not independent. An examination of the pattern of computation shows that independence would require very special probabilities which are not likely to be encountered. Example $2$ Students and exams Two students take exams in different courses. Under normal circumstances, one would suppose their performances form an independent pair. Let A be the event the first student makes grade 80 or better and B be the event the second has a grade of 80 or better. The exam is given on Monday morning. It is the fall semester. There is a probability 0.30 that there was a football game on Saturday, and both students are enthusiastic fans. Let C be the event of a game on the previous Saturday. Now it is reasonable to suppose $P(A|C) = P(A|BC)$ and $P(A|C^c) = P(A|BC^c)$ If we know that there was a Saturday game, additional knowledge that B has occurred does not affect the likelihood that A occurs. Again, use of equivalent conditions shows that the situation may be expressed $P(AB|C) = P(A|C) P(B|C)$ and $P(AB|C^c) = P(A|C^c) P(B|C^c)$ Under these conditions, we should suppose that $P(A|C) < P(A|C^c)$ and $P(B|C) < P(B|C^c)$. If we knew that one did poorly on the exam, this would increase the likelihood there was a Saturday game and hence increase the likelihood that the other did poorly. The failure to be independent arises from a common chance factor that affects both. Although their performances are “operationally” independent, they are not independent in the probability sense. As a numerical example, suppose $P(A|C) = 0.7$ $P(A|C^c) = 0.9$ $P(B|C) = 0.6$ $P(B|C^c) = 0.8$ $P(C) = 0.3$ Straightforward calculations show $P(A) = 0.8400$, $P(B) = 0.7400$, $P(AB) = 0.6300$. Note that $P(A|B) = 0.8514 > P(A)$ as would be expected. Sixteen equivalent conditions Using the facts on repeated conditioning and the equivalent conditions for independence, we may produce a similar table of equivalent conditions for conditional independence. In the hybrid notation we use for repeated conditioning, we write $P_C(A|B) = P_C(A)$ or $P_C(AB) = P_C(A)P_C(B)$ This translates into $P(A|BC) = P(A|C)$ or $P(AB|C) = P(A|C) P(B|C)$ If it is known that $C$ has occurred, then additional knowledge of the occurrence of $B$ does not change the likelihood of $A$. If we write the sixteen equivalent conditions for independence in terms of the conditional probability measure $P_C(\cdot)$, then translate as above, we have the following equivalent conditions. Table 5.1. Sixteen equivalent conditions
$P(A|BC) = P(A|C)$   $P(B|AC) = P(B|C)$   $P(AB|C) = P(A|C) P(B|C)$
$P(A|B^c C) = P(A|C)$   $P(B^c|AC) = P(B^c|C)$   $P(AB^c|C) = P(A|C) P(B^c|C)$
$P(A^c| BC) = P(A^c|C)$   $P(B|A^c C) = P(B|C)$   $P(A^cB|C) = P(A^c|C) P(B|C)$
$P(A^c|B^cC) = P(A^c|C)$   $P(B^c|A^cC) = P(B^c|C)$   $P(A^cB^c|C) = P(A^c|C) P(B^c|C)$
Table 5.2.
$P(A|BC) = P(A|B^c C)$   $P(A^c|BC) = P(A^c|B^c C)$
$P(B|AC) = P(B|A^cC)$   $P(B^c|AC) = P(B^c|A^cC)$
The patterns of conditioning in the examples above belong to this set. In a given problem, one or the other of these conditions may seem a reasonable assumption.
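As a quick MATLAB check of the two numerical cases in Examples 1 and 2 above (these lines are added for illustration and are not part of the original text; the variable names are arbitrary):
% Example 1 (umbrellas)
PC = 0.3; PAC = 0.6; PACc = 0.2; PBC = 0.5; PBCc = 0.15;
PA  = PAC*PC + PACc*(1-PC)                 % 0.3200
PB  = PBC*PC + PBCc*(1-PC)                 % 0.2550
PAB = PAC*PBC*PC + PACc*PBCc*(1-PC)        % 0.1110
PA*PB                                      % 0.0816, not equal to P(AB)
% Example 2 (students and exams)
PC = 0.3; PAC = 0.7; PACc = 0.9; PBC = 0.6; PBCc = 0.8;
PA  = PAC*PC + PACc*(1-PC)                 % 0.8400
PB  = PBC*PC + PBCc*(1-PC)                 % 0.7400
PAB = PAC*PBC*PC + PACc*PBCc*(1-PC)        % 0.6300
PAB/PB                                     % P(A|B) = 0.8514 > P(A)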
As soon as one of these patterns is recognized, then all are equally valid assumptions. Because of its simplicity and symmetry, we take as the defining condition the product rule $P(AB|C) = P(A|C) P(B|C)$. Definition A pair of events {$A, B$} is said to be conditionally independent, given C, designated {$A, B$} ci $|C$, iff the following product rule holds: $P(AB|C) = P(A|C) P(B|C)$. The equivalence of the four entries in the right hand column of the upper part of the table establishes The replacement rule If any of the pairs {$A, B$}, {$A, B^c$}, {$A^c, B$} or {$A^c, B^c$} is conditionally independent, given C, then so are the others. — □ This may be expressed by saying that if a pair is conditionally independent, we may replace either or both by their complements and still have a conditionally independent pair. To illustrate further the usefulness of this concept, we note some other common examples in which similar conditions hold: there is operational independence, but some chance factor which affects both. • Two contractors work quite independently on jobs in the same city. The operational independence suggests probabilistic independence. However, both jobs are outside and subject to delays due to bad weather. Suppose A is the event the first contractor completes his job on time and B is the event the second completes on time. If C is the event of “good” weather, then arguments similar to those in Examples 1 and 2 make it seem reasonable to suppose {$A, B$} ci $|C$ and {$A, B$} ci $|C^c$. Remark. In formal probability theory, an event must be sharply defined: on any trial it occurs or it does not. The event of “good weather” is not so clearly defined. Did a trace of rain or thunder in the area constitute bad weather? Did rain delay on one day in a month long project constitute bad weather? Even with this ambiguity, the pattern of probabilistic analysis may be useful. • A patient goes to a doctor. A preliminary examination leads the doctor to think there is a thirty percent chance the patient has a certain disease. The doctor orders two independent tests for conditions that indicate the disease. Are results of these tests really independent? There is certainly operational independence—the tests may be done by different laboratories, neither aware of the testing by the other. Yet, if the tests are meaningful, they must both be affected by the actual condition of the patient. Suppose D is the event the patient has the disease, A is the event the first test is positive (indicates the conditions associated with the disease) and B is the event the second test is positive. Then it would seem reasonable to suppose {$A, B$} ci $|D$ and {$A, B$} ci $|D^c$. In the examples considered so far, it has been reasonable to assume conditional independence, given an event C, and conditional independence, given the complementary event. But there are cases in which the effect of the conditioning event is asymmetric. We consider several examples. • Two students are working on a term paper. They work quite separately. They both need to borrow a certain book from the library. Let C be the event the library has two copies available. If A is the event the first completes on time and B the event the second is successful, then it seems reasonable to assume {$A, B$} ci $|C$. However, if only one book is available, then the two conditions would not be conditionally independent.
In general $P(B|AC^c) < P(B|C^c)$, since if the first student completes on time, then he or she must have been successful in getting the book, to the detriment of the second. • If the two contractors of the example above both need material which may be in scarce supply, then successful completion would be conditionally independent, given an adequate supply, whereas they would not be conditionally independent, given a short supply. • Two students in the same course take an exam. If they prepared separately, the events of their getting good grades should be conditionally independent. If they study together, then the likelihoods of good grades would not be independent. With neither cheating nor collaborating on the test itself, if one does well, the other should also. Since conditional independence is ordinary independence with respect to a conditional probability measure, it should be clear how to extend the concept to larger classes of sets. Definition A class $\{A_i: i \in J\}$, where $J$ is an arbitrary index set, is conditionally independent, given event $C$, denoted $\{A_i: i \in J\}$ ci $|C$, iff the product rule holds for every finite subclass of two or more. As in the case of simple independence, the replacement rule extends. The replacement rule If the class $\{A_i: i \in J\}$ ci $|C$, then any or all of the events $A_i$ may be replaced by their complements and still have a conditionally independent class. The use of independence techniques Since conditional independence is independence, we may use independence techniques in the solution of problems. We consider two types of problems: an inference problem and a conditional Bernoulli sequence. Example $3$ Use of independence techniques Sharon is investigating a business venture which she thinks has probability 0.7 of being successful. She checks with five “independent” advisers. If the prospects are sound, the probabilities are 0.8, 0.75, 0.6, 0.9, and 0.8 that the advisers will advise her to proceed; if the venture is not sound, the respective probabilities are 0.75, 0.85, 0.7, 0.9, and 0.7 that the advice will be negative. Given the quality of the project, the advisers are independent of one another in the sense that no one is affected by the others. Of course, they are not independent, for they are all related to the soundness of the venture. We may reasonably assume conditional independence of the advice, given that the venture is sound and also given that the venture is not sound. If Sharon goes with the majority of advisers, what is the probability she will make the right decision? Solution If the project is sound, Sharon makes the right choice if three or more of the five advisers are positive. If the venture is unsound, she makes the right choice if three or more of the five advisers are negative. Let $H =$ the event the project is sound, $F =$ the event three or more advisers are positive, $G = F^c =$ the event three or more are negative, and $E =$ the event of the correct decision. Then $P(E) = P(FH) + P(GH^c) = P(F|H) P(H) + P(G|H^c) P(H^c)$ Let $E_i$ be the event the $i$th adviser is positive. Then $P(F|H) =$ the sum of probabilities of the form $P(M_k|H)$, where $M_k$ are minterms generated by the class $\{E_i : 1 \le i \le 5\}$. Because of the assumed conditional independence, $P(E_1 E_2^c E_3^c E_4 E_5|H) = P(E_1|H) P(E_2^c|H) P(E_3^c|H) P(E_4|H) P(E_5|H)$ with similar expressions for each $P(M_k|H)$ and $P(M_k|H^c)$.
This means that if we want the probability of three or more successes, given $H$, we can use ckn with the matrix of conditional probabilities. The following MATLAB solution of the investment problem is indicated. P1 = 0.01*[80 75 60 90 80]; P2 = 0.01*[75 85 70 90 70]; PH = 0.7; PE = ckn(P1,3)*PH + ckn(P2,3)*(1 - PH) PE = 0.9255 Often a Bernoulli sequence is related to some conditioning event H. In this case it is reasonable to assume the sequence $\{E_i : 1 \le i \le n\}$ ci $|H$ and ci $|H^c$. We consider a simple example. Example $4$ Test of a claim A race track regular claims he can pick the winning horse in any race 90 percent of the time. In order to test his claim, he picks a horse to win in each of ten races. There are five horses in each race. If he is simply guessing, the probability of success on each race is 0.2. Consider the trials to constitute a Bernoulli sequence. Let $H$ be the event he is correct in his claim. If $S$ is the number of successes in picking the winners in the ten races, determine $P(H|S = k)$ for various numbers $k$ of correct picks. Suppose it is equally likely that his claim is valid or that he is merely guessing. We assume two conditional Bernoulli trials: claim is valid: Ten trials, probability $p = P(E_i | H) = 0.9$. Guessing at random: Ten trials, probability $p = P(E_i|H^c) = 0.2$. Let $S=$ number of correct picks in ten trials. Then $\dfrac{P(H|S = k)}{P(H^c|S = k)} = \dfrac{P(H)}{P(H^c)} \cdot \dfrac{P(S = k|H)}{P(S = k|H^c)}$, $0 \le k \le 10$ Giving him the benefit of the doubt, we suppose $P(H)/P(H^c) = 1$ and calculate the conditional odds. k = 0:10; Pk1 = ibinom(10,0.9,k); % Probability of k successes, given H Pk2 = ibinom(10,0.2,k); % Probability of k successes, given H^c OH = Pk1./Pk2; % Conditional odds-- Assumes P(H)/P(H^c) = 1 e = OH > 1; % Selects favorable odds disp(round([k(e);OH(e)]')) 6 2 % Needs at least six to have creditability 7 73 % Seven would be creditable, 8 2627 % even if P(H)/P(H^c) = 0.1 9 94585 10 3405063 Under these assumptions, he would have to pick at least seven correctly to give reasonable validation of his claim.
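As a brief follow-up (not in the original text), the conditional odds from the session above can be converted into posterior probabilities $P(H|S = k)$, still under the assumption $P(H)/P(H^c) = 1$; this reuses the variables k and OH computed above.
PHk = OH./(1 + OH);        % posterior probabilities P(H|S = k)
disp([k;PHk]')             % e.g., k = 7 gives roughly 73/74, about 0.99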
Some Patterns of Probable Inference We are concerned with the likelihood of some hypothesized condition. In general, we have evidence for the condition which can never be absolutely certain. We are forced to assess probabilities (likelihoods) on the basis of the evidence. Some typical examples: Table 5.3.
HYPOTHESIS              EVIDENCE
Job success             Personal traits
Presence of oil         Geological structures
Operation of a device   Physical condition
Market condition        Test market condition
Presence of a disease   Tests for symptoms
If $H$ is the event the hypothetical condition exists and $E$ is the event the evidence occurs, the probabilities available are usually $P(H)$ (or an odds value), $P(E|H)$, and $P(E|H^c)$. What is desired is $P(H|E)$ or, equivalently, the odds $P(H|E)/P(H^c|E)$. We simply use Bayes' rule to reverse the direction of conditioning. $\dfrac{P(H|E)}{P(H^c|E)} = \dfrac{P(E|H)}{P(E|H^c)} \cdot \dfrac{P(H)}{P(H^c)}$ No conditional independence is involved in this case. Independent evidence for the hypothesized condition Suppose there are two “independent” bits of evidence. Now obtaining this evidence may be “operationally” independent, but if the items both relate to the hypothesized condition, then they cannot be really independent. The condition assumed is usually of the form $P(E_1|H) = P(E_1|HE_2)$ —if $H$ occurs, then knowledge of $E_2$ does not affect the likelihood of $E_1$. Similarly, we usually have $P(E_1|H^c) = P(E_1|H^cE_2)$. Thus $\{E_1, E_2\}$ ci $|H$ and $\{E_1, E_2\}$ ci $|H^c$. Example $1$ Independent medical tests Suppose a doctor thinks the odds are 2/1 that a patient has a certain disease. She orders two independent tests. Let $H$ be the event the patient has the disease and $E_1$ and $E_2$ be the events the tests are positive. Suppose the first test has probability 0.1 of a false positive and probability 0.05 of a false negative. The second test has probabilities 0.05 and 0.08 of false positive and false negative, respectively. If both tests are positive, what is the posterior probability the patient has the disease? Solution Assuming $\{E_1, E_2\}$ ci $|H$ and ci $|H^c$, we work first in terms of the odds, then convert to probability. $\dfrac{P(H|E_1 E_2)}{P(H^c|E_1 E_2)} = \dfrac{P(H)}{P(H^c)} \cdot \dfrac{P(E_1E_2|H)}{P(E_1E_2|H^c)} = \dfrac{P(H)}{P(H^c)} \cdot \dfrac{P(E_1|H) P(E_2|H)}{P(E_1|H^c) P(E_2|H^c)}$ The data are $P(H)/P(H^c) = 2$, $P(E_1|H) = 0.95$, $P(E_1|H^c) = 0.1$, $P(E_2|H) = 0.92$, $P(E_2|H^c) = 0.05$ Substituting values, we get $\dfrac{P(H|E_1E_2)}{P(H^c|E_1E_2)} = 2 \cdot \dfrac{0.95 \cdot 0.92}{0.10 \cdot 0.05} = \dfrac{1748}{5}$ so that $P(H|E_1E_2) = \dfrac{1748}{1753} = 1 - \dfrac{5}{1753} = 1 - 0.0029$ Evidence for a symptom Sometimes the evidence dealt with is not evidence for the hypothesized condition, but for some condition which is stochastically related. For purposes of exposition, we refer to this intermediary condition as a symptom. Consider again the examples above. Table 5.4.
HYPOTHESIS              SYMPTOM                  EVIDENCE
Job success             Personal traits          Diagnostic test results
Presence of oil         Geological structures    Geophysical survey results
Operation of a device   Physical condition       Monitoring report
Market condition        Test market condition    Market survey result
Presence of a disease   Physical symptom         Test for symptom
We let $S$ be the event the symptom is present. The usual case is that the evidence is directly related to the symptom and not the hypothesized condition.
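A quick MATLAB check of Example 1 above (added for illustration; the variable names are not from the text):
prior = 2;                          % P(H)/P(H^c)
PE1H = 0.95; PE1Hc = 0.10;          % test 1: false negative 0.05, false positive 0.10
PE2H = 0.92; PE2Hc = 0.05;          % test 2: false negative 0.08, false positive 0.05
odds = prior*PE1H*PE2H/(PE1Hc*PE2Hc)   % 349.6 = 1748/5
PH = odds/(1 + odds)                   % 0.9971, in agreement with 1748/1753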
The diagnostic test results can say something about an applicant's personal traits, but cannot deal directly with the hypothesized condition. The test results would be the same whether or not the candidate is successful in the job (he or she does not have the job yet). A geophysical survey deals with certain structural features beneath the surface. If a fault or a salt dome is present, the geophysical results are the same whether or not there is oil present. The physical monitoring report deals with certain physical characteristics. Its reading is the same whether or not the device will fail. A market survey treats only the condition in the test market. The results depend upon the test market, not the national market. A blood test may be for certain physical conditions which frequently are related (at least statistically) to the disease. But the result of the blood test for the physical condition is not directly affected by the presence or absence of the disease. Under conditions of this type, we may assume $P(E|SH) = P(E|SH^c)$ and $P(E|S^cH) = P(E|S^cH^c)$ These imply $\{E, H\}$ ci $|S$ and ci $|S^c$. Now $\dfrac{P(H|E)}{P(H^c|E)} = \dfrac{P(HE)}{P(H^cE)} = \dfrac{P(HES) + P(HES^c)}{P(H^cES) + P(H^c E S^c)} = \dfrac{P(HS) P(E|HS) + P(HS^c) P(E|HS^c)}{P(H^cS)P(E|H^cS) + P(H^cS^c) P(E|H^cS^c)}$ $=\dfrac{P(HS) P(E|S) + P(HS^c) P(E|S^c)}{P(H^cS) P(E|S) + P(H^cS^c) P(E|S^c)}$ It is worth noting that each term in the denominator differs from the corresponding term in the numerator by having $H^c$ in place of $H$. Before completing the analysis, it is necessary to consider how $H$ and $S$ are related stochastically in the data. Four cases may be considered.
a. Data are $P(S|H)$, $P(S|H^c)$, and $P(H)$.
b. Data are $P(S|H)$, $P(S|H^c)$, and $P(S)$.
c. Data are $P(H|S)$, $P(H|S^c)$, and $P(S)$.
d. Data are $P(H|S)$, $P(H|S^c)$, and $P(H)$.
Case a: $\dfrac{P(H|E)}{P(H^c|E)} = \dfrac{P(H) P(S|H) P(E|S) + P(H) P(S^c|H) P(E|S^c)}{P(H^c) P(S|H^c) P(E|S) + P(H^c) P(S^c|H^c) P(E|S^c)}$ Example $2$ Geophysical survey Let $H$ be the event of a successful oil well, $S$ be the event there is a geophysical structure favorable to the presence of oil, and $E$ be the event the geophysical survey indicates a favorable structure. We suppose $\{H, E\}$ ci $|S$ and ci $|S^c$. Data are $P(H)/P(H^c) = 3$, $P(S|H) = 0.92$, $P(S|H^c) = 0.20$, $P(E|S) = 0.95$, $P(E|S^c) = 0.15$ Then $\dfrac{P(H|E)}{P(H^c|E)} = 3 \cdot \dfrac{0.92 \cdot 0.95 + 0.08 \cdot 0.15}{0.20 \cdot 0.95 + 0.80 \cdot 0.15} = \dfrac{1329}{155} = 8.5742$ so that $P(H|E) = 1 - \dfrac{155}{1484}= 0.8956$ The geophysical result moved the prior odds of 3/1 to posterior odds of 8.6/1, with a corresponding change of probabilities from 0.75 to 0.90. Case b: Data are $P(S)$, $P(S|H)$, $P(S|H^c)$, $P(E|S)$, and $P(E|S^c)$. If we can determine $P(H)$, we can proceed as in case a. Now by the law of total probability $P(S) = P(S|H) P(H) + P(S|H^c)[1 - P(H)]$ which may be solved algebraically to give $P(H) = \dfrac{P(S) - P(S|H^c)}{P(S|H) - P(S|H^c)}$ Example $3$ Geophysical survey revisited In many cases a better estimate of $P(S)$ or the odds $P(S)/P(S^c)$ can be made on the basis of previous geophysical data. Suppose the prior odds for $S$ are 3/1, so that $P(S) = 0.75$.
Using the other data in Example 2, we have $P(H) = \dfrac{P(S) - P(S|H^c)}{P(S|H) - P(S|H^c)} = \dfrac{0.75-0.20}{0.92-0.20} = 55/72$, so that $\dfrac{P(H)}{P(H^c)} = 55/17$ Using the pattern of case a, we have $\dfrac{P(H|E)}{P(H^c|E)} = \dfrac{55}{17} \cdot \dfrac{0.92 \cdot 0.95 + 0.08 \cdot 0.15}{0.20 \cdot 0.95 + 0.80 \cdot 0.15} = \dfrac{4873}{527} = 9.2467$ so that $P(H|E) = 1 - \dfrac{527}{5400} = 0.9024$ Usually data relating test results to symptom are of the form $P(E|S)$ and $P(E|S^c)$, or equivalent. Data relating the symptom and the hypothesized condition may go either way. In cases a and b, the data are in the form $P(S|H)$ and $P(S|H^c)$, or equivalent, derived from data showing the fraction of times the symptom is noted when the hypothesized condition is identified. But these data may go in the opposite direction, yielding $P(H|S)$ and $P(H|S^c)$, or equivalent. This is the situation in cases c and d. Case c: Data are $P(E|S)$, $P(E|S^c)$, $P(H|S)$, $P(H|S^c)$ and $P(S)$. Example $4$ Evidence for a disease symptom with prior P(S) When a certain blood syndrome is observed, a given disease is indicated 93 percent of the time. The disease is found without this syndrome only three percent of the time. A test for the syndrome has probability 0.03 of a false positive and 0.05 of a false negative. A preliminary examination indicates a probability 0.30 that a patient has the syndrome. A test is performed; the result is negative. What is the probability the patient has the disease? Solution In terms of the notation above, the data are $P(S) = 0.30$, $P(E|S^c) = 0.03$, $P(E^c|S) = 0.05$ $P(H|S) = 0.93$, and $P(H|S^c) = 0.03$ We suppose $\{H, E\}$ ci $|S$ and ci $|S^c$. $\dfrac{P(H|E^c)}{P(H^c|E^c)} = \dfrac{P(S) P(H|S) P(E^c|S) + P(S^c) P(H|S^c) P(E^c|S^c)}{P(S) P(H^c|S) P(E^c|S) + P(S^c)P(H^c|S^c) P(E^c|S^c)}$ $=\dfrac{0.30 \cdot 0.93 \cdot 0.05 + 0.70 \cdot 0.03 \cdot 0.97}{0.30 \cdot 0.07 \cdot 0.05 + 0.70 \cdot 0.97 \cdot 0.97} = \dfrac{429}{8246}$ which implies $P(H|E^c) = 429/8675 \approx 0.05$ Case d: This differs from case c only in the fact that a prior probability for $H$ is assumed. In this case, we determine the corresponding probability for $S$ by $P(S) = \dfrac{P(H) - P(H|S^c)}{P(H|S) - P(H|S^c)}$ and use the pattern of case c. Example $5$ Evidence for a disease symptom with prior P(H) Suppose for the patient in Example 4 the physician estimates the odds favoring the presence of the disease are 1/3, so that $P(H) = 0.25$. Again, the test result is negative. Determine the posterior odds, given $E^c$. Solution First we determine $P(S) = \dfrac{P(H) - P(H|S^c)}{P(H|S) - P(H|S^c)} = \dfrac{0.25 - 0.03}{0.93 - 0.03} = 11/45$ Then $\dfrac{P(H|E^c)}{P(H^c|E^c)} = \dfrac{(11/45) \cdot 0.93 \cdot 0.05 + (34/45) \cdot 0.03 \cdot 0.97}{(11/45) \cdot 0.07 \cdot 0.05 + (34/45) \cdot 0.97 \cdot 0.97} = \dfrac{15009}{320291} = 0.047$ The result of the test drops the prior odds of 1/3 to approximately 1/21. Independent evidence for a symptom In the previous cases, we consider only a single item of evidence for a symptom. But it may be desirable to have a “second opinion.” We suppose the tests are for the symptom and are not directly related to the hypothetical condition. If the tests are operationally independent, we could reasonably assume
$P(E_1|SE_2) = P(E_1 |SE_2^c)$   $\{E_1, E_2\}$ ci $|S$
$P(E_1|SH) = P(E_1|SH^c)$   $\{E_1, H\}$ ci $|S$
$P(E_2|SH) = P(E_2|SH^c)$   $\{E_2, H\}$ ci $|S$
$P(E_1E_2|SH) = P(E_1E_2|SH^c)$
This implies $\{E_1, E_2, H\}$ ci $|S$.
A similar condition holds for $S^c$. As for a single test, there are four cases, depending on the tie between $S$ and $H$. We consider a "case a" example. Example $6$ A market survey problem A food company is planning to market nationally a new breakfast cereal. Its executives feel confident that the odds are at least 3 to 1 the product would be successful. Before launching the new product, the company decides to investigate a test market. Previous experience indicates that the reliability of the test market is such that if the national market is favorable, there is probability 0.9 that the test market is also. On the other hand, if the national market is unfavorable, there is a probability of only 0.2 that the test market will be favorable. These facts lead to the following analysis. Let $H$ be the event the national market is favorable (hypothesis) $S$ be the event the test market is favorable (symptom) The initial data are the following probabilities, based on past experience: • (a) Prior odds: $P(H)/P(H^c) = 3$ • (b) Reliability of the test market: $P(S|H) = 0.9$ $P(S|H^c) = 0.2$ If it were known that the test market is favorable, we should have $\dfrac{P(H|S)}{P(H^c|S)} = \dfrac{P(S|H) P(H)}{P(S|H^c)P(H^c)} = \dfrac{0.9}{0.2} \cdot 3 = 13.5$ Unfortunately, it is not feasible to know with certainty the state of the test market. The company decision makers engage two market survey companies to make independent surveys of the test market. The reliability of the companies may be expressed as follows. Let $E_1$ be the event the first company reports a favorable test market. $E_2$ be the event the second company reports a favorable test market. On the basis of previous experience, the reliability of the evidence about the test market (the symptom) is expressed in the following conditional probabilities. $P(E_1|S) = 0.9$ $P(E_1|S^c) = 0.3$ $P(E_2|S) = 0.8$ $P(E_2|S^c) = 0.2$ Both survey companies report that the test market is favorable. What is the probability the national market is favorable, given this result? Solution The two survey firms work in an “operationally independent” manner. The report of either company is unaffected by the work of the other. Also, each report is affected only by the condition of the test market— regardless of what the national market may be. According to the discussion above, we should be able to assume $\{E_1, E_2, H\}$ ci $|S$ and $\{E_1, E_2, H\}$ ci $|S^c$ We may use a pattern similar to that in Example 2, as follows: $\dfrac{P(H|E_1 E_2)}{P(H^c |E_1 E_2)} = \dfrac{P(H)}{P(H^c)} \cdot \dfrac{P(S|H) P(E_1|S)P(E_2|S) + P(S^c|H) P(E_1|S^c) P(E_2|S^c)}{P(S|H^c) P(E_1|S) P(E_2|S) + P(S^c|H^c) P(E_1|S^c) P(E_2|S^c)}$ $= 3 \cdot \dfrac{0.9 \cdot 0.9 \cdot 0.8 + 0.1 \cdot 0.3 \cdot 0.2}{0.2 \cdot 0.9 \cdot 0.8 + 0.8 \cdot 0.3 \cdot 0.2} = \dfrac{327}{32} \approx 10.22$ in terms of the posterior probability, we have $P(H|E_1E_2) = \dfrac{327/32}{1 + 327/32} = \dfrac{327}{359} = 1 - \dfrac{32}{359} \approx 0.91$ We note that the odds favoring $H$, given positive indications from both survey companies, are 10.2 as compared with the odds favoring $H$, given a favorable test market, of 13.5. The difference reflects the residual uncertainty about the test market after the market surveys. Nevertheless, the results of the market surveys increase the odds favoring a satisfactory market from the prior 3 to 1 to a posterior 10.2 to 1.
In terms of probabilities, the market surveys increase the likelihood of a favorable market from the original $P(H) = 0.75$ to the posterior $P(H|E_1 E_2) = 327/359 \approx 0.91$. The conditional independence of the results of the survey makes possible direct use of the data. A classification problem A population consists of members of two subgroups. It is desired to formulate a battery of questions to aid in identifying the subclass membership of randomly selected individuals in the population. The questions are designed so that for each individual the answers are independent, in the sense that the answers to any subset of these questions are not affected by and do not affect the answers to any other subset of the questions. The answers are, however, affected by the subgroup membership. Thus, our treatment of conditional independence suggests that it is reasonable to suppose the answers are conditionally independent, given the subgroup membership. Consider the following numerical example. Example $7$ A classification problem A sample of 125 subjects is taken from a population which has two subgroups. The subgroup membership of each subject in the sample is known. Each individual is asked a battery of ten questions designed to be independent, in the sense that the answer to any one is not affected by the answer to any other. The subjects answer independently. Data on the results are summarized in the following table: Table 5.5.
     GROUP 1 (69 members)      GROUP 2 (56 members)
Q    Yes   No   Unc.           Yes   No   Unc.
1     42   22    5              20   31    5
2     34   27    8              16   37    3
3     15   45    9              33   19    4
4     19   44    6              31   18    7
5     22   43    4              23   28    5
6     41   13   15              14   37    5
7      9   52    8              31   17    8
8     40   26    3              13   38    5
9     48   12    9              27   24    5
10    20   37   12              35   16    5
Assume the data represent the general population consisting of these two groups, so that the data may be used to calculate probabilities and conditional probabilities. Several persons are interviewed. The result of each interview is a “profile” of answers to the questions. The goal is to classify the person in one of the two subgroups on the basis of the profile of answers. The following profiles were taken. • Y, N, Y, N, Y, U, N, U, Y, U • N, N, U, N, Y, Y, U, N, N, Y • Y, Y, N, Y, U, U, N, N, Y, Y Classify each individual in one of the subgroups. Solution Let $G_1 =$ the event the person selected is from group 1, and $G_2 = G_1^c =$ the event the person selected is from group 2. Let $A_i$ = the event the answer to the $i$th question is “Yes” $B_i$ = the event the answer to the $i$th question is “No” $C_i$ = the event the answer to the $i$th question is “Uncertain” The data are taken to mean $P(A_1|G_1) = 42/69$, $P(B_3|G_2) = 19/56$, etc. The profile Y, N, Y, N, Y, U, N, U, Y, U corresponds to the event $E = A_1 B_2 A_3 B_4 A_5 C_6 B_7 C_8 A_9 C_{10}$ We utilize the ratio form of Bayes' rule to calculate the posterior odds $\dfrac{P(G_1|E)}{P(G_2|E)} = \dfrac{P(E|G_1)}{P(E|G_2)} \cdot \dfrac{P(G_1)}{P(G_2)}$ If the ratio is greater than one, classify in group 1; otherwise classify in group 2 (we assume that a ratio exactly one is so unlikely that we can neglect it). Because of conditional independence, we are able to determine the conditional probabilities $P(E|G_1) = \dfrac{42 \cdot 27 \cdot 15 \cdot 44 \cdot 22 \cdot 15 \cdot 52 \cdot 3 \cdot 48 \cdot 12}{69^{10}}$ and $P(E|G_2) = \dfrac{20 \cdot 37 \cdot 33 \cdot 18 \cdot 23 \cdot 5 \cdot 17 \cdot 5 \cdot 27 \cdot 5}{56^{10}}$ The odds $P(G_1)/P(G_2) = 69/56$.
We find the posterior odds to be $\dfrac{P(G_1 |E)}{P(G_2|E)} = \dfrac{42 \cdot 27 \cdot 15 \cdot 44 \cdot 22 \cdot 15 \cdot 52 \cdot 3 \cdot 48 \cdot 12}{20 \cdot 37 \cdot 33 \cdot 18 \cdot 23 \cdot 5 \cdot 17 \cdot 5 \cdot 27 \cdot 5} \cdot \dfrac{56^9}{69^9} = 5.85$ The factor $56^{9} /69^{9}$ comes from multiplying $56^{10}/69^{10}$ by the odds $P(G_1)/P(G_2) = 69/56$. Since the resulting posterior odds favoring Group 1 is greater than one, we classify the respondent in group 1. While the calculations are simple and straightforward, they are tedious and error prone. To make possible rapid and easy solution, say in a situation where successive interviews are underway, we have several m-procedures for performing the calculations. Answers to the questions would normally be designated by some such scheme as Y for yes, N for no, and U for uncertain. In order for the m-procedure to work, these answers must be represented by numbers indicating the appropriate columns in matrices A and B. Thus, in the example under consideration, each Y must be translated into a 1, each N into a 2, and each U into a 3. The task is not particularly difficult, but it is much easier to have MATLAB make the translation as well as do the calculations. The following two-stage approach for solving the problem works well. The first m-procedure oddsdf sets up the frequency information. The next m-procedure odds calculates the odds for a given profile. The advantage of splitting into two m-procedures is that we can set up the data once, then call repeatedly for the calculations for different profiles. As always, it is necessary to have the data in an appropriate form. The following is an example in which the data are entered in terms of actual frequencies of response. % file oddsf4.m % Frequency data for classification A = [42 22 5; 34 27 8; 15 45 9; 19 44 6; 22 43 4; 41 13 15; 9 52 8; 40 26 3; 48 12 9; 20 37 12]; B = [20 31 5; 16 37 3; 33 19 4; 31 18 7; 23 28 5; 14 37 5; 31 17 8; 13 38 5; 27 24 5; 35 16 5]; disp('Call for oddsdf') Example $8$ Classification using frequency data oddsf4 % Call for data in file oddsf4.m Call for oddsdf % Prompt built into data file oddsdf % Call for m-procedure oddsdf Enter matrix A of frequencies for calibration group 1 A Enter matrix B of frequencies for calibration group 2 B Number of questions = 10 Answers per question = 3 Enter code for answers and call for procedure "odds" y = 1; % Use of lower case for easier writing n = 2; u = 3; odds % Call for calculating procedure Enter profile matrix E [y n y n y u n u y u] % First profile Odds favoring Group 1: 5.845 Classify in Group 1 odds % Second call for calculating procedure Enter profile matrix E [n n u n y y u n n y] % Second profile Odds favoring Group 1: 0.2383 Classify in Group 2 odds % Third call for calculating procedure Enter profile matrix E [y y n y u u n n y y] % Third profile Odds favoring Group 1: 5.05 Classify in Group 1 The principal feature of the m-procedure odds is the scheme for selecting the numbers from the $A$ and $B$ matrices. If $E$ = [$yynyuunnyy$], then the coding translates this into the actual numerical matrix [1 1 2 1 3 3 2 2 1 1] used internally. Then $A(:, E)$ is a matrix with columns corresponding to elements of $E$.
Thus e = A(:,E)
e =
    42    42    22    42     5     5    22    22    42    42
    34    34    27    34     8     8    27    27    34    34
    15    15    45    15     9     9    45    45    15    15
    19    19    44    19     6     6    44    44    19    19
    22    22    43    22     4     4    43    43    22    22
    41    41    13    41    15    15    13    13    41    41
     9     9    52     9     8     8    52    52     9     9
    40    40    26    40     3     3    26    26    40    40
    48    48    12    48     9     9    12    12    48    48
    20    20    37    20    12    12    37    37    20    20
The $i$th entry in the $i$th column is the count corresponding to the answer to the $i$th question. For example, the answer to the third question is N (no), and the corresponding count is the third entry in the N (second) column of $A$. The element on the diagonal in the third column of $A(:, E)$ is the third element in that column, and hence the desired third entry of the N column. By picking out the elements on the diagonal by the command diag(A(:,E)), we have the desired set of counts corresponding to the profile. The same is true for diag(B(:,E)). Sometimes the data are given in terms of conditional probabilities and probabilities. A slight modification of the procedure handles this case. For purposes of comparison, we convert the problem above to this form by converting the counts in matrices $A$ and $B$ to conditional probabilities. We do this by dividing by the total count in each group (69 and 56 in this case). Also, $P(G_1) = 69/125 = 0.552$ and $P(G_2) = 56/125 = 0.448$. Table 5.6.
     GROUP 1 $P(G_1) = 69/125$          GROUP 2 $P(G_2) = 56/125$
Q    Yes     No      Unc.               Yes     No      Unc.
1    0.6087  0.3188  0.0725             0.3571  0.5536  0.0893
2    0.4928  0.3913  0.1159             0.2857  0.6607  0.0536
3    0.2174  0.6522  0.1304             0.5893  0.3393  0.0714
4    0.2754  0.6376  0.0870             0.5536  0.3214  0.1250
5    0.3188  0.6232  0.0580             0.4107  0.5000  0.0893
6    0.5942  0.1884  0.2174             0.2500  0.6607  0.0893
7    0.1304  0.7536  0.1160             0.5536  0.3036  0.1428
8    0.5797  0.3768  0.0435             0.2321  0.6786  0.0893
9    0.6957  0.1739  0.1304             0.4821  0.4286  0.0893
10   0.2899  0.5362  0.1739             0.6250  0.2857  0.0893
These data are in an m-file oddsp4.m. The modified setup m-procedure oddsdp uses the conditional probabilities, then calls for the m-procedure odds. Example $9$ Calculation using conditional probability data oddsp4 % Call for converted data (probabilities) oddsdp % Setup m-procedure for probabilities Enter conditional probabilities for Group 1 A Enter conditional probabilities for Group 2 B Probability p1 individual is from Group 1 0.552 Number of questions = 10 Answers per question = 3 Enter code for answers and call for procedure "odds" y = 1; n = 2; u = 3; odds Enter profile matrix E [y n y n y u n u y u] Odds favoring Group 1: 5.845 Classify in Group 1 The slight discrepancy in the odds favoring Group 1 (5.8454 compared with 5.8452) can be attributed to rounding of the conditional probabilities to four places. The presentation above rounds the results to 5.845 in each case, so the discrepancy is not apparent. This is quite acceptable, since the discrepancy has no effect on the results.
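The same posterior odds can be computed directly, without the m-procedures, using the diagonal-selection scheme just described. The following lines are a sketch added for illustration (the data matrices repeat those of oddsf4.m; the variable names are not from the text):
A = [42 22 5; 34 27 8; 15 45 9; 19 44 6; 22 43 4;
     41 13 15; 9 52 8; 40 26 3; 48 12 9; 20 37 12];
B = [20 31 5; 16 37 3; 33 19 4; 31 18 7; 23 28 5;
     14 37 5; 31 17 8; 13 38 5; 27 24 5; 35 16 5];
E = [1 2 1 2 1 3 2 3 1 3];            % numeric coding of the profile y n y n y u n u y u
a = diag(A(:,E));                     % group 1 counts matching the profile
b = diag(B(:,E));                     % group 2 counts matching the profile
R = prod(a/69)/prod(b/56)*(69/56)     % posterior odds, approximately 5.845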
Exercise $1$ Suppose $\{A, B\}$ ci $|C$ and $\{A, B\}$ ci $|C^c$, $P(C) = 0.7$, and $P(A|C) = 0.4$, $P(B|C) = 0.6$, $P(A|C^c) = 0.3$, $P(B|C^c) = 0.2$ Show whether or not the pair $\{A, B\}$ is independent. Answer $P(A) = P(A|C) P(C) + P(A|C^c)P(C^c)$, $P(B) = P(B|C)P(C) + P(B|C^c) P(C^c)$, and $P(AB) = P(A|C) P(B|C) P(C) + P(A|C^c) P(B|C^c) P(C^c)$ PA = 0.4*0.7 + 0.3*0.3 PA = 0.3700 PB = 0.6*0.7 + 0.2*0.3 PB = 0.4800 PA*PB ans = 0.1776 PAB = 0.4*0.6*0.7 + 0.3*0.2*0.3 PAB = 0.1860 % PAB not equal PA*PB; not independent Exercise $2$ Suppose $\{A_1, A_2, A_3\}$ ci $|C$ and ci $|C^c$, with $P(C) = 0.4$, and $P(A_i|C) = 0.90, 0.85, 0.80$ $P(A_i|C^c) = 0.20, 0.15, 0.20$ for $i = 1, 2, 3$, respectively Determine the posterior odds $P(C|A_1A_2^cA_3)/P(C^c|A_1A_2^cA_3)$. Answer $\dfrac{P(C|A_1A_2^cA_3)}{P(C^c|A_1A_2^cA_3)} = \dfrac{P(C)}{P(C^c)} \cdot \dfrac{P(A_1|C) P(A_2^c|C) P(A_3|C)}{P(A_1|C^c) P(A_2^c|C^c) P(A_3|C^c)}$ $=\dfrac{0.4}{0.6} \cdot \dfrac{0.9 \cdot 0.15 \cdot 0.80}{0.20 \cdot 0.85 \cdot 0.20} = \dfrac{108}{51} = 2.12$ Exercise $3$ Five world class sprinters are entered in a 200 meter dash. Each has a good chance to break the current track record. There is a thirty percent chance a late cold front will move in, bringing conditions that adversely affect the runners. Otherwise, conditions are expected to be favorable for an outstanding race. Their respective probabilities of breaking the record are: • Good weather (no front): 0.75, 0.80, 0.65, 0.70, 0.85 • Poor weather (front in): 0.60, 0.65, 0.50, 0.55, 0.70 The performances are (conditionally) independent, given good weather, and also, given poor weather. What is the probability that three or more will break the track record? Hint. If $B_3$ is the event of three or more, $P(B_3) = P(B_3|W) P(W) + P(B_3|W^c) P(W^c)$. Answer PW = 0.01*[75 80 65 70 85]; PWc = 0.01*[60 65 50 55 70]; P = ckn(PW,3)*0.7 + ckn(PWc,3)*0.3 P = 0.8353 Exercise $4$ A device has five sensors connected to an alarm system. The alarm is given if three or more of the sensors trigger a switch. If a dangerous condition is present, each of the switches has high (but not unit) probability of activating; if the dangerous condition does not exist, each of the switches has low (but not zero) probability of activating (falsely). Suppose $D =$ the event of the dangerous condition and $A =$ the event the alarm is activated. Proper operation consists of $AD \bigvee A^cD^c$. Suppose $E_i =$ the event the $i$th unit is activated. Since the switches operate independently, we suppose $\{E_1, E_2, E_3, E_4, E_5\}$ ci $|D$ and ci $|D^c$ Assume the conditional probabilities of the $E_i$, given $D$, are 0.91, 0.93, 0.96, 0.87, 0.97, and given $D^c$, are 0.03, 0.02, 0.07, 0.04, 0.01, respectively. If $P(D) = 0.02$, what is the probability the alarm system acts properly? Suggestion. Use the conditional independence and the procedure ckn. Answer P1 = 0.01*[91 93 96 87 97]; P2 = 0.01*[3 2 7 4 1]; P = ckn(P1,3)*0.02 + (1 - ckn(P2,3))*0.98 P = 0.9997 Exercise $5$ Seven students plan to complete a term paper over the Thanksgiving recess. They work independently; however, the likelihood of completion depends upon the weather. If the weather is very pleasant, they are more likely to engage in outdoor activities and put off work on the paper. Let $E_i$ be the event the $i$th student completes his or her paper, $A_k$ be the event that $k$ or more complete during the recess, and W be the event the weather is highly conducive to outdoor activity.
It is reasonable to suppose $\{E_i: 1 \le i \le 7\}$ ci $|W$ and ci $|W^c$. Suppose $P(E_i|W) = 0.4, 0.5, 0.3, 0.7, 0.5, 0.6, 0.2$ $P(E_i|W^c) = 0.7, 0.8, 0.5, 0.9, 0.7, 0.8, 0.5$ respectively, and $P(W) = 0.8$. Determine the probability $P(A_4)$ that four or more complete their papers and $P(A_5)$ that five or more finish. Answer PW = 0.1*[4 5 3 7 5 6 2]; PWc = 0.1*[7 8 5 9 7 8 5]; PA4 = ckn(PW,4)*0.8 + ckn(PWc,4)*0.2 PA4 = 0.4993 PA5 = ckn(PW,5)*0.8 + ckn(PWc,5)*0.2 PA5 = 0.2482 Exercise $6$ A manufacturer claims to have improved the reliability of his product. Formerly, the product had probability 0.65 of operating 1000 hours without failure. The manufacturer claims this probability is now 0.80. A sample of size 20 is tested. Determine the odds favoring the new probability for various numbers of surviving units under the assumption the prior odds are 1 to 1. How many survivors would be required to make the claim creditable? Answer Let $E_1$ be the event the probability is 0.80 and $E_2$ be the event the probability is 0.65. Assume $P(E_1)/P(E_2) = 1$. $\dfrac{P(E_1 |S_n = k)}{P(E_2|S_n = k)} = \dfrac{P(E_1)}{P(E_2)} \cdot \dfrac{P(S_n = k| E_1)}{P(S_n = k|E_2)}$ k = 1:20; odds = ibinom(20,0.80,k)./ibinom(20,0.65,k); disp([k;odds]') - - - - - - - - - - - - 13.0000 0.2958 14.0000 0.6372 15.0000 1.3723 % Need at least 15 or 16 successes 16.0000 2.9558 17.0000 6.3663 18.0000 13.7121 19.0000 29.5337 20.0000 63.6111 Exercise $7$ A real estate agent in a neighborhood heavily populated by affluent professional persons is working with a customer. The agent is trying to assess the likelihood the customer will actually buy. His experience indicates the following: if H is the event the customer buys, S is the event the customer is a professional with good income, and E is the event the customer drives a prestigious car, then $P(S) = 0.7$ $P(S|H) = 0.90$ $P(S|H^c) = 0.2$ $P(E|S) = 0.95$ $P(E|S^c) = 0.25$ Since buying a house and owning a prestigious car are not related for a given owner, it seems reasonable to suppose $P(E|HS) = P(E|H^cS)$ and $P(E|HS^c) = P(E|H^cS^c)$. The customer drives a Cadillac. What are the odds he will buy a house? Answer Assumptions amount to $\{H, E\}$ ci $|S$ and ci $|S^c$. $\dfrac{P(H|S)}{P(H^c|S)} = \dfrac{P(H) P(S|H)}{P(H^c) P(S|H^c)}$ $P(S) = P(H) P(S|H) + [1 - P(H)] P(S|H^c)$ which implies $P(H) = \dfrac{P(S) - P(S|H^c)}{P(S|H) - P(S|H^c)} = 5/7$ so that $\dfrac{P(H|S)}{P(H^c|S)} = \dfrac{5}{2} \cdot \dfrac{0.9}{0.2} = \dfrac{45}{4}$ Exercise $8$ In deciding whether or not to drill an oil well in a certain location, a company undertakes a geophysical survey. On the basis of past experience, the decision makers feel the odds are about four to one favoring success. Various other probabilities can be assigned on the basis of past experience. Let • $H$ be the event that a well would be successful • $S$ be the event the geological conditions are favorable • $E$ be the event the results of the geophysical survey are positive The initial, or prior, odds are $P(H)/P(H^c) = 4$. Previous experience indicates $P(S|H) = 0.9$ $P(S|H^c) = 0.20$ $P(E|S) = 0.95$ $P(E|S^c) = 0.10$ Make reasonable assumptions based on the fact that the result of the geophysical survey depends upon the geological formations and not on the presence or absence of oil. The result of the survey is favorable. Determine the posterior odds $P(H|E)/P(H^c|E)$.
Answer $\dfrac{P(H|E)}{P(H^c|E)} = \dfrac{P(H)}{P(H^c)} \cdot \dfrac{P(S|H) P(E|S) + P(S^c|H) P(E|S^c)}{P(S|H^c) P(E|S) + P(S^c|H^c) P(E|S^c)}$ $= 4 \cdot \dfrac{0.90 \cdot 0.95 + 0.10 \cdot 0.10}{0.20 \cdot 0.95 + 0.80 \cdot 0.10} = 12.8148$ Exercise $9$ A software firm is planning to deliver a custom package. Past experience indicates the odds are at least four to one that it will pass customer acceptance tests. As a check, the program is subjected to two different benchmark runs. Both are successful. Given the following data, what are the odds favoring successful operation in practice? Let • $H$ be the event the performance is satisfactory • $S$ be the event the system satisfies customer acceptance tests • $E_1$ be the event the first benchmark tests are satisfactory. • $E_2$ be the event the second benchmark test is ok. Under the usual conditions, we may assume $\{H, E_1, E_2\}$ ci $|S$ and ci $|S^c$. Reliability data show $P(H|S) = 0.95$, $P(H|S^c) = 0.45$ $P(E_1|S) = 0.90$ $P(E_1|S^c) = 0.25$ $P(E_2|S) = 0.95$ $P(E_2|S^c) = 0.20$ Determine the posterior odds $P(H|E_1E_2)/P(H^c|E_1E_2)$. Answer $\dfrac{P(H|E_1 E_2)}{P(H^c|E_1E_2)} = \dfrac{P(HE_1E_2S) + P(HE_1E_2S^c)}{P(H^cE_1E_2 S) + P(H^cE_1E_2S^c)}$ $= \dfrac{P(S) P(H|S) P(E_1|S) P(E_2|S) + P(S^c) P(H|S^c) P(E_1|S^c) P(E_2|S^c)}{P(S) P(H^c|S) P(E_1|S) P(E_2|S) + P(S^c) P(H^c|S^c) P(E_1|S^c) P(E_2|S^c)}$ $= \dfrac{0.80 \cdot 0.95 \cdot 0.90 \cdot 0.95 + 0.20 \cdot 0.45 \cdot 0.25 \cdot 0.20}{0.80 \cdot 0.05 \cdot 0.90 \cdot 0.95 + 0.20 \cdot 0.55 \cdot 0.25 \cdot 0.20} = 16.4811$ Exercise $10$ A research group is contemplating purchase of a new software package to perform some specialized calculations. The systems manager decides to do two sets of diagnostic tests for significant bugs that might hamper operation in the intended application. The tests are carried out in an operationally independent manner. The following analysis of the results is made. • $H$ = the event the program is satisfactory for the intended application • $S$ = the event the program is free of significant bugs • $E_1$ = the event the first diagnostic tests are satisfactory • $E_2$ = the event the second diagnostic tests are satisfactory Since the tests are for the presence of bugs, and are operationally independent, it seems reasonable to assume $\{H, E_1, E_2\}$ ci $|S$ and $\{H, E_1, E_2\}$ ci $|S^c$. Because of the reliability of the software company, the manager thinks $P(S) = 0.85$. Also, experience suggests $P(H|S) = 0.95$ $P(E_1|S) = 0.90$ $P(E_2|S) = 0.95$ $P(H|S^c) = 0.30$ $P(E_1|S^c) = 0.20$ $P(E_2|S^c) = 0.25$ Determine the posterior odds favoring $H$ if results of both diagnostic tests are satisfactory. Answer $\dfrac{P(H|E_1E_2)}{P(H^c|E_1 E_2)} = \dfrac{P(HE_1E_2S) + P(HE_1E_2S^c)}{P(H^cE_1E_2S) + P(H^cE_1E_2S^c)}$ $P(HE_1E_2S) = P(S) P(H|S) P(E_1|SH) P(E_2|SHE_1) = P(S) P(H|S) P(E_1|S) P(E_2|S)$ with similar expressions for the other terms. $\dfrac{P(H|E_1E_2)}{P(H^c|E_1E_2)} = \dfrac{0.85 \cdot 0.95 \cdot 0.90 \cdot 0.95 + 0.15 \cdot 0.30 \cdot 0.25 \cdot 0.20}{0.85 \cdot 0.05 \cdot 0.90 \cdot 0.95 + 0.15 \cdot 0.70 \cdot 0.25 \cdot 0.20} = 16.6555$ Exercise $11$ A company is considering a new product now undergoing field testing. Let • $H$ be the event the product is introduced and successful • $S$ be the event the R&D group produces a product with the desired characteristics.
• $E$ be the event the testing program indicates the product is satisfactory The company assumes $P(S) = 0.9$ and the conditional probabilities $P(H|S) = 0.90$ $P(H|S^c) = 0.10$ $P(E|S) = 0.95$ $P(E|S^c) = 0.15$ Since the testing of the merchandise is not affected by market success or failure, it seems reasonable to suppose $\{H, E\}$ ci $|S$ and ci $|S^c$. The field tests are favorable. Determine $P(H|E)/P(H^c|E)$. Answer $\dfrac{P(H|E)}{P(H^c |E)} = \dfrac{P(S) P(H|S) P(E|S) + P(S^c) P(H|S^c) P(E|S^c)}{P(S) P(H^c|S) P(E|S) + P(S^c) P(H^c|S^c) P(E|S^c)}$ $= \dfrac{0.90 \cdot 0.90 \cdot 0.95 + 0.10 \cdot 0.10 \cdot 0.15}{0.90 \cdot 0.10 \cdot 0.95 + 0.10 \cdot 0.90 \cdot 0.15} = 7.7879$ Exercise $12$ Martha is wondering if she will get a five percent annual raise at the end of the fiscal year. She understands this is more likely if the company's net profits increase by ten percent or more. These will be influenced by company sales volume. Let • $H$ = the event she will get the raise • $S$ = the event company profits increase by ten percent or more • $E$ = the event sales volume is up by fifteen percent or more Since the prospect of a raise depends upon profits, not directly on sales, she supposes $\{H, E\}$ ci $|S$ and $\{H, E\}$ ci $|S^c$. She thinks the prior odds favoring suitable profit increase is about three to one. Also, it seems reasonable to suppose $P(H|S) = 0.80$ $P(H|S^c) = 0.10$ $P(E|S) = 0.95$ $P(E|S^c) = 0.10$ End of the year records show that sales increased by eighteen percent. What is the probability Martha will get her raise? Answer $\dfrac{P(H|E)}{P(H^c |E)} = \dfrac{P(S) P(H|S) P(E|S) + P(S^c) P(H|S^c) P(E|S^c)}{P(S) P(H^c|S) P(E|S) + P(S^c) P(H^c|S^c) P(E|S^c)}$ $= \dfrac{0.75 \cdot 0.80 \cdot 0.95 + 0.25 \cdot 0.10 \cdot 0.10}{0.75 \cdot 0.20 \cdot 0.95 + 0.25 \cdot 0.90 \cdot 0.10} = 3.4697$ so that $P(H|E) = 3.4697/(1 + 3.4697) = 0.7763$ Exercise $13$ A physician thinks the odds are about 2 to 1 that a patient has a certain disease. He seeks the “independent” advice of three specialists. Let $H$ be the event the disease is present, and $A, B, C$ be the events the respective consultants agree this is the case. The physician decides to go with the majority. Since the advisers act in an operationally independent manner, it seems reasonable to suppose $\{A, B, C\}$ ci $|H$ and ci $|H^c$. Experience indicates $P(A|H) = 0.8$, $P(B|H) = 0.7$, $P(C|H) = 0.75$ $P(A^c|H^c) = 0.85$, $P(B^c|H^c) = 0.8$, $P(C^c|H^c) = 0.7$ What is the probability of the right decision (i.e., he treats the disease if two or more think it is present, and does not if two or more think the disease is not present)? Answer PH = 0.01*[80 70 75]; PHc = 0.01*[85 80 70]; pH = 2/3; P = ckn(PH,2)*pH + ckn(PHc,2)*(1 - pH) P = 0.8577 Exercise $14$ A software company has developed a new computer game designed to appeal to teenagers and young adults. It is felt that there is good probability it will appeal to college students, and that if it appeals to college students it will appeal to a general youth market. To check the likelihood of appeal to college students, it is decided to test first by a sales campaign at Rice and University of Texas, Austin. The following analysis of the situation is made.
• $H$ = the event the sales to the general market will be good • $S$ = the event the game appeals to college students • $E_1$ = the event the sales are good at Rice • $E_2$ = the event the sales are good at UT, Austin Since the tests of the reception are at two separate universities and are operationally independent, it seems reasonable to assume $\{H, E_1, E_2\}$ ci $|S$ and $\{H, E_1, E_2\}$ ci $|S^c$. Because of its previous experience in game sales, the managers think $P(S) = 0.80$. Also, experience suggests $P(H|S) = 0.95$ $P(E_1|S) = 0.90$ $P(E_2|S) = 0.95$ $P(H|S^c) = 0.30$ $P(E_1|S^c) = 0.20$ $P(E_2|S^c) = 0.25$ Determine the posterior odds favoring $H$ if sales results are satisfactory at both schools. Answer $\dfrac{P(H|E_1E_2)}{P(H^c|E_1E_2)} = \dfrac{P(HE_1E_2S) + P(HE_1E_2S^c)}{P(H^cE_1E_2S) + P(H^cE_1E_2S^c)}$ $= \dfrac{P(S) P(H|S) P(E_1|S) P(E_2|S) + P(S^c) P(H|S^c) P(E_1|S^c) P(E_2|S^c)}{P(S) P(H^c|S) P(E_1|S) P(E_2|S) + P(S^c) P(H^c|S^c) P(E_1|S^c) P(E_2|S^c)}$ $= \dfrac{0.80 \cdot 0.95 \cdot 0.90 \cdot 0.95 + 0.20 \cdot 0.30 \cdot 0.20 \cdot 0.25}{0.80 \cdot 0.05 \cdot 0.90 \cdot 0.95 + 0.20 \cdot 0.70 \cdot 0.20 \cdot 0.25} = 15.8447$ Exercise $15$ In a region in the Gulf Coast area, oil deposits are highly likely to be associated with underground salt domes. If $H$ is the event that an oil deposit is present in an area, and $S$ is the event of a salt dome in the area, experience indicates $P(S|H) = 0.9$ and $P(S|H^c) = 0.1$. Company executives believe the odds favoring oil in the area is at least 1 in 10. The company decides to conduct two independent geophysical surveys for the presence of a salt dome. Let $E_1, E_2$ be the events the surveys indicate a salt dome. Because the surveys are tests for the geological structure, not the presence of oil, and the tests are carried out in an operationally independent manner, it seems reasonable to assume $\{H, E_1, E_2\}$ ci $|S$ and ci $|S^c$. Data on the reliability of the surveys yield the following probabilities $P(E_1|S) = 0.95$ $P(E_1|S^c) = 0.05$ $P(E_2|S) = 0.90$ $P(E_2|S^c) = 0.10$ Determine the posterior odds $\dfrac{P(H|E_1E_2)}{P(H^c|E_1E_2)}$. Should the well be drilled? Answer $\dfrac{P(H|E_1E_2)}{P(H^c|E_1E_2)} = \dfrac{P(HE_1E_2S) + P(HE_1E_2S^c)}{P(H^cE_1E_2S) + P(H^cE_1E_2S^c)}$ $P(HE_1E_2S) = P(H) P(S|H) P(E_1|SH) P(E_2|SHE_1) = P(H) P(S|H) P(E_1|S) P(E_2|S)$ with similar expressions for the other terms. $\dfrac{P(H|E_1E_2)}{P(H^c|E_1E_2)} = \dfrac{1}{10} \cdot \dfrac{0.9 \cdot 0.95 \cdot 0.90 + 0.10 \cdot 0.05 \cdot 0.10}{0.1 \cdot 0.95 \cdot 0.90 + 0.90 \cdot 0.05 \cdot 0.10} = 0.8556$ Exercise $16$ A sample of 150 subjects is taken from a population which has two subgroups. The subgroup membership of each subject in the sample is known. Each individual is asked a battery of ten questions designed to be independent, in the sense that the answer to any one is not affected by the answer to any other. The subjects answer independently. Data on the results are summarized in the following table:
     GROUP 1 (84 members)      GROUP 2 (66 members)
Q    Yes   No   Unc            Yes   No   Unc
1     51   26    7              27   34    5
2     42   32   10              19   43    4
3     19   54   11              39   22    5
4     24   53    7              38   19    9
5     27   52    5              28   33    5
6     49   19   16              19   41    6
7     16   59    9              37   21    8
8     47   32    5              19   42    5
9     55   17   12              27   33    6
10    24   53    7              39   21    6
Assume the data represent the general population consisting of these two groups, so that the data may be used to calculate probabilities and conditional probabilities. Several persons are interviewed. The result of each interview is a “profile” of answers to the questions.
The goal is to classify the person in one of the two subgroups. For the following profiles, classify each individual in one of the subgroups 1. y, n, y, n, y, u, n, u, y, u 2. n, n, u, n, y, y, u, n, n, y 3. y, y, n, y, u, u, n, n, y, y Answer % file npr05_16.m % Data for Exercise 5.3.16. A = [51 26 7; 42 32 10; 19 54 11; 24 53 7; 27 52 5; 49 19 16; 16 59 9; 47 32 5; 55 17 12; 24 53 7]; B = [27 34 5; 19 43 4; 39 22 5; 38 19 9; 28 33 5; 19 41 6; 37 21 8; 19 42 5; 27 33 6; 39 21 6]; disp('Call for oddsdf') npr05_16 Call for oddsdf oddsdf Enter matrix A of frequencies for calibration group 1 A Enter matrix B of frequencies for calibration group 2 B Number of questions = 10 Answers per question = 3 Enter code for answers and call for procedure "odds" y = 1; n = 2; u = 3; odds Enter profile matrix E [y n y n y u n u y u] Odds favoring Group 1: 3.743 Classify in Group 1 odds Enter profile matrix E [n n u n y y u n n y] Odds favoring Group 1: 0.2693 Classify in Group 2 odds Enter profile matrix E [y y n y u u n n y y] Odds favoring Group 1: 5.286 Classify in Group 1 Exercise $17$ The data of Exercise 5.3.16., above, are converted to conditional probabilities and probabilities, as follows (probabilities are rounded to two decimal places).
     GROUP 1 $P(G_1) = 0.56$          GROUP 2 $P(G_2) = 0.44$
Q    Yes    No     Unc                Yes    No     Unc
1    0.61   0.31   0.08               0.41   0.51   0.08
2    0.50   0.38   0.12               0.29   0.65   0.06
3    0.23   0.64   0.13               0.59   0.33   0.08
4    0.29   0.63   0.08               0.57   0.29   0.14
5    0.32   0.62   0.06               0.42   0.50   0.08
6    0.58   0.23   0.19               0.29   0.62   0.09
7    0.19   0.70   0.11               0.56   0.32   0.12
8    0.56   0.38   0.06               0.29   0.63   0.08
9    0.65   0.20   0.15               0.41   0.50   0.09
10   0.29   0.63   0.08               0.59   0.32   0.09
For the following profiles classify each individual in one of the subgroups. 1. y, n, y, n, y, u, n, u, y, u 2. n, n, u, n, y, y, u, n, n, y 3. y, y, n, y, u, u, n, n, y, y Answer npr05_17 % file npr05_17.m % Data for Exercise 5.3.17. PG1 = 84/150; PG2 = 66/150; A = [0.61 0.31 0.08 0.50 0.38 0.12 0.23 0.64 0.13 0.29 0.63 0.08 0.32 0.62 0.06 0.58 0.23 0.19 0.19 0.70 0.11 0.56 0.38 0.06 0.65 0.20 0.15 0.29 0.63 0.08]; B = [0.41 0.51 0.08 0.29 0.65 0.06 0.59 0.33 0.08 0.57 0.29 0.14 0.42 0.50 0.08 0.29 0.62 0.09 0.56 0.32 0.12 0.29 0.64 0.08 0.41 0.50 0.09 0.59 0.32 0.09]; disp('Call for oddsdp') Call for oddsdp oddsdp Enter matrix A of conditional probabilities for Group 1 A Enter matrix B of conditional probabilities for Group 2 B Probability p1 an individual is from Group 1 PG1 Number of questions = 10 Answers per question = 3 Enter code for answers and call for procedure "odds" y = 1; n = 2; u = 3; odds Enter profile matrix E [y n y n y u n u y u] Odds favoring Group 1: 3.486 Classify in Group 1 odds Enter profile matrix E [n n u n y y u n n y] Odds favoring Group 1: 0.2603 Classify in Group 2 odds Enter profile matrix E [y y n y u u n n y y] Odds favoring Group 1: 5.162 Classify in Group 1
Probability associates with an event a number which indicates the likelihood of the occurrence of that event on any trial. An event is modeled as the set of those possible outcomes of an experiment which satisfy a property or proposition characterizing the event. Often, each outcome is characterized by a number. The experiment is performed. If the outcome is observed as a physical quantity, the size of that quantity (in prescribed units) is the entity actually observed. In many nonnumerical cases, it is convenient to assign a number to each outcome. For example, in a coin flipping experiment, a “head” may be represented by a 1 and a “tail” by a 0. In a Bernoulli trial, a success may be represented by a 1 and a failure by a 0. In a sequence of trials, we may be interested in the number of successes in a sequence of $n$ component trials. One could assign a distinct number to each card in a deck of playing cards. Observations of the result of selecting a card could be recorded in terms of individual numbers. In each case, the associated number becomes a property of the outcome. Random variables as functions We consider in this chapter real random variables (i.e., real-valued random variables). In the chapter on Random Vectors and Joint Distributions, we extend the notion to vector-valued random quantites. The fundamental idea of a real random variable is the assignment of a real number to each elementary outcome $\omega$ in the basic space $\Omega$. Such an assignment amounts to determining a function $X$, whose domain is $\Omega$ and whose range is a subset of the real line R. Recall that a real-valued function on a domain (say an interval $I$ on the real line) is characterized by the assignment of a real number $y$ to each element $x$ (argument) in the domain. For a real-valued function of a real variable, it is often possible to write a formula or otherwise state a rule describing the assignment of the value to each argument. Except in special cases, we cannot write a formula for a random variable $X$. However, random variables share some important general properties of functions which play an essential role in determining their usefulness. Mappings and inverse mappings There are various ways of characterizing a function. Probably the most useful for our purposes is as a mapping from the domain $\Omega$ to the codomain R. We find the mapping diagram of Figure 1 extremely useful in visualizing the essential patterns. Random variable $X$, as a mapping from basic space $\Omega$ to the real line R, assigns to each element $\omega$ a value $t = X(\omega)$. The object point $\omega$ is mapped, or carried, into the image point $t$. Each $\omega$ is mapped into exactly one $t$, although several $\omega$ may have the same image point. Figure 6.1.1. The basic mapping diagram $t = X(\omega)$. Associated with a function $X$ as a mapping are the inverse mapping $X^{-1}$ and the inverse images it produces. Let $M$ be a set of numbers on the real line. By the inverse image of $M$ under the mapping $X$, we mean the set of all those $\omega \in \Omega$ which are mapped into $M$ by $X$ (see Figure 2). If $X$ does not take a value in $M$, the inverse image is the empty set (impossible event). If $M$ includes the range of $X$, (the set of all possible values of $X$), the inverse image is the entire basic space $\Omega$. Formally we write $X^{-1} (M) = \{\omega: X(\omega) \in M\}$ Now we assume the set $X^{-1} (M)$, a subset of $\Omega$, is an event for each $M$. 
A detailed examination of that assertion is a topic in measure theory. Fortunately, the results of measure theory ensure that we may make the assumption for any $X$ and any subset $M$ of the real line likely to be encountered in practice. The set $X^{-1} (M)$ is the event that $X$ takes a value in $M$. As an event, it may be assigned a probability.
Before considering further examples, we note a general property of inverse images. We state it in terms of a random variable, which maps $\Omega$ to the real line (see Figure 3).
Preservation of set operations
Let $X$ be a mapping from $\Omega$ to the real line R. If $M, M_i, i \in J$ are sets of real numbers, with respective inverse images $E$, $E_i$, then
$X^{-1} (M^c) = E^c$, $X^{-1} (\bigcup_{i \in J} M_i) = \bigcup_{i \in J} E_i$ and $X^{-1} (\bigcap_{i \in J} M_i) = \bigcap_{i \in J} E_i$
Examination of simple graphical examples exhibits the plausibility of these patterns. Formal proofs amount to careful reading of the notation. Central to the structure are the facts that each element $\omega$ is mapped into only one image point $t$ and that the inverse image of $M$ is the set of all those $\omega$ which are mapped into image points in $M$. An easy, but important, consequence of the general patterns is that the inverse images of disjoint $M, N$ are also disjoint. This implies that the inverse of a disjoint union of $M_i$ is a disjoint union of the separate inverse images.
Example $2$ Events determined by a random variable
Consider, again, the random variable $S_n$ which counts the number of successes in a sequence of $n$ Bernoulli trials. Let $n = 10$ and $p = 0.33$. Suppose we want to determine the probability $P(2 < S_{10} \le 8)$. Let $A_k = \{\omega: S_{10} (\omega) = k\}$, which we usually shorten to $A_k = \{S_{10} = k\}$. Now the $A_k$ form a partition, since we cannot have $\omega \in A_k$ and $\omega \in A_j$ for $j \ne k$ (i.e., for any $\omega$, we cannot have two values for $S_n (\omega)$). Now, $\{2 < S_{10} \le 8\} = A_3 \bigvee A_4 \bigvee A_5 \bigvee A_6 \bigvee A_7 \bigvee A_8$ since $S_{10}$ takes on a value greater than 2 but no greater than 8 iff it takes one of the integer values from 3 to 8. By the additivity of probability,
$P(2 < S_{10} \le 8) = \sum_{k = 3}^{8} P(A_k) = \sum_{k = 3}^{8} C(10, k) \cdot 0.33^k \cdot 0.67^{10 - k} \approx 0.6927$
Mass transfer and induced probability distribution
Because of the abstract nature of the basic space and the class of events, we are limited in the kinds of calculations that can be performed meaningfully with the probabilities on the basic space. We represent probability as mass distributed on the basic space and visualize this with the aid of general Venn diagrams and minterm maps. We now think of the mapping from $\Omega$ to R as producing a point-by-point transfer of the probability mass to the real line. This may be done as follows: To any set $M$ on the real line assign probability mass $P_X(M) = P(X^{-1}(M))$
It is apparent that $P_X(M) \ge 0$ and $P_X (R) = P(\Omega) = 1$. And because of the preservation of set operations by the inverse mapping
$P_X(\bigvee_{i = 1}^{\infty} M_i) = P(X^{-1}(\bigvee_{i = 1}^{\infty} M_i)) = P(\bigvee_{i = 1}^{\infty} X^{-1}(M_i)) = \sum_{i = 1}^{\infty} P(X^{-1}(M_i)) = \sum_{i = 1}^{\infty} P_X(M_i)$
This means that $P_X$ has the properties of a probability measure defined on the subsets of the real line. Some results of measure theory show that this probability is defined uniquely on a class of subsets of R that includes any set normally encountered in applications.
We have achieved a point-by-point transfer of the probability apparatus to the real line in such a manner that we can make calculations about the random variable $X$. We call $P_X$ the probability measure induced by $X$. Its importance lies in the fact that $P(X \in M) = P_X(M)$. Thus, to determine the likelihood that random quantity $X$ will take on a value in set $M$, we determine how much induced probability mass is in the set $M$. This transfer produces what is called the probability distribution for $X$. In the chapter "Distribution and Density Functions", we consider useful ways to describe the probability distribution induced by a random variable. We turn first to a special class of random variables.
Simple random variables
We consider, in some detail, random variables which have only a finite set of possible values. These are called simple random variables. Thus the term “simple” is used in a special, technical sense. The importance of simple random variables rests on two facts. For one thing, in practice we can distinguish only a finite set of possible values for any random variable. In addition, any random variable may be approximated as closely as pleased by a simple random variable. When the structure and properties of simple random variables have been examined, we turn to more general cases. Many properties of simple random variables extend to the general case via the approximation procedure.
Representation with the aid of indicator functions
In order to deal with simple random variables clearly and precisely, we must find suitable ways to express them analytically. We do this with the aid of indicator functions. Three basic forms of representation are encountered. These are not mutually exclusive representations.
Standard or canonical form, which displays the possible values and the corresponding events. If $X$ takes on distinct values $\{t_1, t_2, \cdot\cdot\cdot, t_n\}$ with respective probabilities $\{p_1, p_2, \cdot\cdot\cdot, p_n\}$ and if $A_i = \{X = t_i\}$, for $1 \le i \le n$, then $\{A_1, A_2, \cdot \cdot\cdot, A_n\}$ is a partition (i.e., on any trial, exactly one of these events occurs). We call this the partition determined by (or, generated by) $X$. We may write
$X = t_1 I_{A_1} + t_2 I_{A_2} + \cdot\cdot\cdot + t_n I_{A_n} = \sum_{i = 1}^{n} t_i I_{A_i}$
If $X(\omega) = t_i$, then $\omega \in A_i$, so that $I_{A_i} (\omega) = 1$ and all the other indicator functions have value zero. The summation expression thus picks out the correct value $t_i$. This is true for any $t_i$, so the expression represents $X(\omega)$ for all $\omega$. The distinct set of values $\{t_1, t_2, \cdot\cdot\cdot, t_n\}$ and the corresponding probabilities $\{p_1, p_2, \cdot\cdot\cdot, p_n\}$ constitute the distribution for $X$. Probability calculations for $X$ are made in terms of its distribution. One of the advantages of the canonical form is that it displays the range (set of values), and if the probabilities $p_i = P(A_i)$ are known, the distribution is determined. Note that in canonical form, if one of the $t_i$ has value zero, we include that term. For some probability distributions it may be that $P(A_i) = 0$ for one or more of the $t_i$. In that case, we call these values null values, for they can only occur with probability zero, and hence are practically impossible. In the general formulation, we include possible null values, since they do not affect any probability calculations.
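The way the canonical form picks out values can be checked numerically. The following is a minimal sketch; the ten-outcome space and the values used here are made up purely for the illustration and are not from the text. The indicator functions of the partition events are built from the values of $X$, and the weighted sum of indicators reproduces $X$ exactly.
% Illustrative sketch: canonical form X = sum of t_i * I_{A_i} (made-up data)
Xomega = [1 2 2 3 1 3 3 2 1 2];     % X(omega) on a ten-outcome basic space
t = [1 2 3];                        % distinct values t_i
IA = zeros(3, 10);                  % rows will hold the indicator functions I_{A_i}
for i = 1:3
  IA(i,:) = (Xomega == t(i));       % A_i = {omega : X(omega) = t_i}
end
Xcheck = t*IA;                      % the sum t_1 I_{A_1} + t_2 I_{A_2} + t_3 I_{A_3}
disp(isequal(Xcheck, Xomega))       % displays 1: the indicator sum recovers X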
Example $3$ Successes in Bernoulli trials
As the analysis of Bernoulli trials and the binomial distribution shows (see Section 4.8), the canonical form must be
$S_n = \sum_{k = 0}^{n} k I_{A_k}$ with $P(A_k) = C(n, k) p^{k} (1-p)^{n - k}$, $0 \le k \le n$
For many purposes, both theoretical and practical, canonical form is desirable. For one thing, it displays directly the range (i.e., set of values) of the random variable. The distribution consists of the set of values $\{t_k: 1 \le k \le n\}$ paired with the corresponding set of probabilities $\{p_k: 1 \le k \le n\}$, where $p_k = P(A_k) = P(X = t_k)$.
Simple random variable $X$ may be represented by a primitive form
$X = c_1 I_{C_1} + c_2 I_{C_2} + \cdot \cdot \cdot + c_m I_{C_m}$, where $\{C_j: 1 \le j \le m\}$ is a partition
Remarks
• If $\{C_j: 1 \le j \le m\}$ is a disjoint class, but $\bigcup_{j = 1}^{m} C_j \ne \Omega$, we may append the event $C_{m + 1} = [\bigcup_{j = 1}^{m} C_j]^c$ and assign value zero to it.
• We say a primitive form, since the representation is not unique. Any of the $C_i$ may be partitioned, with the same value $c_i$ associated with each subset formed.
• Canonical form is a special primitive form. Canonical form is unique, and in many ways normative.
Example $4$ Simple random variables in primitive form
• A wheel is spun yielding, on an equally likely basis, the integers 1 through 10. Let $C_i$ be the event the wheel stops at $i$, $1 \le i \le 10$. Each $P(C_i) = 0.1$. If the numbers 1, 4, or 7 turn up, the player loses ten dollars; if the numbers 2, 5, or 8 turn up, the player gains nothing; if the numbers 3, 6, or 9 turn up, the player gains ten dollars; if the number 10 turns up, the player loses one dollar. The random variable expressing the results may be expressed in primitive form as
$X = -10 I_{C_1} + 0 I_{C_2} + 10 I_{C_3} - 10 I_{C_4} + 0 I_{C_5} + 10 I_{C_6} - 10 I_{C_7} + 0 I_{C_8} + 10I_{C_9} - I_{C_{10}}$
• A store has eight items for sale. The prices are $3.50, $5.00, $3.50, $7.50, $5.00, $5.00, $3.50, and $7.50, respectively. A customer comes in. She purchases one of the items with probabilities 0.10, 0.15, 0.15, 0.20, 0.10, 0.05, 0.10, 0.15. The random variable expressing the amount of her purchase may be written
$X = 3.5 I_{C_1} + 5.0 I_{C_2} + 3.5 I_{C_3} + 7.5 I_{C_4} + 5.0 I_{C_5} + 5.0 I_{C_6} + 3.5 I_{C_7} + 7.5 I_{C_8}$
We commonly have $X$ represented in affine form, in which the random variable is represented as an affine combination of indicator functions (i.e., a linear combination of the indicator functions plus a constant, which may be zero).
$X = c_0 + c_1 I_{E_1} + c_2 I_{E_2} + \cdot\cdot \cdot + c_m I_{E_m} = c_0 + \sum_{j = 1}^{m} c_j I_{E_j}$
In this form, the class $\{E_1, E_2, \cdot\cdot\cdot, E_m\}$ is not necessarily mutually exclusive, and the coefficients do not display directly the set of possible values. In fact, the $E_i$ often form an independent class. Remark. Any primitive form is a special affine form in which $c_0 = 0$ and the $E_i$ form a partition.
Example $5$
Consider, again, the random variable $S_n$ which counts the number of successes in a sequence of $n$ Bernoulli trials. If $E_i$ is the event of a success on the $i$th trial, then one natural way to express the count is
$S_n = \sum_{i = 1}^{n} I_{E_i}$, with $P(E_i) = p$, $1 \le i \le n$
This is affine form, with $c_0 = 0$ and $c_i = 1$ for $1 \le i \le n$. In this case, the $E_i$ cannot form a mutually exclusive class, since they form an independent class.
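The affine form for $S_n$ lends itself to direct simulation, since each indicator can be realized by comparing a uniform random number with $p$. The following is an illustrative sketch only; the number of repetitions N is an arbitrary choice, not something prescribed by the text. With $n = 10$ and $p = 0.33$ the relative frequency should be close to the probability $P(2 < S_{10} \le 8) \approx 0.69$ found in Example 2.
% Simulation sketch of S_n = I_{E_1} + ... + I_{E_n} (illustrative only)
n = 10; p = 0.33;                 % parameters of Example 2
N = 100000;                       % number of repetitions (arbitrary choice)
I = rand(N, n) < p;               % row k holds the indicator values for repetition k
Sn = sum(I, 2);                   % the count S_n on each repetition
relfreq = mean(Sn >= 3 & Sn <= 8) % relative frequency of {2 < S_10 <= 8}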
Events generated by a simple random variable: canonical form
We may characterize the class of all inverse images formed by a simple random variable $X$ in terms of the partition it determines. Consider any set $M$ of real numbers. If $t_i$ in the range of $X$ is in $M$, then every point $\omega \in A_i$ maps into $t_i$, hence into $M$. If the set $J$ is the set of indices $i$ such that $t_i \in M$, then only those points $\omega$ in $A_M = \bigvee_{i \in J} A_i$ map into $M$. Hence, the class of events (i.e., inverse images) determined by $X$ consists of the impossible event $\emptyset$, the sure event $\Omega$, and the union of any subclass of the $A_i$ in the partition determined by $X$.
Example $6$ Events determined by a simple random variable
Suppose simple random variable $X$ is represented in canonical form by $X = -2I_A - I_B + 0 I_C + 3I_D$
Then the class $\{A, B, C, D\}$ is the partition determined by $X$ and the range of $X$ is $\{-2, -1, 0, 3\}$.
1. If $M$ is the interval [-2, 1], then the values -2, -1, and 0 are in $M$ and $X^{-1}(M) = A \bigvee B \bigvee C$.
2. If $M$ is the set (-2, -1] $\cup$ [1, 5], then the values -1, 3 are in $M$ and $X^{-1}(M) = B \bigvee D$.
3. The event $\{X \le 1\} = \{X \in (-\infty, 1]\} = X^{-1} (M)$, where $M = (- \infty, 1]$. Since values -2, -1, 0 are in $M$, the event $\{X \le 1\} = A \bigvee B \bigvee C$.
Determination of the distribution
Determining the partition generated by a simple random variable amounts to determining the canonical form. The distribution is then completed by determining the probabilities of each event $A_k = \{X = t_k\}$.
From a primitive form
Before writing down the general pattern, we consider an illustrative example.
Example $7$ The distribution from a primitive form
Suppose one item is selected at random from a group of ten items. The values (in dollars) and respective probabilities are
$c_j$       2.00   1.50   2.00   2.50   1.50   1.50   1.00   2.50   2.00   1.50
$P(C_j)$    0.08   0.11   0.07   0.15   0.10   0.09   0.14   0.08   0.08   0.10
By inspection, we find four distinct values: $t_1 = 1.00$, $t_2 = 1.50$, $t_3 = 2.00$, and $t_4 = 2.50$. The value 1.00 is taken on for $\omega \in C_7$, so that $A_1 = C_7$ and $P(A_1) = P(C_7) = 0.14$. Value 1.50 is taken on for $\omega \in C_2, C_5, C_6, C_{10}$ so that
$A_2 = C_2 \bigvee C_5 \bigvee C_6 \bigvee C_{10}$ and $P(A_2) = P(C_2) + P(C_5) + P(C_6) + P(C_{10}) = 0.40$
Similarly $P(A_3) = P(C_1) + P(C_3) + P(C_9) = 0.23$ and $P(A_4) = P(C_4) + P(C_8) = 0.23$
The distribution for X is thus
$t_k$          1.00   1.50   2.00   2.50
$P(X = t_k)$   0.14   0.40   0.23   0.23
The general procedure may be formulated as follows:
If $X = \sum_{j = 1}^{m} c_j I_{C_j}$, we identify the set of distinct values in the set $\{c_j: 1 \le j \le m\}$. Suppose these are $t_1 < t_2 < \cdot\cdot\cdot < t_n$. For any possible value $t_i$ in the range, identify the index set $J_i$ of those $j$ such that $c_j = t_i$. Then the terms
$\sum_{J_i} c_j I_{C_j} = t_i \sum_{J_i} I_{C_j} = t_i I_{A_i}$, where $A_i = \bigvee_{j \in J_i} C_j$, and $P(A_i) = P(X = t_i) = \sum_{j \in J_i} P(C_j)$
Examination of this procedure shows that there are two phases:
• Select and sort the distinct values $t_1, t_2, \cdot\cdot\cdot, t_n$
• Add all probabilities associated with each value $t_i$ to determine $P(X = t_i)$
We use the m-function csort which performs these two operations (see Example 4 from "Minterms and MATLAB Calculations").
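For readers who do not have the textbook m-files at hand, the two phases can be sketched with the base MATLAB functions unique and accumarray. This is only an illustration of the sort-and-consolidate idea, not the csort implementation itself; it uses the data of Example 7 and reproduces the distribution just found.
% Sketch of a csort-like sort-and-consolidate step using base MATLAB only
C  = [2.00 1.50 2.00 2.50 1.50 1.50 1.00 2.50 2.00 1.50];  % values c_j
pc = [0.08 0.11 0.07 0.15 0.10 0.09 0.14 0.08 0.08 0.10];  % probabilities P(C_j)
[X, ~, ic] = unique(C);            % phase 1: distinct values, in increasing order
PX = accumarray(ic, pc(:))';       % phase 2: add the probabilities sharing a value
disp([X; PX]')                     % 1.00 0.14;  1.50 0.40;  2.00 0.23;  2.50 0.23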
Example $8$ Use of csort on Example 6.1.7
>> C = [2.00 1.50 2.00 2.50 1.50 1.50 1.00 2.50 2.00 1.50];  % Matrix of c_j
>> pc = [0.08 0.11 0.07 0.15 0.10 0.09 0.14 0.08 0.08 0.10]; % Matrix of P(C_j)
>> [X,PX] = csort(C,pc);   % The sorting and consolidating operation
>> disp([X;PX]')           % Display of results
    1.0000    0.1400
    1.5000    0.4000
    2.0000    0.2300
    2.5000    0.2300
For a problem this small, use of a tool such as csort is not really needed. But in many problems with large sets of data the m-function csort is very useful.
From affine form
Suppose $X$ is in affine form, $X = c_0 + c_1 I_{E_1} + c_2 I_{E_2} + \cdot\cdot\cdot + c_m I_{E_m} = c_0 + \sum_{j = 1}^{m} c_j I_{E_j}$
We determine a particular primitive form by determining the value of $X$ on each minterm generated by the class $\{E_j: 1 \le j \le m\}$. We do this in a systematic way by utilizing minterm vectors and properties of indicator functions. $X$ is constant on each minterm generated by the class $\{E_1, E_2, \cdot\cdot\cdot, E_m\}$ since, as noted in the treatment of the minterm expansion, each indicator function $I_{E_i}$ is constant on each minterm. We determine the value $s_i$ of $X$ on each minterm $M_i$. This describes $X$ in a special primitive form
$X = \sum_{i = 0}^{2^m - 1} s_i I_{M_i}$, with $P(M_i) = p_i$, $0 \le i \le 2^m - 1$
We apply the csort operation to the matrices of values and minterm probabilities to determine the distribution for $X$. We illustrate with a simple example. Extension to the general case should be quite evident. First, we do the problem “by hand” in tabular form. Then we use the m-procedures to carry out the desired operations.
Example $9$ Finding the distribution from affine form
A mail order house is featuring three items (limit one of each kind per customer). Let
• $E_1$ = the event the customer orders item 1, at a price of 10 dollars.
• $E_2$ = the event the customer orders item 2, at a price of 18 dollars.
• $E_3$ = the event the customer orders item 3, at a price of 10 dollars.
There is a mailing charge of 3 dollars per order. We suppose $\{E_1, E_2, E_3\}$ is independent with probabilities 0.6, 0.3, 0.5, respectively. Let $X$ be the amount a customer who orders the special items spends on them plus mailing cost. Then, in affine form,
$X = 10 I_{E_1} + 18 I_{E_2} + 10 I_{E_3} + 3$
We seek first the primitive form, using the minterm probabilities, which may be calculated in this case by using the m-function minprob.
1. To obtain the value of $X$ on each minterm we
• Multiply the minterm vector for each generating event by the coefficient for that event
• Sum the values on each minterm and add the constant
To complete the table, list the corresponding minterm probabilities.
$i$    10 $I_{E_1}$   18 $I_{E_2}$   10 $I_{E_3}$    c    $s_i$   $pm_i$
0           0              0              0          3      3      0.14
1           0              0             10          3     13      0.14
2           0             18              0          3     21      0.06
3           0             18             10          3     31      0.06
4          10              0              0          3     13      0.21
5          10              0             10          3     23      0.21
6          10             18              0          3     31      0.09
7          10             18             10          3     41      0.09
We then sort on the $s_i$, the values on the various $M_i$, to expose more clearly the primitive form for $X$.
“Primitive form” Values
$i$    $s_i$   $pm_i$
0        3     0.14
1       13     0.14
4       13     0.21
2       21     0.06
5       23     0.21
3       31     0.06
6       31     0.09
7       41     0.09
The primitive form of $X$ is thus
$X = 3I_{M_0} + 13I_{M_1} + 13I_{M_4} + 21I_{M_2} + 23I_{M_5} + 31I_{M_3} + 31I_{M_6} + 41I_{M_7}$
We note that the value 13 is taken on on minterms $M_1$ and $M_4$. The probability $X$ has the value 13 is thus $p(1) + p(4)$. Similarly, $X$ has value 31 on minterms $M_3$ and $M_6$.
• To complete the process of determining the distribution, we list the sorted values and consolidate by adding together the probabilities of the minterms on which each value is taken, as follows: $k$ $t_k$ $p_k$ 1 3 0.14 2 13 0.14 + 0.21 = 0.35 3 21 0.06 4 23 0.21 5 31 0.06 + 0.09 = 0.15 6 41 0.09 The results may be put in a matrix $X$ of possible values and a corresponding matrix PX of probabilities that $X$ takes on each of these values. Examination of the table shows that $X =$ [3 13 21 23 31 41] and $PX =$ [0.14 0.35 0.06 0.21 0.15 0.09] Matrices $X$ and PX describe the distribution for $X$. An m-procedure for determining the distribution from affine form We now consider suitable MATLAB steps in determining the distribution from affine form, then incorporate these in the m-procedure canonic for carrying out the transformation. We start with the random variable in affine form, and suppose we have available, or can calculate, the minterm probabilities. The procedure uses mintable to set the basic minterm vector patterns, then uses a matrix of coefficients, including the constant term (set to zero if absent), to obtain the values on each minterm. The minterm probabilities are included in a row matrix. Having obtained the values on each minterm, the procedure performs the desired consolidation by using the m-function csort. Example $10$ Steps in determining the distribution for X in Example 6.1.9 >> c = [10 18 10 3]; % Constant term is listed last >> pm = minprob(0.1*[6 3 5]); >> M = mintable(3) % Minterm vector pattern M = 0 0 0 0 1 1 1 1 0 0 1 1 0 0 1 1 0 1 0 1 0 1 0 1 % - - - - - - - - - - - - - - % An approach mimicking hand'' calculation >> C = colcopy(c(1:3),8) % Coefficients in position C = 10 10 10 10 10 10 10 10 18 18 18 18 18 18 18 18 10 10 10 10 10 10 10 10 >> CM = C.*M % Minterm vector values CM = 0 0 0 0 10 10 10 10 0 0 18 18 0 0 18 18 0 10 0 10 0 10 0 10 >> cM = sum(CM) + c(4) % Values on minterms cM = 3 13 21 31 13 23 31 41 % - - - - - - - - - - - - - % Practical MATLAB procedure >> s = c(1:3)*M + c(4) s = 3 13 21 31 13 23 31 41 >> pm = 0.14 0.14 0.06 0.06 0.21 0.21 0.09 0.09 % Extra zeros deleted >> const = c(4)*ones(1,8);} >> disp([CM;const;s;pm]') % Display of primitive form 0 0 0 3 3 0.14 % MATLAB gives four decimals 0 0 10 3 13 0.14 0 18 0 3 21 0.06 0 18 10 3 31 0.06 10 0 0 3 13 0.21 10 0 10 3 23 0.21 10 18 0 3 31 0.09 10 18 10 3 41 0.09 >> [X,PX] = csort(s,pm); % Sorting on s, consolidation of pm >> disp([X;PX]') % Display of final result 3 0.14 13 0.35 21 0.06 23 0.21 31 0.15 41 0.09 The two basic steps are combined in the m-procedure canonic, which we use to solve the previous problem. Example $11$ Use of canonic on the variables of Example 6.1.10 >> c = [10 18 10 3]; % Note that the constant term 3 must be included last >> pm = minprob([0.6 0.3 0.5]); >> canonic Enter row vector of coefficients c Enter row vector of minterm probabilities pm Use row matrices X and PX for calculations Call for XDBN to view the distribution >> disp(XDBN) 3.0000 0.1400 13.0000 0.3500 21.0000 0.0600 23.0000 0.2100 31.0000 0.1500 41.0000 0.0900 With the distribution available in the matrices $X$ (set of values) and PX (set of probabilities), we may calculate a wide variety of quantities associated with the random variable. We use two key devices: 1. Use relational and logical operations on the matrix of values $X$ to determine a matrix $M$ which has ones for those values which meet a prescribed condition. $P(X \in M)$: PM = M*PX' 2. 
Determine $G = g(X) = [g(X_1) g(X_2) \cdot\cdot\cdot g(X_n)]$ by using array operations on matrix $X$. We have two alternatives:
1. Use the matrix $G$, which has values $g(t_i)$ for each possible value $t_i$ for $X$, or,
2. Apply csort to the pair $(G, PX)$ to get the distribution for $Z = g(X)$. This distribution (in value and probability matrices) may be used in exactly the same manner as that for the original random variable $X$.
Example $12$ Continuation of Example 6.1.11
Suppose for the random variable $X$ in Example 6.1.11 it is desired to determine the probabilities $P(15 \le X \le 35)$, $P(|X - 20| \le 7)$, and $P((X - 10)(X - 25) > 0)$
>> M = (X>=15)&(X<=35);
M = 0 0 1 1 1 0             % Ones for values satisfying 15 <= X <= 35
>> PM = M*PX'               % Picks out and sums those probabilities
PM = 0.4200
>> N = abs(X-20)<=7;
N = 0 1 1 1 0 0             % Ones for values satisfying |X - 20| <= 7
>> PN = N*PX'               % Picks out and sums those probabilities
PN = 0.6200
>> G = (X - 10).*(X - 25)
G = 154 -36 -44 -26 126 496 % Value of g(t_i) for each possible value
>> P1 = (G>0)*PX'           % Total probability for those t_i such that
P1 = 0.3800                 % g(t_i) > 0
>> [Z,PZ] = csort(G,PX)     % Distribution for Z = g(X)
Z = -44 -36 -26 126 154 496
PZ = 0.0600 0.3500 0.2100 0.1500 0.1400 0.0900
>> P2 = (Z>0)*PZ'           % Calculation using distribution for Z
P2 = 0.3800
Example $13$ Alternate formulation of Example 4.3.3 from "Composite Trials"
Ten race cars are involved in time trials to determine pole positions for an upcoming race. To qualify, they must post an average speed of 125 mph or more on a trial run. Let $E_i$ be the event the $i$th car makes qualifying speed. It seems reasonable to suppose the class $\{E_i: 1 \le i \le 10\}$ is independent. If the respective probabilities for success are 0.90, 0.88, 0.93, 0.77, 0.85, 0.96, 0.72, 0.83, 0.91, 0.84, what is the probability that $k$ or more will qualify ($k$ = 6,7,8,9,10)?
Solution
Let $X = \sum_{i = 1}^{10} I_{E_i}$
>> c = [ones(1,10) 0];
>> P = [0.90, 0.88, 0.93, 0.77, 0.85, 0.96, 0.72, 0.83, 0.91, 0.84];
>> canonic
Enter row vector of coefficients  c
Enter row vector of minterm probabilities  minprob(P)
Use row matrices X and PX for calculations
Call for XDBN to view the distribution
>> k = 6:10;
>> for i = 1:length(k)
     Pk(i) = (X>=k(i))*PX';
   end
>> disp(Pk)
0.9938 0.9628 0.8472 0.5756 0.2114
This solution is not as convenient to write out. However, with the distribution for $X$ as defined, a great many other probabilities can be determined. This is particularly the case when it is desired to compare the results of two independent races or “heats.” We consider such problems in the study of Independent Classes of Random Variables.
A function form for canonic
One disadvantage of the procedure canonic is that it always names the output $X$ and PX. While these can easily be renamed, frequently it is desirable to use some other name for the random variable from the start. A function form, which we call canonicf, is useful in this case.
Example $14$ Alternate solution of Example 6.1.13, using canonicf >> c = [10 18 10 3]; >> pm = minprob(0.1*[6 3 5]); >> [Z,PZ] = canonicf(c,pm); >> disp([Z;PZ]') % Numbers as before, but the distribution 3.0000 0.1400 % matrices are now named Z and PZ 13.0000 0.3500 21.0000 0.0600 23.0000 0.2100 31.0000 0.1500 41.0000 0.0900 General random variables The distribution for a simple random variable is easily visualized as point mass concentrations at the various values in the range, and the class of events determined by a simple random variable is described in terms of the partition generated by $X$ (i.e., the class of those events of the form $A_i = [X = t_i]$ for each $t_i$ in the range). The situation is conceptually the same for the general case, but the details are more complicated. If the random variable takes on a continuum of values, then the probability mass distribution may be spread smoothly on the line. Or, the distribution may be a mixture of point mass concentrations and smooth distributions on some intervals. The class of events determined by $X$ is the set of all inverse images $X^{-1} (M)$ for $M$ any member of a general class of subsets of subsets of the real line known in the mathematical literature as the Borel sets. There are technical mathematical reasons for not saying M is any subset, but the class of Borel sets is general enough to include any set likely to be encountered in applications—certainly at the level of this treatment. The Borel sets include any interval and any set that can be formed by complements, countable unions, and countable intersections of Borel sets. This is a type of class known as a sigma algebra of events. Because of the preservation of set operations by the inverse image, the class of events determined by random variable $X$ is also a sigma algebra, and is often designated $\sigma(X)$. There are some technical questions concerning the probability measure $P_X$ induced by $X$, hence the distribution. These also are settled in such a manner that there is no need for concern at this level of analysis. However, some of these questions become important in dealing with random processes and other advanced notions increasingly used in applications. Two facts provide the freedom we need to proceed with little concern for the technical details. $X^{-1} (M)$ is an event for every Borel set $M$ iff for every semi-infinite interval $(-\infty, t]$ on the real line $X^{-1} ((-\infty, t])$ is an event. The induced probability distribution is determined uniquely by its assignment to all intervals of the form $(-\infty, t]$. These facts point to the importance of the distribution function introduced in the next chapter. Another fact, alluded to above and discussed in some detail in the next chapter, is that any general random variable can be approximated as closely as pleased by a simple random variable. We turn in the next chapter to a description of certain commonly encountered probability distributions and ways to describe them analytically.
Exercise $1$
The following simple random variable is in canonical form: $X = -3.75 I_A - 1.13 I_B + 0 I_C + 2.6 I_D$.
Express the events $\{X \in (-4, 2]\}$, $\{X \in (0, 3]\}$, $\{X \in (-\infty, 1]\}$, and $\{X \ge 0\}$ in terms of $A$, $B$, $C$, and $D$.
Answer
• $A \bigvee B \bigvee C$
• $D$
• $A \bigvee B \bigvee C$
• $C$
• $C \bigvee D$
Exercise $2$
Random variable $X$, in canonical form, is given by $X = -2I_{A} - I_B + I_C + 2I_D + 5I_E$.
Express the events $\{X \in [2, 3)\}$, $\{X \le 0\}$, $\{X < 0\}$, $\{|X - 2| \le 3\}$, and $\{X^2 \ge 4\}$, in terms of $A$, $B$, $C$, $D$, and $E$.
Answer
• $D$
• $A \bigvee B$
• $A \bigvee B$
• $B \bigvee C \bigvee D \bigvee E$
• $A \bigvee D \bigvee E$
Exercise $3$
The class $\{C_j: 1 \le j \le 10\}$ is a partition. Random variable $X$ has values {1, 3, 2, 3, 4, 2, 1, 3, 5, 2} on $C_1$ through $C_{10}$, respectively. Express $X$ in canonical form.
Answer
T = [1 3 2 3 4 2 1 3 5 2];
[X,I] = sort(T)
X = 1 1 2 2 2 3 3 3 4 5
I = 1 7 3 6 10 2 4 8 5 9
$X = I_A + 2I_B + 3I_C + 4I_D + 5I_E$
$A = C_1 \bigvee C_7$, $B = C_3 \bigvee C_6 \bigvee C_{10}$, $C = C_2 \bigvee C_4 \bigvee C_8$, $D = C_5$, $E = C_9$
Exercise $4$
The class $\{C_j: 1 \le j \le 10\}$ in Exercise $3$ has respective probabilities 0.08, 0.13, 0.06, 0.09, 0.14, 0.11, 0.12, 0.07, 0.11, 0.09. Determine the distribution for $X$
Answer
T = [1 3 2 3 4 2 1 3 5 2];
pc = 0.01*[8 13 6 9 14 11 12 7 11 9];
[X,PX] = csort(T,pc);
disp([X;PX]')
    1.0000    0.2000
    2.0000    0.2600
    3.0000    0.2900
    4.0000    0.1400
    5.0000    0.1100
Exercise $5$
A wheel is spun yielding on an equally likely basis the integers 1 through 10. Let $C_i$ be the event the wheel stops at $i$, $1 \le i \le 10$. Each $P(C_i) = 0.1$. If the numbers 1, 4, or 7 turn up, the player loses ten dollars; if the numbers 2, 5, or 8 turn up, the player gains nothing; if the numbers 3, 6, or 9 turn up, the player gains ten dollars; if the number 10 turns up, the player loses one dollar. The random variable expressing the results may be expressed in primitive form as
$X = -10I_{C_1} + 0I_{C_2} + 10I_{C_3} - 10I_{C_4} + 0I_{C_5} + 10I_{C_6} - 10I_{C_7} + 0I_{C_8} + 10I_{C_9} - I_{C_{10}}$
• Determine the distribution for $X$, (a) by hand, (b) using MATLAB.
• Determine $P(X < 0)$, $P(X > 0)$.
Answer
p = 0.1*ones(1,10);
c = [-10 0 10 -10 0 10 -10 0 10 -1];
[X,PX] = csort(c,p);
disp([X;PX]')
  -10.0000    0.3000
   -1.0000    0.1000
         0    0.3000
   10.0000    0.3000
Pneg = (X<0)*PX'
Pneg = 0.4000
Ppos = (X>0)*PX'
Ppos = 0.3000
Exercise $6$
A store has eight items for sale. The prices are $3.50, $5.00, $3.50, $7.50, $5.00, $5.00, $3.50, and $7.50, respectively. A customer comes in. She purchases one of the items with probabilities 0.10, 0.15, 0.15, 0.20, 0.10, 0.05, 0.10, 0.15. The random variable expressing the amount of her purchase may be written
$X = 3.5 I_{C_1} + 5.0 I_{C_2} + 3.5 I_{C_3} + 7.5 I_{C_4} + 5.0 I_{C_5} + 5.0I_{C_6} + 3.5 I_{C_7} + 7.5I_{C_8}$
Determine the distribution for $X$ (a) by hand, (b) using MATLAB.
Answer
p = 0.01*[10 15 15 20 10 5 10 15];
c = [3.5 5 3.5 7.5 5 5 3.5 7.5];
[X,PX] = csort(c,p);
disp([X;PX]')
    3.5000    0.3500
    5.0000    0.3000
    7.5000    0.3500
Exercise $7$
Suppose $X$, $Y$ in canonical form are
$X = 2 I_{A_1} + 3 I_{A_2} + 5 I_{A_3}$ $Y = I_{B_1} + 2 I_{B_2} + 3I_{B_3}$
The $P(A_i)$ are 0.3, 0.6, 0.1, respectively, and the $P(B_j)$ are 0.2, 0.6, 0.2. Each pair $\{A_i, B_j\}$ is independent. Consider the random variable $Z = X + Y$. Then $Z = 2 + 1$ on $A_1 B_1$, $Z = 3 + 3$ on $A_2 B_3$, etc.
Determine the value of $Z$ on each $A_i B_j$ and determine the corresponding $P(A_i B_j)$. From this, determine the distribution for $Z$.
Answer
A = [2 3 5];
B = [1 2 3];
a = rowcopy(A,3);
b = colcopy(B,3);
Z = a + b            % Possible values of sum Z = X + Y
Z = 3 4 6
    4 5 7
    5 6 8
PA = [0.3 0.6 0.1];
PB = [0.2 0.6 0.2];
pa = rowcopy(PA,3);
pb = colcopy(PB,3);
P = pa.*pb           % Probabilities for various values
P = 0.0600 0.1200 0.0200
    0.1800 0.3600 0.0600
    0.0600 0.1200 0.0200
[Z,PZ] = csort(Z,P);
disp([Z;PZ]')        % Distribution for Z = X + Y
    3.0000    0.0600
    4.0000    0.3000
    5.0000    0.4200
    6.0000    0.1400
    7.0000    0.0600
    8.0000    0.0200
Exercise $8$
For the random variables in Exercise $7$, let $W = XY$. Determine the value of $W$ on each $A_i B_j$ and determine the distribution of $W$.
Answer
XY = a.*b
XY = 2 3 5           % XY values
     4 6 10
     6 9 15
W  PW                % Distribution for W = XY
    2.0000    0.0600
    3.0000    0.1200
    4.0000    0.1800
    5.0000    0.0200
    6.0000    0.4200
    9.0000    0.1200
   10.0000    0.0600
   15.0000    0.0200
Exercise $9$
A pair of dice is rolled.
1. Let $X$ be the minimum of the two numbers which turn up. Determine the distribution for $X$
2. Let $Y$ be the maximum of the two numbers. Determine the distribution for $Y$.
3. Let $Z$ be the sum of the two numbers. Determine the distribution for $Z$.
4. Let $W$ be the absolute value of the difference. Determine its distribution.
Answer
t = 1:6;
c = ones(6,6);
[x,y] = meshgrid(t,t)
x = 1 2 3 4 5 6      % x-values in each position
    1 2 3 4 5 6
    1 2 3 4 5 6
    1 2 3 4 5 6
    1 2 3 4 5 6
    1 2 3 4 5 6
y = 1 1 1 1 1 1      % y-values in each position
    2 2 2 2 2 2
    3 3 3 3 3 3
    4 4 4 4 4 4
    5 5 5 5 5 5
    6 6 6 6 6 6
m = min(x,y);        % min in each position
M = max(x,y);        % max in each position
s = x + y;           % sum x+y in each position
d = abs(x - y);      % |x - y| in each position
[X,fX] = csort(m,c)  % sorts values and counts occurrences
X = 1 2 3 4 5 6
fX = 11 9 7 5 3 1    % PX = fX/36
[Y,fY] = csort(M,c)
Y = 1 2 3 4 5 6
fY = 1 3 5 7 9 11    % PY = fY/36
[Z,fZ] = csort(s,c)
Z = 2 3 4 5 6 7 8 9 10 11 12
fZ = 1 2 3 4 5 6 5 4 3 2 1   % PZ = fZ/36
[W,fW] = csort(d,c)
W = 0 1 2 3 4 5
fW = 6 10 8 6 4 2    % PW = fW/36
Exercise $10$
Minterm probabilities $p(0)$ through $p(15)$ for the class $\{A, B, C, D\}$ are, in order,
0.072 0.048 0.018 0.012 0.168 0.112 0.042 0.028 0.062 0.048 0.028 0.010 0.170 0.110 0.040 0.032
Determine the distribution for random variable
$X = -5.3I_A - 2.5 I_B + 2.3 I_C + 4.2 I_D - 3.7$
Answer
% file npr06_10.m
% Data for Exercise 6.2.10.
pm = [ 0.072 0.048 0.018 0.012 0.168 0.112 0.042 0.028 ...
       0.062 0.048 0.028 0.010 0.170 0.110 0.040 0.032];
c = [-5.3 -2.5 2.3 4.2 -3.7];
disp('Minterm probabilities are in pm, coefficients in c')

npr06_10
Minterm probabilities are in pm, coefficients in c
canonic
Enter row vector of coefficients  c
Enter row vector of minterm probabilities  pm
Use row matrices X and PX for calculations
Call for XDBN to view the distribution
XDBN
XDBN =
  -11.5000    0.1700
   -9.2000    0.0400
   -9.0000    0.0620
   -7.3000    0.1100
   -6.7000    0.0280
   -6.2000    0.1680
   -5.0000    0.0320
   -4.8000    0.0480
   -3.9000    0.0420
   -3.7000    0.0720
   -2.5000    0.0100
   -2.0000    0.1120
   -1.4000    0.0180
    0.3000    0.0280
    0.5000    0.0480
    2.8000    0.0120
Exercise $11$
On a Tuesday evening, the Houston Rockets, the Orlando Magic, and the Chicago Bulls all have games (but not with one another). Let $A$ be the event the Rockets win, $B$ be the event the Magic win, and $C$ be the event the Bulls win. Suppose the class $\{A, B, C\}$ is independent, with respective probabilities 0.75, 0.70, 0.80. Ellen's boyfriend is a rabid Rockets fan, who does not like the Magic. He wants to bet on the games.
She decides to take him up on his bets as follows:
• $10 to 5 on the Rockets --- i.e., she loses five if the Rockets win and gains ten if they lose
• $10 to 5 against the Magic
• even $5 to 5 on the Bulls.
Ellen's winnings may be expressed as the random variable
$X = -5 I_A + 10 I_{A^c} + 10 I_B - 5 I_{B^c} - 5 I_C + 5I_{C^c} = -15 I_A + 15 I_B - 10 I_C + 10$
Determine the distribution for $X$. What are the probabilities Ellen loses money, breaks even, or comes out ahead?
Answer
P = 0.01*[75 70 80];
c = [-15 15 -10 10];
canonic
Enter row vector of coefficients  c
Enter row vector of minterm probabilities  minprob(P)
Use row matrices X and PX for calculations
Call for XDBN to view the distribution
disp(XDBN)
  -15.0000    0.1800
   -5.0000    0.0450
         0    0.4800
   10.0000    0.1200
   15.0000    0.1400
   25.0000    0.0350
PXneg = (X<0)*PX'
PXneg = 0.2250
PX0 = (X==0)*PX'
PX0 = 0.4800
PXpos = (X>0)*PX'
PXpos = 0.2950
Exercise $12$
The class $\{A, B, C, D\}$ has minterm probabilities
$pm = 0.001 *$ [5 7 6 8 9 14 22 33 21 32 50 75 86 129 201 302]
• Determine whether or not the class is independent.
• The random variable $X = I_A + I_B + I_C + I_D$ counts the number of the events which occur on a trial. Find the distribution for $X$ and determine the probability that two or more occur on a trial. Find the probability that one or three of these occur on a trial.
Answer
npr06_12
Minterm probabilities in pm, coefficients in c
a = imintest(pm)
The class is NOT independent
Minterms for which the product rule fails
a = 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
canonic
Enter row vector of coefficients  c
Enter row vector of minterm probabilities  pm
Use row matrices X and PX for calculations
Call for XDBN to view the distribution
XDBN =
         0    0.0050
    1.0000    0.0430
    2.0000    0.2120
    3.0000    0.4380
    4.0000    0.3020
P2 = (X>=2)*PX'
P2 = 0.9520
P13 = ((X==1)|(X==3))*PX'
P13 = 0.4810
Exercise $13$
James is expecting three checks in the mail, for $20, $26, and $33. Their arrivals are the events $A, B, C$. Assume the class is independent, with respective probabilities 0.90, 0.75, 0.80. Then
$X = 20 I_A + 26 I_B + 33 I_C$
represents the total amount received. Determine the distribution for $X$. What is the probability he receives at least $50? Less than $30?
Answer
c = [20 26 33 0];
P = 0.01*[90 75 80];
canonic
Enter row vector of coefficients  c
Enter row vector of minterm probabilities  minprob(P)
Use row matrices X and PX for calculations
Call for XDBN to view the distribution
disp(XDBN)
         0    0.0050
   20.0000    0.0450
   26.0000    0.0150
   33.0000    0.0200
   46.0000    0.1350
   53.0000    0.1800
   59.0000    0.0600
   79.0000    0.5400
P50 = (X>=50)*PX'
P50 = 0.7800
P30 = (X <30)*PX'
P30 = 0.0650
Exercise $14$
A gambler places three bets. He puts down two dollars for each bet. He picks up three dollars (his original bet plus one dollar) if he wins the first bet, four dollars if he wins the second bet, and six dollars if he wins the third. His net winning can be represented by the random variable
$X = 3I_A + 4I_B + 6I_C - 6$, with $P(A) = 0.5$, $P(B) = 0.4$, $P(C) = 0.3$
Assume the results of the games are independent. Determine the distribution for $X$.
Answer
c = [3 4 6 -6];
P = 0.1*[5 4 3];
canonic
Enter row vector of coefficients  c
Enter row vector of minterm probabilities  minprob(P)
Use row matrices X and PX for calculations
Call for XDBN to view the distribution
disp(XDBN)
   -6.0000    0.2100
   -3.0000    0.2100
   -2.0000    0.1400
         0    0.0900
    1.0000    0.1400
    3.0000    0.0900
    4.0000    0.0600
    7.0000    0.0600
Exercise $15$
Henry goes to a hardware store.
He considers a power drill at $35, a socket wrench set at $56, a set of screwdrivers at $18, a vise at $24, and a hammer at $8. He decides independently on the purchases of the individual items, with respective probabilities 0.5, 0.6, 0.7, 0.4, 0.9. Let $X$ be the amount of his total purchases. Determine the distribution for $X$.
Answer
c = [35 56 18 24 8 0];
P = 0.1*[5 6 7 4 9];
canonic
Enter row vector of coefficients  c
Enter row vector of minterm probabilities  minprob(P)
Use row matrices X and PX for calculations
Call for XDBN to view the distribution
disp(XDBN)
         0    0.0036
    8.0000    0.0324
   18.0000    0.0084
   24.0000    0.0024
   26.0000    0.0756
   32.0000    0.0216
   35.0000    0.0036
   42.0000    0.0056
   43.0000    0.0324
   50.0000    0.0504
   53.0000    0.0084
   56.0000    0.0054
   59.0000    0.0024
   61.0000    0.0756
   64.0000    0.0486
   67.0000    0.0216
   74.0000    0.0126
   77.0000    0.0056
   80.0000    0.0036
   82.0000    0.1134
   85.0000    0.0504
   88.0000    0.0324
   91.0000    0.0054
   98.0000    0.0084
   99.0000    0.0486
  106.0000    0.0756
  109.0000    0.0126
  115.0000    0.0036
  117.0000    0.1134
  123.0000    0.0324
  133.0000    0.0084
  141.0000    0.0756
Exercise $16$
A sequence of trials (not necessarily independent) is performed. Let $E_i$ be the event of success on the $i$th component trial. We associate with each trial a "payoff function" $X_i = aI_{E_i} + b I_{E_i^c}$. Thus, an amount $a$ is earned if there is a success on the trial and an amount $b$ (usually negative) if there is a failure. Let $S_n$ be the number of successes in the $n$ trials and $W$ be the net payoff. Show that $W = (a - b) S_n + bn$.
Answer
$X_i = aI_{E_i} + b(1 - I_{E_i}) = (a - b) I_{E_i} + b$
$W = \sum_{i = 1}^{n} X_i = (a - b) \sum_{i = 1}^{n} I_{E_i} + bn = (a - b) S_n + bn$
Exercise $17$
A marker is placed at a reference position on a line (taken to be the origin); a coin is tossed repeatedly. If a head turns up, the marker is moved one unit to the right; if a tail turns up, the marker is moved one unit to the left.
1. Show that the position at the end of ten tosses is given by the random variable
$X = \sum_{i = 1}^{10} I_{E_i} - \sum_{i = 1}^{10} I_{E_i^c} = 2 \sum_{i = 1}^{10} I_{E_i} - 10 = 2S_{10} - 10$
where $E_i$ is the event of a head on the $i$th toss and $S_{10}$ is the number of heads in ten trials.
2. After ten tosses, what are the possible positions and the probabilities of being in each?
Answer
$X_i = I_{E_i} - I_{E_i^c} = I_{E_i} - (1 - I_{E_i}) = 2I_{E_i} - 1$
$X = \sum_{i = 1}^{10} X_i = 2\sum_{i = 1}^{10} I_{E_i} - 10$
S = 0:10;
PS = ibinom(10,0.5,0:10);
X = 2*S - 10;
disp([X;PS]')
  -10.0000    0.0010
   -8.0000    0.0098
   -6.0000    0.0439
   -4.0000    0.1172
   -2.0000    0.2051
         0    0.2461
    2.0000    0.2051
    4.0000    0.1172
    6.0000    0.0439
    8.0000    0.0098
   10.0000    0.0010
Exercise $18$
Margaret considers five purchases in the amounts 5, 17, 21, 8, 15 dollars with respective probabilities 0.37, 0.22, 0.38, 0.81, 0.63. Anne contemplates six purchases in the amounts 8, 15, 12, 18, 15, 12 dollars, with respective probabilities 0.77, 0.52, 0.23, 0.41, 0.83, 0.58. Assume that all eleven possible purchases form an independent class.
1. Determine the distribution for $X$, the amount purchased by Margaret.
2. Determine the distribution for $Y$, the amount purchased by Anne.
3. Determine the distribution for $Z = X + Y$, the total amount the two purchase.
Suggestion for part (c). Let MATLAB perform the calculations.
Answer [r,s] = ndgrid(X,Y); [t,u] = ndgrid(PX,PY); z = r + s; pz = t.*u; [Z,PZ] = csort(z,pz); % file npr06_18.m cx = [5 17 21 8 15 0]; cy = [8 15 12 18 15 12 0]; pmx = minprob(0.01*[37 22 38 81 63]); pmy = minprob(0.01*[77 52 23 41 83 58]); npr06_18 [X,PX] = canonicf(cx,pmx); [Y,PY] = canonicf(cy,pmy); [r,s] = ndgrid(X,Y); [t,u] = ndgrid(PX,PY); z = r + s; pz = t.*u; [Z,PZ] = csort(z,pz); a = length(Z) a = 125 % 125 different values plot(Z,cumsum(PZ)) % See figure Plotting details omitted
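The ndgrid-and-consolidate pattern used in this answer works with base MATLAB alone if csort is replaced by unique and accumarray. The fragment below is only an illustrative sketch of that substitution, applied to the small distributions of Exercise 7 rather than to this exercise.
% Illustrative sketch: distribution of Z = X + Y for independent X, Y (data of Exercise 7)
X = [2 3 5];  PX = [0.3 0.6 0.1];
Y = [1 2 3];  PY = [0.2 0.6 0.2];
[x, y] = ndgrid(X, Y);            % all pairs of values
[px, py] = ndgrid(PX, PY);        % matching pairs of probabilities
z = x + y;  pz = px.*py;          % value and probability on each pair
[Z, ~, iz] = unique(z(:));        % consolidate equal sums
PZ = accumarray(iz, pz(:));
disp([Z PZ])                      % agrees with the distribution found in Exercise 7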
In the unit on Random Variables and Probability we introduce real random variables as mappings from the basic space $\Omega$ to the real line. The mapping induces a transfer of the probability mass on the basic space to subsets of the real line in such a way that the probability that $X$ takes a value in a set $M$ is exactly the mass assigned to that set by the transfer. To perform probability calculations, we need to describe analytically the distribution on the line. For simple random variables this is easy. We have at each possible value of $X$ a point mass equal to the probability $X$ takes that value. For more general cases, we need a more useful description than that provided by the induced probability measure $P_X$.
The Distribution Function
In the theoretical discussion on Random Variables and Probability, we note that the probability distribution induced by a random variable $X$ is determined uniquely by a consistent assignment of mass to semi-infinite intervals of the form $(-\infty, t]$ for each real $t$. This suggests that a natural description is provided by the following.
Definition
The distribution function $F_X$ for random variable $X$ is given by
$F_X(t) = P(X \le t) = P(X \in (-\infty, t])$ $\forall t \in R$
In terms of the mass distribution on the line, this is the probability mass at or to the left of the point $t$. As a consequence, $F_X$ has the following properties:
• (F1) : $F_X$ must be a nondecreasing function, for if $t > s$ there must be at least as much probability mass at or to the left of $t$ as there is for $s$.
• (F2) : $F_X$ is continuous from the right, with a jump in the amount $p_0$ at $t_0$ iff $P(X = t_0) = p_0$. If the point $t$ approaches $t_0$ from the left, the interval does not include the probability mass at $t_0$ until $t$ reaches that value, at which point the amount at or to the left of $t$ increases ("jumps") by amount $p_0$; on the other hand, if $t$ approaches $t_0$ from the right, the interval includes the mass $p_0$ all the way to and including $t_0$, but drops immediately as $t$ moves to the left of $t_0$.
• (F3) : Except in very unusual cases involving random variables which may take “infinite” values, the probability mass included in $(-\infty, t]$ must increase to one as $t$ moves to the right; as $t$ moves to the left, the probability mass included must decrease to zero, so that
$F_X(-\infty) = \lim_{t \to - \infty} F_X(t) = 0$ and $F_X(\infty) = \lim_{t \to \infty} F_X(t) = 1$
A distribution function determines the probability mass in each semi-infinite interval $(-\infty, t]$. According to the discussion referred to above, this determines uniquely the induced distribution.
The distribution function $F_X$ for a simple random variable is easily visualized. The distribution consists of point mass $p_i$ at each point $t_i$ in the range. To the left of the smallest value in the range, $F_X(t) = 0$; as $t$ increases to the smallest value $t_1$, $F_X(t)$ remains constant at zero until it jumps by the amount $p_1$ ... $F_X(t)$ remains constant at $p_1$ until $t$ increases to $t_2$, where it jumps by an amount $p_2$ to the value $p_1 + p_2$. This continues until the value of $F_X(t)$ reaches 1 at the largest value $t_n$. The graph of $F_X$ is thus a step function, continuous from the right, with a jump in the amount $p_i$ at the corresponding point $t_i$ in the range. A similar situation exists for a discrete-valued random variable which may take on an infinity of values (e.g., the geometric distribution or the Poisson distribution considered below).
In this case, there is always some probability at points to the right of any $t_i$, but this must become vanishingly small as $t$ increases, since the total probability mass is one.
The procedure ddbn may be used to plot the distribution function for a simple random variable from a matrix X of values and a corresponding matrix PX of probabilities.
Example $1$: Graph of FX for a simple random variable
>> c = [10 18 10 3];            % Distribution for X in Example 6.5.1
>> pm = minprob(0.1*[6 3 5]);
>> canonic
Enter row vector of coefficients  c
Enter row vector of minterm probabilities  pm
Use row matrices X and PX for calculations
Call for XDBN to view the distribution
>> ddbn                         % Circles show values at jumps
Enter row matrix of VALUES  X
Enter row matrix of PROBABILITIES  PX
% Printing details   See Figure 7.1
Figure 7.1.1. Distribution function for Example 7.1.1
Description of some common discrete distributions
We make repeated use of a number of common distributions which are used in many practical situations. This collection includes several distributions which are studied in the chapter "Random Variables and Probabilities".
Indicator function. $X = I_E$, $P(X = 1) = P(E) = p$, $P(X = 0) = q = 1 - p$. The distribution function has a jump in the amount $q$ at $t = 0$ and an additional jump of $p$ to the value 1 at $t = 1$.
Simple random variable $X = \sum_{i = 1}^{n} t_i I_{A_i}$ (canonical form), $P(X = t_i) = P(A_i) = p_i$. The distribution function is a step function, continuous from the right, with jump of $p_i$ at $t = t_i$ (See Figure 7.1.1 for Example 7.1.1)
Binomial ($n, p$). This random variable appears as the number of successes in a sequence of $n$ Bernoulli trials with probability $p$ of success. In its simplest form
$X = \sum_{i = 1}^{n} I_{E_i}$ with $\{E_i: 1 \le i \le n\}$ independent and $P(E_i) = p$, so that $P(X = k) = C(n, k) p^k q^{n -k}$
As pointed out in the study of Bernoulli sequences in the unit on Composite Trials, two m-functions ibinom and cbinom are available for computing the individual and cumulative binomial probabilities.
Geometric ($p$) There are two related distributions, both arising in the study of continuing Bernoulli sequences. The first counts the number of failures before the first success. This is sometimes called the “waiting time.” The event {$X = k$} consists of a sequence of $k$ failures, then a success. Thus $P(X = k) = q^k p$, $0 \le k$
The second designates the component trial on which the first success occurs. The event {$Y = k$} consists of $k - 1$ failures, then a success on the $k$th component trial. We have $P(Y = k) = q^{k - 1} p$, $1 \le k$
We say $X$ has the geometric distribution with parameter ($p$), which we often designate by $X \sim$ geometric ($p$). Now $Y = X + 1$ or $Y - 1 = X$. For this reason, it is customary to refer to the distribution for the number of the trial for the first success by saying $Y - 1 \sim$ geometric ($p$). The probability of $k$ or more failures before the first success is $P(X \ge k) = q^k$. Also
$P(X \ge n + k| X \ge n) = \dfrac{P(X \ge n + k)}{P(X \ge n)} = q^{n + k}/q^{n} = q^k = P(X \ge k)$
This suggests that a Bernoulli sequence essentially "starts over" on each trial. If it has failed $n$ times, the probability of failing an additional $k$ or more times before the next success is the same as the initial probability of failing $k$ or more times before the first success.
Example $2$: The geometric distribution
A statistician is taking a random sample from a population in which two percent of the members own a BMW automobile.
She takes a sample of size 100. What is the probability of finding no BMW owners in the sample?
Solution
The sampling process may be viewed as a sequence of Bernoulli trials with probability $p = 0.02$ of success. The probability of 100 or more failures before the first success is $0.98^{100} = 0.1326$ or about 1/7.5.
Negative binomial ($m, p$). $X$ is the number of failures before the $m$th success. It is generally more convenient to work with $Y = X + m$, the number of the trial on which the $m$th success occurs. An examination of the possible patterns and elementary combinatorics show that
$P(Y = k) = C(k - 1, m - 1) p^m q^{k - m}$, $m \le k$
There are $m - 1$ successes in the first $k - 1$ trials, then a success. Each combination has probability $p^m q^{k - m}$. We have an m-function nbinom to calculate these probabilities.
Example $3$: A game of chance
A player throws a single six-sided die repeatedly. He scores if he throws a 1 or a 6. What is the probability he scores five times in ten or fewer throws?
>> p = sum(nbinom(5,1/3,5:10))
p = 0.2131
An alternate solution is possible with the use of the binomial distribution. The $m$th success comes not later than the $k$th trial iff the number of successes in $k$ trials is greater than or equal to $m$.
>> P = cbinom(10,1/3,5)
P = 0.2131
Poisson ($\mu$). This distribution is assumed in a wide variety of applications. It appears as a counting variable for items arriving with exponential interarrival times (see the relationship to the gamma distribution below). For large $n$ and small $p$ (which may not be a value found in a table), the binomial distribution is approximately Poisson ($np$). Use of the generating function (see Transform Methods) shows the sum of independent Poisson random variables is Poisson. The Poisson distribution is integer valued, with
$P(X = k) = e^{-\mu} \dfrac{\mu^k}{k!}$, $0 \le k$
Although Poisson probabilities are usually easier to calculate with scientific calculators than binomial probabilities, the use of tables is often quite helpful. As in the case of the binomial distribution, we have two m-functions for calculating Poisson probabilities. These have advantages of speed and parameter range similar to those for ibinom and cbinom.
$P(X = k)$ is calculated by P = ipoisson(mu,k), where $k$ is a row or column vector of integers and the result $P$ is a row matrix of the probabilities.
$P(X \ge k)$ is calculated by P = cpoisson(mu,k), where $k$ is a row or column vector of integers and the result $P$ is a row matrix of the probabilities.
Example $4$: Poisson counting random variable
The number of messages arriving in a one minute period at a communications network junction is a random variable $N \sim$ Poisson (130). What is the probability the number of arrivals is greater than or equal to 110, 120, 130, 140, 150, 160?
>> p = cpoisson(130,110:10:160)
p = 0.9666 0.8209 0.5117 0.2011 0.0461 0.0060
The descriptions of these distributions, along with a number of other facts, are summarized in the table DATA ON SOME COMMON DISTRIBUTIONS in Appendix C.
The Density Function
If the probability mass in the induced distribution is spread smoothly along the real line, with no point mass concentrations, there is a probability density function $f_X$ which satisfies
$P(X \in M) = P_X(M) = \int_M f_X(t)\ dt$ (area under the graph of $f_X$ over $M$)
At each $t$, $f_X(t)$ is the mass per unit length in the probability distribution.
The density function has three characteristic properties:
(f1) $f_X \ge 0$ (f2) $\int_R f_X = 1$ (f3) $F_X (t) = \int_{-\infty}^{t} f_X$
A random variable (or distribution) which has a density is called absolutely continuous. This term comes from measure theory. We often simply abbreviate as continuous distribution.
Remarks
1. There is a technical mathematical description of the condition “spread smoothly with no point mass concentrations.” And strictly speaking the integrals are Lebesgue integrals rather than the ordinary Riemann kind. But for practical cases, the two agree, so that we are free to use ordinary integration techniques.
2. By the fundamental theorem of calculus $f_X(t) = F_X^{'} (t)$ at every point of continuity of $f_X$
3. Any integrable, nonnegative function $f$ with $\int f = 1$ determines a distribution function $F$, which in turn determines a probability distribution. If $\int f \ne 1$, multiplication by the appropriate positive constant gives a suitable $f$. An argument based on the Quantile Function shows the existence of a random variable with that distribution.
4. In the literature on probability, it is customary to omit the indication of the region of integration when integrating over the whole line. Thus
$\int g(t) f_X (t) dt = \int_R g(t) f_X(t) dt$
The first expression is not an indefinite integral. In many situations, $f_X$ will be zero outside an interval. Thus, the integrand effectively determines the region of integration.
Figure 7.1.2. The Weibull density for $\alpha = 2$, $\lambda = 0.25, 1, 4$.
Figure 7.1.3. The Weibull density for $\alpha = 10$, $\lambda = 0.001, 1, 1000$.
Some common absolutely continuous distributions
Uniform $(a, b)$. Mass is spread uniformly on the interval $[a, b]$. It is immaterial whether or not the end points are included, since probability associated with each individual point is zero. The probability of any subinterval is proportional to the length of the subinterval. The probability of being in any two subintervals of the same length is the same. This distribution is used to model situations in which it is known that $X$ takes on values in $[a, b]$ but is equally likely to be in any subinterval of a given length. The density must be constant over the interval (zero outside), and the distribution function increases linearly with $t$ in the interval. Thus,
$f_X (t) = \dfrac{1}{b - a}$ ($a < t < b$) (zero outside the interval)
The graph of $F_X$ rises linearly, with slope 1/($b - a$) from zero at $t = a$ to one at $t = b$.
Symmetric triangular $(-a, a)$. $f_X(t) = \begin{cases} (a + t)/a^2 & -a \le t < 0 \\ (a - t)/a^2 & 0 \le t \le a \end{cases}$
This distribution is used frequently in instructional numerical examples because probabilities can be obtained geometrically. It can be shifted, with a shift of the graph, to different sets of values. It appears naturally (in shifted form) as the distribution for the sum or difference of two independent random variables uniformly distributed on intervals of the same length. This fact is established with the use of the moment generating function (see Transform Methods). More generally, the density may have a triangular graph which is not symmetric.
Example $5$: Use of a triangular distribution
Suppose $X \sim$ symmetric triangular (100, 300). Determine $P(120 < X \le 250)$.
Remark. Note that in the continuous case, it is immaterial whether the end points of the intervals are included or not.
Solution To get the area under the triangle between 120 and 250, we take one minus the area of the right triangles between 100 and 120 and between 250 and 300. Using the fact that areas of similar triangles are proportional to the square of any side, we have $P = 1 - \dfrac{1}{2} ((20/100)^2 + (50/100)^2) = 0.855$ Exponential ($\lambda$) $f_X(t) = \lambda e^{-\lambda t}$ $t \ge 0$ (zero elsewhere). Integration shows $F_X(t) = 1 - e^{-\lambda t}$ $t \ge 0$ (zero elsewhere). We note that $P(X > t) = 1 - F_X(t) = e^{-\lambda t}$ $t \ge 0$. This leads to an extremely important property of the exponential distribution. Since $X > t + h$, $h > 0$ implies $X > t$, we have $P(X > t + h|X > t) = P(X > t + h)/P(X > t) = e^{-\lambda (t+ h)}/e^{-\lambda t} = e^{-\lambda h} = P(X > h)$ Because of this property, the exponential distribution is often used in reliability problems. Suppose $X$ represents the time to failure (i.e., the life duration) of a device put into service at $t = 0$. If the distribution is exponential, this property says that if the device survives to time $t$ (i.e., $X > t$) then the (conditional) probability it will survive $h$ more units of time is the same as the original probability of surviving for $h$ units of time. Many devices have the property that they do not wear out. Failure is due to some stress of external origin. Many solid state electronic devices behave essentially in this way, once initial “burn in” tests have removed defective units. Use of Cauchy's equation (Appendix B) shows that the exponential distribution is the only continuous distribution with this property. Gamma distribution $(\alpha, \lambda)$ $f_X(t) = \dfrac{\lambda^{\alpha} t^{\alpha - 1} e^{-\lambda t}}{\Gamma (\alpha)}$ $t \ge 0$ (zero elsewhere) We have an m-function gammadbn to determine values of the distribution function for $X \sim$ gamma $(\alpha, \lambda)$. Use of moment generating functions shows that for $\alpha = n$, a random variable $X \sim$ gamma $(n, \lambda)$ has the same distribution as the sum of $n$ independent random variables, each exponential ($\lambda$). A relation to the Poisson distribution is described in the next section, on distribution approximations. Example $6$: An arrival problem On a Saturday night, the times (in hours) between arrivals in a hospital emergency unit may be represented by a random quantity which is exponential ($\lambda = 3$). As we show in the chapter Mathematical Expectation, this means that the average interarrival time is 1/3 hour or 20 minutes. What is the probability of ten or more arrivals in four hours? In six hours? Solution The time for ten arrivals is the sum of ten interarrival times. If we suppose these are independent, as is usually the case, then the time for ten arrivals is gamma (10, 3). >> p = gammadbn(10,3,[4 6]) p = 0.7576 0.9846 Normal, or Gaussian $(\mu, \sigma^2)$ $f_X (t) = \dfrac{1}{\sigma \sqrt{2 \pi}}$ exp $(-\dfrac{1}{2} (\dfrac{t - \mu}{\sigma})^2)$ $\forall t$ We generally indicate that a random variable $X$ has the normal or gaussian distribution by writing $X \sim N(\mu, \sigma^2)$, putting in the actual values for the parameters. The gaussian distribution plays a central role in many aspects of applied probability theory, particularly in the area of statistics. Much of its importance comes from the central limit theorem (CLT), which is a term applied to a number of theorems in analysis. Essentially, the CLT shows that the distribution for the sum of a sufficiently large number of independent random variables has approximately the gaussian distribution.
Thus, the gaussian distribution appears naturally in such topics as theory of errors or theory of noise, where the quantity observed is an additive combination of a large number of essentially independent quantities. Examination of the expression shows that the graph for $f_X(t)$ is symmetric about its maximum at $t = \mu$. The greater the parameter $\sigma^2$, the smaller the maximum value and the more slowly the curve decreases with distance from $\mu$. Thus parameter $\mu$ locates the center of the mass distribution and $\sigma^2$ is a measure of the spread of mass about $\mu$. The parameter $\mu$ is called the mean value and $\sigma^2$ is the variance. The parameter $\sigma$, the positive square root of the variance, is called the standard deviation. While we have an explicit formula for the density function, it is known that the distribution function, as the integral of the density function, cannot be expressed in terms of elementary functions. The usual procedure is to use tables obtained by numerical integration. Since there are two parameters, this raises the question whether a separate table is needed for each pair of parameters. It is a remarkable fact that this is not the case. We need only have a table of the distribution function for $X \sim N(0,1)$. This is referred to as the standardized normal distribution. We use $\varphi$ and $\phi$ for the standardized normal density and distribution functions, respectively. Standardized normal $\varphi(t) = \dfrac{1}{\sqrt{2 \pi}} e^{-t^2/2}$ so that the distribution function is $\phi (t) = \int_{-\infty}^{t} \varphi (u) du$. The graph of the density function is the well known bell shaped curve, symmetrical about the origin (see Figure 7.1.4). The symmetry about the origin contributes to its usefulness. $P(X \le t) = \phi (t)$ = area under the curve to the left of $t$ Note that the area to the left of $t = -1.5$ is the same as the area to the right of $t = 1.5$, so that $\phi (-1.5) = 1 - \phi(1.5)$. The same is true for any $t$, so that we have $\phi (-t) = 1 - \phi(t)$ $\forall t$ This indicates that we need only a table of values of $\phi(t)$ for $t > 0$ to be able to determine $\phi (t)$ for any $t$. We may use the symmetry for any case. Note that $\phi(0) = 1/2$. Figure 7.1.4. The standardized normal distribution. Example $7$: Standardized normal calculations Suppose $X \sim N(0, 1)$. Determine $P(-1 \le X \le 2)$ and $P(|X| > 1)$ Solution 1. $P(-1 \le X \le 2) = \phi (2) - \phi (-1) = \phi (2) - [1 - \phi(1)] = \phi (2) + \phi (1) - 1$ 2. $P(|X| > 1) = P(X > 1) + P(X < -1) = 1 - \phi(1) + \phi (-1) = 2[1 -\phi(1)]$ From a table of standardized normal distribution function (see Appendix D), we find $\phi(2) = 0.9772$ and $\phi(1) = 0.8413$ which gives $P(-1 \le X \le 2) = 0.8185$ and $P(|X| > 1) = 0.3174$ General gaussian distribution For $X \sim N(\mu, \sigma^2)$, the density maintains the bell shape, but is shifted with different spread and height. Figure 7.1.5 shows the distribution function and density function for $X \sim N(2, 0.1)$. The density is centered about $t = 2$. It has height 1.2616 as compared with 0.3989 for the standardized normal density. Inspection shows that the graph is narrower than that for the standardized normal. The distribution function reaches 0.5 at the mean value 2. Figure 7.1.5. The normal density and distribution functions for $X \sim N(2, 0.1)$. A change of variables in the integral shows that the table for standardized normal distribution function can be used for any case.
$F_X (t) = \dfrac{1}{\sigma \sqrt{2\pi}}\int_{-\infty}^{t} \text{exp}(-\dfrac{1}{2} (\dfrac{x - \mu}{\sigma})^2) dx = \int_{-\infty}^{t} \varphi (\dfrac{x - \mu}{\sigma}) \dfrac{1}{\sigma} dx$ Make the change of variable and corresponding formal changes $u = \dfrac{x - \mu}{\sigma}$ $du = \dfrac{1}{\sigma} dx$ so that $x = t$ corresponds to $u = \dfrac{t - \mu}{\sigma}$ to get $F_X(t) = \int_{-\infty}^{(t-\mu)/\sigma} \varphi (u) du = \phi (\dfrac{t - \mu}{\sigma})$ Example $8$: General gaussian calculation Suppose $X \sim N(3, 16)$ (i.e., $\mu = 3$ and $\sigma^2 = 16$). Determine $P(-1 \le X \le 11)$ and $P(|X - 3| > 4)$. Solution 1. $F_X(11) - F_X(-1) = \phi(\dfrac{11 - 3}{4}) - \phi(\dfrac{-1 - 3}{4}) = \phi(2) - \phi(-1) = 0.8185$ 2. $P(X - 3 < -4) + P(X - 3 >4) = F_X(-1) + [1 - F_X(7)] = \phi(-1) + 1 - \phi(1) = 0.3174$ In each case the problem reduces to that in Example 7. We have m-functions gaussian and gaussdensity to calculate values of the distribution and density function for any reasonable value of the parameters. The following are solutions of example 7.1.7 and example 7.1.8, using the m-function gaussian. Example $9$: Example 7.1.7 and Example 7.1.8 (continued) >> P1 = gaussian(0,1,2) - gaussian(0,1,-1) P1 = 0.8186 >> P2 = 2*(1 - gaussian(0,1,1)) P2 = 0.3173 >> P1 = gaussian(3,16,11) - gaussian(3,16,-1) P1 = 0.8186 >> P2 = gaussian(3,16,-1) + 1 - gaussian(3,16,7) P2 = 0.3173 The differences in these results and those above (which used tables) are due to the roundoff to four places in the tables. Beta $(r, s)$, $r > 0$, $s > 0$. $f_X(t) = \dfrac{\Gamma(r + s)}{\Gamma(r) \Gamma(s)} t^{r - 1} (1 - t)^{s - 1}$ $0 < t < 1$ Analysis is based on the integrals $\int_{0}^{1} u^{r - 1} (1 - u)^{s - 1} du = \dfrac{\Gamma (r) \Gamma (s)}{\Gamma (r + s)}$ with $\Gamma(t + 1) = t \Gamma (t)$ Figure 7.6 and Figure 7.7 show graphs of the densities for various values of $r, s$. The usefulness comes in approximating densities on the unit interval. By using scaling and shifting, these can be extended to other intervals. The special case $r = s = 1$ gives the uniform distribution on the unit interval. The Beta distribution is quite useful in developing the Bayesian statistics for the problem of sampling to determine a population proportion. If $r, s$ are integers, the density function is a polynomial. For the general case we have two m-functions, beta and betadbn, to perform the calculations. Figure 7.6. The Beta $(r, s)$ density for $r = 2, s = 1, 2, 10$. Figure 7.7. The Beta $(r, s)$ density for $r = 5, s = 2, 5, 10$. Weibull $(\alpha, \lambda, v)$ $F_X (t) = 1 - e^{-\lambda (t - v)^{\alpha}}$ $\alpha > 0$, $\lambda > 0$, $v \ge 0$, $t \ge v$ The parameter $v$ is a shift parameter. Usually we assume $v = 0$. Examination shows that for $\alpha = 1$ the distribution is exponential ($\lambda$). The parameter $\alpha$ provides a distortion of the time scale for the exponential distribution. Figure 7.1.2 and Figure 7.1.3 show graphs of the Weibull density for some representative values of $\alpha$ and $\lambda$ ($v = 0$). The distribution is used in reliability theory. We do not make much use of it. However, we have m-functions weibull (density) and weibulld (distribution function) for shift parameter $v = 0$ only. The shift can be obtained by subtracting a constant from the $t$ values.
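The reduction to the exponential case for $\alpha = 1$ is easy to check directly from the formula for $F_X$ (a minimal sketch using ordinary MATLAB operations rather than the m-functions; the parameter values are chosen arbitrarily):
alpha = 1; lambda = 0.25; v = 0;        % assumed parameter values for the check
t = 0:0.5:10;
FW = 1 - exp(-lambda*(t - v).^alpha);   % Weibull distribution function
FE = 1 - exp(-lambda*t);                % exponential(lambda) distribution function
max(abs(FW - FE))                       % essentially zero: for alpha = 1 the two agree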
Binomial, Poisson, gamma, and Gaussian distributions The Poisson approximation to the binomial distribution The following approximation is a classical one. We wish to show that for small $p$ and sufficiently large $n$ $P(X = k) = C(n, k)p^k (1 - p)^{n - k} \approx e^{-np} \dfrac{(np)^k}{k!}$ Suppose $p = \mu/n$ with $n$ large and $\mu/n < 1$. Then, $P(X = k) = C(n, k) (\mu/n)^k (1 - \mu/n)^{n-k} = \dfrac{n(n - 1) \cdots (n - k + 1)}{n^k} (1 - \dfrac{\mu}{n})^{-k} (1 - \dfrac{\mu}{n})^n \dfrac{\mu^k}{k!}$ The first factor in the last expression is the ratio of polynomials in $n$ of the same degree $k$, which must approach one as $n$ becomes large. The second factor approaches one as $n$ becomes large. According to a well known property of the exponential $(1 - \dfrac{\mu}{n})^n \to e^{-\mu}$ as $n \to \infty$. The result is that for large $n$, $P(X = k) \approx e^{-\mu} \dfrac{\mu^k}{k!}$, where $\mu = np$. The Poisson and Gamma Distributions Suppose $Y \sim$ Poisson ($\lambda t$). Now $X \sim$ gamma ($\alpha, \lambda$) iff $P(X \le t) = \dfrac{\lambda^{\alpha}}{\Gamma (\alpha)} \int_{0}^{t} x^{\alpha - 1} e^{-\lambda x}\ dx = \dfrac{1}{\Gamma (\alpha)} \int_{0}^{t} (\lambda x)^{\alpha - 1} e^{-\lambda x} d(\lambda x) = \dfrac{1}{\Gamma (\alpha)} \int_{0}^{\lambda t} u^{\alpha - 1} e^{-u}\ du$ A well known definite integral, obtained by integration by parts, is $\int_{a}^{\infty} t^{n -1} e^{-t}dt = \Gamma (n) e^{-a} \sum_{k = 0}^{n - 1} \dfrac{a^k}{k!}$ with $\Gamma (n) = (n - 1)!$. Noting that $1 = e^{-a}e^{a} = e^{-a} \sum_{k = 0}^{\infty} \dfrac{a^k}{k!}$ we find after some simple algebra that $\dfrac{1}{\Gamma(n)} \int_{0}^{a} t^{n -1} e^{-t}\ dt = e^{-a} \sum_{k = n}^{\infty} \dfrac{a^k}{k!}$ For $a = \lambda t$ and $\alpha = n$, we have the following equality iff $X \sim$ gamma ($\alpha, \lambda$) $P(X \le t) = \dfrac{1}{\Gamma(n)} \int_{0}^{\lambda t} u^{n -1} e^{-u}\ du = e^{-\lambda t} \sum_{k = n}^{\infty} \dfrac{(\lambda t)^k}{k!}$ Now $P(Y \ge n) = e^{-\lambda t} \sum_{k = n}^{\infty} \dfrac{(\lambda t)^k}{k!}$ for $Y \sim$ Poisson ($\lambda t$), so that $P(X \le t) = P(Y \ge n)$. The Gaussian (normal) approximation The central limit theorem, referred to in the discussion of the Gaussian or normal distribution above, suggests that the binomial and Poisson distributions should be approximated by the Gaussian. The number of successes in $n$ trials has the binomial $(n, p)$ distribution. This random variable may be expressed $X = \sum_{i = 1}^{n} I_{E_i}$ Since the mean value of $X$ is $np$ and the variance is $npq$, the distribution should be approximately $N(np, npq)$. Figure 7.2.8. Gaussian approximation to the binomial. Use of the generating function shows that the sum of independent Poisson random variables is Poisson. Now if $X \sim$ Poisson ($\mu$), then $X$ may be considered the sum of $n$ independent random variables, each Poisson ($\mu/n$). Since the mean value and the variance are both $\mu$, it is reasonable to suppose that $X$ is approximately $N(\mu, \mu)$. It is generally best to compare distribution functions. Since the binomial and Poisson distributions are integer-valued, it turns out that the best Gaussian approximation is obtained by making a “continuity correction.” To get an approximation to a density for an integer-valued random variable, the probability at $t = k$ is represented by a rectangle of height $p_k$ and unit width, with $k$ as the midpoint. Figure 7.2.8 shows a plot of the “density” and the corresponding Gaussian density for $n = 300$, $p = 0.1$.
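For these parameters the quality of the approximation may also be examined numerically with the m-functions cbinom and gaussian (a brief sketch; the comparison point $k = 35$ is chosen arbitrarily, and the half-unit shift in the last line is the continuity correction discussed in the next paragraph):
n = 300; p = 0.1; k = 35;                 % parameters of the plot above; assumed comparison point
Pb  = 1 - cbinom(n,p,k+1)                 % exact binomial value of P(X <= k)
Pg  = gaussian(n*p,n*p*(1-p),k)           % Gaussian approximation, no correction
Pgc = gaussian(n*p,n*p*(1-p),k + 0.5)     % Gaussian approximation with the half-unit shift
The value computed with the half-unit shift should lie noticeably closer to the exact binomial value.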
From the figure it is apparent that the Gaussian density is offset by approximately 1/2. To approximate the probability $X \le k$, take the area under the curve up to $k + 1/2$; this is called the continuity correction. Use of m-procedures to compare We have two m-procedures to make the comparisons. First, we consider approximation of the Poisson ($\mu$) distribution. Figure 7.2.9. Gaussian approximation to the Poisson distribution function $\mu$ = 10. Figure 7.2.10. Gaussian approximation to the Poisson distribution function $\mu$ = 100. The m-procedure poissapp calls for a value of $\mu$, selects a suitable range about $k = \mu$ and plots the distribution function for the Poisson distribution (stairs) and the normal (Gaussian) distribution (dash dot) for $N(\mu, \mu)$. In addition, the continuity correction is applied to the gaussian distribution at integer values (circles). Figure 7.2.9 shows plots for $\mu$ = 10. It is clear that the continuity correction provides a much better approximation. The plots in Figure 7.2.10 are for $\mu$ = 100. Here the continuity correction provides the better approximation, but not by as much as for the smaller $\mu$. Figure 7.2.11. Poisson and Gaussian approximation to the binomial: $n$ = 1000, $p$ = 0.03. Figure 7.2.12. Poisson and Gaussian approximation to the binomial: $n$ = 50, $p$ = 0.6. The m-procedure bincomp compares the binomial, gaussian, and Poisson distributions. It calls for values of $n$ and $p$, selects suitable $k$ values, and plots the distribution function for the binomial, a continuous approximation to the distribution function for the Poisson, and continuity adjusted values of the gaussian distribution function at the integer values. Figure 7.2.11 shows plots for $n = 1000$, $p = 0.03$. The good agreement of all three distribution functions is evident. Figure 7.2.12 shows plots for $n = 50, p = 0.6$. There is still good agreement of the binomial and adjusted gaussian. However, the Poisson distribution does not track very well. The difficulty, as we see in the unit Variance, is the difference in variances--$npq$ for the binomial as compared with $np$ for the Poisson. Approximation of a real random variable by simple random variables Simple random variables play a significant role, both in theory and applications. In the unit Random Variables, we show how a simple random variable is determined by the set of points on the real line representing the possible values and the corresponding set of probabilities that each of these values is taken on. This describes the distribution of the random variable and makes possible calculations of event probabilities and parameters for the distribution. A continuous random variable is characterized by a set of possible values spread continuously over an interval or collection of intervals. In this case, the probability is also spread smoothly. The distribution is described by a probability density function, whose value at any point indicates "the probability per unit length" near the point. A simple approximation is obtained by subdividing an interval which includes the range (the set of possible values) into small enough subintervals that the density is approximately constant over each subinterval. A point in each subinterval is selected and is assigned the probability mass in its subinterval. The combination of the selected points and the corresponding probabilities describes the distribution of an approximating simple random variable.
Calculations based on this distribution approximate corresponding calculations on the continuous distribution. Before examining a general approximation procedure which has significant consequences for later treatments, we consider some illustrative examples. Example $10$: Simple approximation to Poisson A random variable with the Poisson distribution is unbounded. However, for a given parameter value $\mu$, the probability for $k \ge n$, $n$ sufficiently large, is negligible. Experiment indicates $n = \mu + 6\sqrt{\mu}$ (i.e., six standard deviations beyond the mean) is a reasonable value for $5 \le \mu \le 200$. Solution >> mu = [5 10 20 30 40 50 70 100 150 200]; >> K = zeros(1,length(mu)); >> p = zeros(1,length(mu)); >> for i = 1:length(mu) K(i) = floor(mu(i)+ 6*sqrt(mu(i))); p(i) = cpoisson(mu(i),K(i)); end >> disp([mu;K;p*1e6]') 5.0000 18.0000 5.4163 % Residual probabilities are 0.000001 10.0000 28.0000 2.2535 % times the numbers in the last column. 20.0000 46.0000 0.4540 % K is the value of k needed to achieve 30.0000 62.0000 0.2140 % the residual shown. 40.0000 77.0000 0.1354 50.0000 92.0000 0.0668 70.0000 120.0000 0.0359 100.0000 160.0000 0.0205 150.0000 223.0000 0.0159 200.0000 284.0000 0.0133 An m-procedure for discrete approximation If $X$ is bounded, absolutely continuous with density function $f_X$, the m-procedure tappr sets up the distribution for an approximating simple random variable. An interval containing the range of $X$ is divided into a specified number of equal subdivisions. The probability mass for each subinterval is assigned to the midpoint. If $dx$ is the length of the subintervals, then the integral of the density function over the subinterval is approximated by $f_X(t_i)\ dx$, where $t_i$ is the midpoint. In effect, the graph of the density over the subinterval is approximated by a rectangle of width $dx$ and height $f_X(t_i)$. Once the approximating simple distribution is established, calculations are carried out as for simple random variables. Example $11$: a numerical example Suppose $f_X(t) = 3t^2$, $0 \le t \le 1$. Determine $P(0.2 \le X \le 0.9)$. Solution In this case, an analytical solution is easy. $F_X(t) = t^3$ on the interval [0, 1], so $P = 0.9^3 - 0.2^3 = 0.7210$. We use tappr as follows. >> tappr Enter matrix [a b] of x-range endpoints [0 1] Enter number of x approximation points 200 Enter density as a function of t 3*t.^2 Use row matrices X and PX as in the simple case >> M = (X >= 0.2)&(X <= 0.9); >> p = M*PX' p = 0.7210 Because of the regularity of the density and the number of approximation points, the result agrees quite well with the theoretical value. The next example is a more complex one. In particular, the distribution is not bounded. However, it is easy to determine a bound beyond which the probability is negligible. Figure 7.2.13. Distribution function for Example 7.2.12. Example $12$: Radial tire mileage The life (in miles) of a certain brand of radial tires may be represented by a random variable $X$ with density $f_X(t) = \begin{cases} t^2/a^3 & \text{for } 0 \le t < a \\ (b/a) e^{-k(t-a)} & \text{for } a \le t \end{cases}$ where $a = 40,000$, $b = 20/3$, and $k = 1/4000$. Determine $P(X \ge 45,000)$. >> a = 40000; >> b = 20/3; >> k = 1/4000; >> % Test shows cutoff point of 80000 should be satisfactory >> tappr Enter matrix [a b] of x-range endpoints [0 80000] Enter number of x approximation points 80000/20 Enter density as a function of t (t.^2/a^3).*(t < 40000) + ...
(b/a)*exp(k*(a-t)).*(t >= 40000) Use row matrices X and PX as in the simple case >> P = (X >= 45000)*PX' P = 0.1910 % Theoretical value = (2/3)exp(-5/4) = 0.191003 >> cdbn Enter row matrix of VALUES X Enter row matrix of PROBABILITIES PX % See Figure 7.2.13 for plot In this case, we use a rather large number of approximation points. As a consequence, the results are quite accurate. In the single-variable case, designating a large number of approximating points usually causes no computer memory problem. The general approximation procedure We show now that any bounded real random variable may be approximated as closely as desired by a simple random variable (i.e., one having a finite set of possible values). For the unbounded case, the approximation is close except in a portion of the range having arbitrarily small total probability. We limit our discussion to the bounded case, in which the range of $X$ is limited to a bounded interval $I = [a, b]$. Suppose $I$ is partitioned into $n$ subintervals by points $t_i$, $1 \le i \le n - 1$, with $a = t_0$ and $b = t_n$. Let $M_i = [t_{i- 1}, t_i)$ be the $i$th subinterval, $1 \le i \le n - 1$ and $M_n = [t_{n - 1}, t_n]$ (see Figure 7.2.14). Now random variable $X$ may map into any point in the interval, and hence into any point in each subinterval $M_i$. Let $E_i = X^{-1} (M_i)$ be the set of points mapped into $M_i$ by $X$. Then the $E_i$ form a partition of the basic space $\Omega$. For the given subdivision, we form a simple random variable $X_s$ as follows. In each subinterval, pick a point $s_i$, $t_{i - 1} \le s_i \le t_i$. Consider the simple random variable $X_s = \sum_{i = 1}^{n} s_i I_{E_i}$. Figure 7.2.14. Partition of the interval $I$ including the range of $X$ Figure 7.2.15. Refinement of the partition by additional subdivision points. This random variable is in canonical form. If $\omega \in E_i$, then $X(\omega) \in M_i$ and $X_s (\omega) = s_i$. Now the absolute value of the difference satisfies $|X(\omega) - X_s (\omega)| < t_i - t_{i - 1}$, the length of subinterval $M_i$ Since this is true for each $\omega$ and the corresponding subinterval, we have the important fact $|X(\omega) - X_s (\omega)| <$ maximum length of the $M_i$ By making the subintervals small enough by increasing the number of subdivision points, we can make the difference as small as we please. While the choice of the $s_i$ is arbitrary in each $M_i$, the selection of $s_i = t_{i - 1}$ (the left-hand endpoint) leads to the property $X_s(\omega) \le X(\omega)$ $\forall \omega$. In this case, if we add subdivision points to decrease the size of some or all of the $M_i$, the new simple approximation $Y_s$ satisfies $X_s(\omega) \le Y_s(\omega) \le X(\omega)$ $\forall \omega$ To see this, consider $t_i^{*} \in M_i$ (see Figure 7.2.15). $M_i$ is partitioned into $M_i^{'} \bigcup M_i^{''}$ and $E_i$ is partitioned into $E_i^{'} \bigcup E_i^{''}$. $X$ maps $E_i^{'}$ into $M_i^{'}$ and $E_i^{''}$ into $M_i^{''}$. $Y_s$ maps $E_i^{'}$ into $t_{i - 1}$ and maps $E_i^{''}$ into $t_i^{*} > t_{i - 1}$. $X_s$ maps both $E_i^{'}$ and $E_i^{''}$ into $t_{i - 1}$. Thus, the asserted inequality must hold for each $\omega$. By taking a sequence of partitions in which each succeeding partition refines the previous (i.e. adds subdivision points) in such a way that the maximum length of subinterval goes to zero, we may form a nondecreasing sequence of simple random variables $X_n$ which increase to $X$ for each $\omega$. The latter result may be extended to random variables unbounded above.
Simply let the $N$th set of subdivision points extend from $a$ to $N$, making the last subinterval $[N, \infty)$. Subintervals from $a$ to $N$ are made increasingly shorter. The result is a nondecreasing sequence $\{X_N: 1 \le N\}$ of simple random variables, with $X_N(\omega) \to X(\omega)$ as $N \to \infty$, for each $\omega \in \Omega$. For probability calculations, we simply select an interval $I$ large enough that the probability outside $I$ is negligible and use a simple approximation over $I$.
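For computational purposes the idea is simple to carry out directly. The following is a minimal sketch of the approximation steps for the density of Example 11, with an arbitrarily chosen number of subintervals; the m-procedure tappr automates essentially this setup:
a = 0; b = 1; n = 200;                   % interval containing the range; number of subintervals
dx = (b - a)/n;
t = (a + dx/2):dx:(b - dx/2);            % midpoints of the subintervals
PX = 3*t.^2*dx;                          % approximate probability mass assigned to each midpoint
PX = PX/sum(PX);                         % normalize to remove the small discretization error
M = (t >= 0.2)&(t <= 0.9);               % event of interest
p = M*PX'                                % approximates P(0.2 <= X <= 0.9) = 0.7210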
Exercise $1$ (See Exercises 3 and 4 from "Problems on Random Variables and Probabilities"). The class $\{C_j: 1 \le j \le 10\}$ is a partition. Random variable $X$ has values {1, 3, 2, 3, 4, 2, 1, 3, 5, 2} on $C_1$ through $C_{10}$, respectively, with probabilities 0.08, 0.13, 0.06, 0.09, 0.14, 0.11, 0.12, 0.07, 0.11, 0.09. Determine and plot the distribution function $F_X$. Answer T = [1 3 2 3 4 2 1 3 5 2]; pc = 0.01*[8 13 6 9 14 11 12 7 11 9]; [X,PX] = csort(T,pc); ddbn Enter row matrix of VALUES X Enter row matrix of PROBABILITIES PX % See MATLAB plot Exercise $2$ (See Exercise 6 from "Problems on Random Variables and Probabilities"). A store has eight items for sale. The prices are $3.50, $5.00, $3.50, $7.50, $5.00, $5.00, $3.50, and $7.50, respectively. A customer comes in. She purchases one of the items with probabilities 0.10, 0.15, 0.15, 0.20, 0.10, 0.05, 0.10, 0.15. The random variable expressing the amount of her purchase may be written $X = 3.5 I_{C_1} + 5.0 I_{C_2} + 3.5 I_{C_3} + 7.5 I_{C_4} + 5.0 I_{C_5} + 5.0 I_{C_6} + 3.5 I_{C_7} + 7.5 I_{C_8}$ Determine and plot the distribution function for $X$. Answer T = [3.5 5 3.5 7.5 5 5 3.5 7.5]; pc = 0.01*[10 15 15 20 10 5 10 15]; [X,PX] = csort(T,pc); ddbn Enter row matrix of VALUES X Enter row matrix of PROBABILITIES PX % See MATLAB plot Exercise $3$ (See Exercise 12 from "Problems on Random Variables and Probabilities"). The class $\{A, B, C, D\}$ has minterm probabilities $pm = 0.001 *$ [5 7 6 8 9 14 22 33 21 32 50 75 86 129 201 302] Determine and plot the distribution function for the random variable $X = I_A + I_B + I_C + I_D$, which counts the number of the events which occur on a trial. Answer npr06_12 Minterm probabilities in pm, coefficients in c T = sum(mintable(4)); % Alternate solution. See Exercise 6.2.12 from "Problems on Random Variables and Probabilities" [X,PX] = csort(T,pm); ddbn Enter row matrix of VALUES X Enter row matrix of PROBABILITIES PX % See MATLAB plot Exercise $4$ Suppose $a$ is a ten digit number. A wheel turns up the digits 0 through 9 with equal probability on each spin. On ten spins what is the probability of matching, in order, $k$ or more of the ten digits in $a$, $0 \le k \le 10$? Assume the initial digit may be zero. Answer $P =$ cbinom(10, 0.1, 0 : 10). Exercise $5$ In a thunderstorm in a national park there are 127 lightning strikes. Experience shows that the probability of a lightning strike starting a fire is about 0.0083. What is the probability that $k$ fires are started, $k =$ 0,1,2,3? Answer P = ibinom(127,0.0083,0:3) P = 0.3470 0.3688 0.1945 0.0678 Exercise $6$ A manufacturing plant has 350 special lamps on its production lines. On any day, each lamp could fail with probability $p =$ 0.0017. These lamps are critical, and must be replaced as quickly as possible. It takes about one hour to replace a lamp, once it has failed. What is the probability that on any day the loss of production time due to lamp failures is $k$ or fewer hours, $k = 0, 1, 2, 3, 4, 5$? Answer P = 1 - cbinom(350, 0.0017, 1:6) = 0.5513 0.8799 0.9775 0.9968 0.9996 1.0000 Exercise $7$ Two hundred persons buy tickets for a drawing. Each ticket has probability 0.008 of winning. What is the probability of $k$ or fewer winners, $k = 2, 3, 4$? Answer P = 1 - cbinom(200,0.008,3:5) = 0.7838 0.9220 0.9768 Exercise $8$ Two coins are flipped twenty times. What is the probability the results match (both heads or both tails) $k$ times, $0 \le k \le 20$?
Answer P = ibinom(20,1/2,0:20) Exercise $9$ Thirty members of a class each flip a coin ten times. What is the probability that at least five of them get seven or more heads? Answer p = cbinom(10,0.5,7) = 0.1719 P = cbinom(30,p,5) = 0.6052 Exercise $10$ For the system in Exercise 6, call a day in which one or more failures occur among the 350 lamps a “service day.” Since a Bernoulli sequence “starts over” at any time, the sequence of service/nonservice days may be considered a Bernoulli sequence with probability p1, the probability of one or more lamp failures in a day. 1. Beginning on a Monday morning, what is the probability the first service day is the first, second, third, fourth, fifth day of the week? 2. What is the probability of no service days in a seven day week? Answer p1 = 1 - (1 - 0.0017)^350 = 0.4487 % probability a given day is a service day k = 1:5; 1. P = p1*(1 - p1).^(k-1) = 0.4487 0.2474 0.1364 0.0752 0.0414 2. P0 = (1 - p1)^7 = 0.0155 Exercise $11$ For the system in Exercise 6 and Exercise 10 assume the plant works seven days a week. What is the probability the third service day occurs by the end of 10 days? Solve using the negative binomial distribution; repeat using the binomial distribution. Answer p1 = 1 - (1 - 0.0017)^350 = 0.4487 • P = sum(nbinom(3,p1,3:10)) = 0.8990 • Pa = cbinom(10,p1,3) = 0.8990 Exercise $12$ A residential College plans to raise money by selling “chances” on a board. Fifty chances are sold. A player pays $10 to play; he or she wins $30 with probability $p =$ 0.2. The profit to the College is $X = 50 \cdot 10 - 30 N$, where $N$ is the number of winners. Determine the distribution for $X$ and calculate $P(X > 0)$, $P(X \ge 200)$, and $P(X \ge 300)$ Answer N = 0:50; PN = ibinom(50,0.2,0:50); X = 500 - 30*N; Ppos = (X>0)*PN' Ppos = 0.9856 P200 = (X>=200)*PN' P200 = 0.5836 P300 = (X>=300)*PN' P300 = 0.1034 Exercise $13$ A single six-sided die is rolled repeatedly until either a one or a six turns up. What is the probability that the first appearance of either of these numbers is achieved by the fifth trial or sooner? Answer P = 1 - (2/3)^5 = 0.8683 Exercise $14$ Consider a Bernoulli sequence with probability $p =$ 0.53 of success on any component trial. 1. The probability the fourth success will occur no later than the tenth trial is determined by the negative binomial distribution. Use the procedure nbinom to calculate this probability. 2. Calculate this probability using the binomial distribution. Answer 1. P = sum(nbinom(4,0.53,4:10)) = 0.8729 2. Pa = cbinom(10,0.53,4) = 0.8729 Exercise $15$ Fifty percent of the components coming off an assembly line fail to meet specifications for a special job. It is desired to select three units which meet the stringent specifications. Items are selected and tested in succession. Under the usual assumptions for Bernoulli trials, what is the probability the third satisfactory unit will be found on six or fewer trials? Answer P = cbinom(6,0.5,3) = 0.6562 Exercise $16$ The number of cars passing a certain traffic count position in an hour has Poisson (53) distribution. What is the probability the number of cars passing in an hour lies between 45 and 55 (inclusive)? What is the probability of more than 55? Answer P1 = cpoisson(53,45) - cpoisson(53,56) = 0.5224 P2 = cpoisson(53,56) = 0.3581 Exercise $17$ Compare $P(X \le k)$ and $P(Y \le k)$ for $X \sim$ binomial (5000, 0.001) and $Y \sim$ Poisson (5), for $0 \le k \le 10$. Do this directly with ibinom and ipoisson.
Then use the m-procedure bincomp to obtain graphical results (including a comparison with the normal distribution). Answer k = 0:10; Pb = 1 - cbinom(5000,0.001,k+1); Pp = 1 - cpoisson(5,k+1); disp([k;Pb;Pp]') 0 0.0067 0.0067 1.0000 0.0404 0.0404 2.0000 0.1245 0.1247 3.0000 0.2649 0.2650 4.0000 0.4404 0.4405 5.0000 0.6160 0.6160 6.0000 0.7623 0.7622 7.0000 0.8667 0.8666 8.0000 0.9320 0.9319 9.0000 0.9682 0.9682 10.0000 0.9864 0.9863 bincomp Enter the parameter n 5000 Enter the parameter p 0.001 Binomial-- stairs Poisson-- -.-. Adjusted Gaussian-- o o o gtext('Exercise 17') Exercise $18$ Suppose $X \sim$ binomial (12, 0.375), $Y \sim$ Poisson (4.5), and $Z \sim$ exponential (1/4.5). For each random variable, calculate and tabulate the probability of a value at least $k$, for integer values $3 \le k \le 8$. Answer k = 3:8; Px = cbinom(12,0.375,k); Py = cpoisson(4.5,k); Pz = exp(-k/4.5); disp([k;Px;Py;Pz]') 3.0000 0.8865 0.8264 0.5134 4.0000 0.7176 0.6577 0.4111 5.0000 0.4897 0.4679 0.3292 6.0000 0.2709 0.2971 0.2636 7.0000 0.1178 0.1689 0.2111 8.0000 0.0390 0.0866 0.1690 Exercise $19$ The number of noise pulses arriving on a power circuit in an hour is a random quantity having Poisson (7) distribution. What is the probability of having at least 10 pulses in an hour? What is the probability of having at most 15 pulses in an hour? Answer P1 = cpoisson(7,10) = 0.1695 P2 = 1 - cpoisson(7,16) = 0.9976 Exercise $20$ The number of customers arriving in a small specialty store in an hour is a random quantity having Poisson (5) distribution. What is the probability the number arriving in an hour will be between three and seven, inclusive? What is the probability of no more than ten? Answer P1 = cpoisson(5,3) - cpoisson(5,8) = 0.7420 P2 = 1 - cpoisson(5,11) = 0.9863 Exercise $21$ Random variable $X \sim$ binomial (1000, 0.1). 1. Determine $P(X \ge 80)$, $P(X \ge 100)$, $P(X \ge 120)$ 2. Use the appropriate Poisson distribution to approximate these values. Answer k = [80 100 120]; P = cbinom(1000,0.1,k) P = 0.9867 0.5154 0.0220 P1 = cpoisson(100,k) P1 = 0.9825 0.5133 0.0282 Exercise $22$ The time to failure, in hours of operating time, of a television set subject to random voltage surges has the exponential (0.002) distribution. Suppose the unit has operated successfully for 500 hours. What is the (conditional) probability it will operate for another 500 hours? Answer $P(X > 500 + 500|X > 500) = P(X > 500) = e^{-0.002 \cdot 500} = 0.3679$ Exercise $23$ For $X \sim$ exponential ($\lambda$), determine $P(X \ge 1/\lambda)$, $P(X \ge 2/\lambda)$. Answer $P(X > k/\lambda) = e^{-\lambda k/ \lambda} = e^{-k}$ Exercise $24$ Twenty “identical” units are put into operation. They fail independently. The times to failure (in hours) form an iid class, exponential (0.0002). This means the “expected” life is 5000 hours. Determine the probabilities that at least $k$, for $k$ = 5,8,10,12,15, will survive for 5000 hours. Answer p = exp(-0.0002*5000) p = 0.3679 k = [5 8 10 12 15]; P = cbinom(20,p,k) P = 0.9110 0.4655 0.1601 0.0294 0.0006 Exercise $25$ Let $T \sim$ gamma (20, 0.0002) be the total operating time for the units described in Exercise 24. 1. Use the m-function for the gamma distribution to determine $P(T \le 100,000)$. 2. Use the Poisson distribution to determine $P(T \le 100,000)$. Answer P1 = gammadbn(20,0.0002,100000) = 0.5297 P2 = cpoisson(0.0002*100000,20) = 0.5297 Exercise $26$ The sum of the times to failure for five independent units is a random variable $X \sim$ gamma (5, 0.15).
Without using tables or m-programs, determine $P(X \le 25)$. Answer $P(X \le 25) = P(Y \ge 5)$, $Y \sim$ Poisson $(0.15 \cdot 25 = 3.75)$ $P(Y \ge 5) = 1 - P(Y \le 4) = 1 - e^{-3.75} (1 + 3.75 + \dfrac{3.75^2}{2} + \dfrac{3.75^3}{3!} + \dfrac{3.75^4}{24}) = 0.3225$ Exercise $27$ Interarrival times (in minutes) for fax messages on a terminal are independent, exponential ($\lambda =$ 0.1). This means the time $X$ for the arrival of the fourth message is gamma (4, 0.1). Without using tables or m-programs, utilize the relation of the gamma to the Poisson distribution to determine $P(X \le 30)$. Answer $P(X \le 30) = P(Y \ge 4)$, $Y \sim$ Poisson ($0.1 \cdot 30 = 3$) $P(Y \ge 4) = 1 - P(Y \le 3) = 1 - e^{-3} (1 + 3 + \dfrac{3^2}{2} + \dfrac{3^3}{3!}) = 0.3528$ Exercise $28$ Customers arrive at a service center with independent interarrival times in hours, which have exponential (3) distribution. The time $X$ for the third arrival is thus gamma (3, 3). Without using tables or m-programs, determine $P(X \le 2)$. Answer $P(X \le 2) = P(Y \ge 3)$, $Y \sim$ Poisson ($3 \cdot 2 = 6$) $P(Y \ge 3) = 1 - P(Y \le 2) = 1 - e^{-6} (1 + 6 + 36/2) = 0.9380$ Exercise $29$ Five people wait to use a telephone, currently in use by a sixth person. Suppose time for the six calls (in minutes) are iid, exponential (1/3). What is the distribution for the total time $Z$ from the present for the six calls? Use an appropriate Poisson distribution to determine $P(Z \le 20)$. Answer $Z \sim$ gamma (6, 1/3). $P(Z \le 20) = P(Y \ge 6)$, $Y \sim$ Poisson $(1/3 \cdot 20)$ $P(Y \ge 6)$ = cpoisson(20/3, 6) = 0.6547 Exercise $30$ A random number generator produces a sequence of numbers between 0 and 1. Each of these can be considered an observed value of a random variable uniformly distributed on the interval [0, 1]. They assume their values independently. A sequence of 35 numbers is generated. What is the probability 25 or more are less than or equal to 0.71? (Assume continuity. Do not make a discrete adjustment.) Answer p = cbinom(35,0.71,25) = 0.5620 Exercise $31$ Five “identical” electronic devices are installed at one time. The units fail independently, and the time to failure, in days, of each is a random variable exponential (1/30). A maintenance check is made each fifteen days. What is the probability that at least four are still operating at the maintenance check? Answer p = exp(-15/30) = 0.6065 P = cbinom(5,p,4) = 0.3483 Exercise $32$ Suppose $X \sim N(4, 81)$. That is, $X$ has gaussian distribution with mean $\mu$ = 4 and variance $\sigma^2$ = 81. 1. Use a table of standardized normal distribution to determine $P(2 < X < 8)$ and $P(|X - 4| \le 5)$. 2. Calculate the probabilities in part (a) with the m-function gaussian. Answer a. $P(2 < X < 8) = \phi((8 - 4)/9) - \phi ((2 - 4)/9)$ = $\phi (4/9) + \phi (2/9) - 1 = 0.6712 + 0.5875 - 1 = 0.2587$ $P(|X - 4| \le 5) = 2\phi(5/9) - 1 = 1.4212 - 1 = 0.4212$ b. P1 = gaussian(4,81,8) - gaussian(4,81,2) P1 = 0.2596 P2 = gaussian(4,81,9) - gaussian(4,81,-1) P2 = 0.4181 Exercise $33$ Suppose $X \sim N(5, 81)$. That is, $X$ has gaussian distribution with $\mu$ = 5 and $\sigma^2$ = 81. Use a table of standardized normal distribution to determine $P(3 < X < 9)$ and $P(|X - 5| \le 5)$. Check your results using the m-function gaussian.
Answer $P(3 < X < 9) = \phi ((9 - 5)/9) - \phi ((3 - 5)/9) = \phi(4/9) + \phi(2/9) - 1 = 0.6712 + 0.5875 - 1 = 0.2587$ $P(|X - 5| \le 5) = 2 \phi(5/9) - 1 = 1.4212 - 1 = 0.4212$ P1 = gaussian(5,81,9) - gaussian(5,81,3) P1 = 0.2596 P2 = gaussian(5,81,10) - gaussian(5,81,0) P2 = 0.4181 Exercise $34$ Suppose $X \sim N(3, 64)$. That is, $X$ has gaussian distribution with $\mu$ = 3 and $\sigma^2$ = 64. Use a table of standardized normal distribution to determine $P(1 < X < 9)$ and $P(|X - 3| \le 4)$. Check your results with the m-function gaussian. Answer $P(1 < X < 9) = \phi((9 - 3)/8) - \phi((1 - 3)/8) =$ $\phi(0.75) + \phi(0.25) - 1 = 0.7734 + 0.5987 - 1 = 0.3721$ $P(|X - 3| \le 4) = 2 \phi(4/8) - 1 = 1.3829 - 1 = 0.3829$ P1 = gaussian(3,64,9) - gaussian(3,64,1) P1 = 0.3721 P2 = gaussian(3,64,7) - gaussian(3,64,-1) P2 = 0.3829 Exercise $35$ Items coming off an assembly line have a critical dimension which is represented by a random variable $\sim N(10, 0.01)$. Ten items are selected at random. What is the probability that three or more are within 0.05 of the mean value $\mu$? Answer p = gaussian(10,0.01,10.05) - gaussian(10,0.01,9.95) p = 0.3829 P = cbinom(10,p,3) P = 0.8036 Exercise $36$ The result of extensive quality control sampling shows that a certain model of digital watches coming off a production line have accuracy, in seconds per month, that is normally distributed with $\mu$ = 5 and $\sigma^2$ = 300. To achieve a top grade, a watch must have an accuracy within the range of -5 to +10 seconds per month. What is the probability a watch taken from the production line to be tested will achieve top grade? Calculate, using a standardized normal table. Check with the m-function gaussian. Answer $P(-5 \le X \le 10) = \phi(5/ \sqrt{300}) + \phi(10/\sqrt{300}) - 1 = \phi(0.289) + \phi(0.577) - 1 = 0.614 + 0.717 - 1 = 0.331$ $P =$ gaussian(5, 300, 10) - gaussian(5, 300, -5) = 0.3317 Exercise $37$ Use the m-procedure bincomp with various values of $n$ from 10 to 500 and $p$ from 0.01 to 0.7, to observe the approximation of the binomial distribution by the Poisson. Answer Experiment with the m-procedure bincomp. Exercise $38$ Use the m-procedure poissapp to compare the Poisson and gaussian distributions. Use various values of $\mu$ from 10 to 500. Answer Experiment with the m-procedure poissapp. Exercise $39$ Random variable $X$ has density $f_X(t) = \dfrac{3}{2} t^2$, $-1 \le t \le 1$ (and zero elsewhere). 1. Determine $P(-0.5 \le X < 0.8)$, $P(|X| > 0.5)$, $P(|X - 0.25| \le 0.5)$. 2. Determine an expression for the distribution function. 3. Use the m-procedures tappr and cdbn to plot an approximation to the distribution function. Answer 1. $\dfrac{3}{2} \int t^2 = t^3/2$ $P1 = 0.5 * (0.8^3 - (-0.5)^3) = 0.3185$ $P2 = 2 \int_{0.5}^{1} \dfrac{3}{2} t^2 = 1 - (0.5)^3 = 7/8$ $P3 = P(|X - 0.25| \le 0.5) = P(-0.25 \le X \le 0.75) = \dfrac{1}{2}[(3/4)^3 - (-1/4)^3] = 7/32$ 2. $F_X (t) = \int_{-1}^{t} f_X = \dfrac{1}{2} (t^3 + 1)$ 3. tappr Enter matrix [a b] of x-range endpoints [-1 1] Enter number of x approximation points 200 Enter density as a function of t 1.5*t.^2 Use row matrices X and PX as in the simple case cdbn Enter row matrix of VALUES X Enter row matrix of PROBABILITIES PX % See MATLAB plot Exercise $40$ Random variable $X$ has density function $f_X(t) = t - \dfrac{3}{8}t^2$, $0 \le t \le 2$ (and zero elsewhere). 1. Determine $P(X \le 0.5)$, $P(0.5 \le X < 1.5)$, $P(|X - 1| < 1/4)$. 2. Determine an expression for the distribution function. 3.
Use the m-procedures tappr and cdbn to plot an approximation to the distribution function. Answer 1. $\int (t - \dfrac{3}{8} t^2) = \dfrac{t^2}{2} - \dfrac{t^3}{8}$ $P1 = 0.5^2/2 - 0.5^3/8 = 7/64$ $P2 = 1.5^2 /2 - 1.5^3 /8 - 7/64 = 19/32$ $P3 = 79/256$ 2. $F_X (t) = \dfrac{t^2}{2} - \dfrac{t^3}{8}$, $0 \le t \le 2$ 3. tappr Enter matrix [a b] of x-range endpoints [0 2] Enter number of x approximation points 200 Enter density as a function of t t - (3/8)*t.^2 Use row matrices X and PX as in the simple case cdbn Enter row matrix of VALUES X Enter row matrix of PROBABILITIES PX % See MATLAB plot Exercise $41$ Random variable $X$ has density function $f_X (t) = \begin{cases} (6/5) t^2 & \text{for } 0 \le t \le 1 \\ (6/5) (2 -t) & \text{for } 1 < t \le 2 \end{cases} = I_{[0, 1]} (t) \dfrac{6}{5} t^2 + I_{(1, 2]} (t) \dfrac{6}{5} (2 -t)$ 1. Determine $P(X \le 0.5)$, $P(0.5 \le X < 1.5)$, $P(|X - 1| < 1/4)$. 2. Determine an expression for the distribution function. 3. Use the m-procedures tappr and cdbn to plot an approximation to the distribution function. Answer 1. $P1 = \dfrac{6}{5} \int_{0}^{1/2} t^2 = 1/20$ $P2 = \dfrac{6}{5} \int_{1/2}^{1} t^2 + \dfrac{6}{5} \int_{1}^{3/2} (2 - t) = 4/5$ $P3 = \dfrac{6}{5} \int_{3/4}^{1} t^2 + \dfrac{6}{5} \int_{1}^{5/4} (2 - t) = 79/160$ 2. $F_X (t) = \int_{0}^{t} f_X = I_{[0, 1]} (t) \dfrac{2}{5}t^3 + I_{(1, 2]} (t) [-\dfrac{7}{5} + \dfrac{6}{5} (2t - \dfrac{t^2}{2})]$ 3. tappr Enter matrix [a b] of x-range endpoints [0 2] Enter number of x approximation points 400 Enter density as a function of t (6/5)*(t<=1).*t.^2 + ... (6/5)*(t>1).*(2 - t) Use row matrices X and PX as in the simple case cdbn Enter row matrix of VALUES X Enter row matrix of PROBABILITIES PX % See MATLAB plot
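Once the approximating distribution of part 3 is set up, the probabilities of part 1 may also be checked numerically (a brief sketch reusing the row matrices X and PX produced by the tappr run above, in the same style as Example 11 of the previous section):
M1 = X <= 0.5; p1 = M1*PX'                 % approximately 1/20
M2 = (X >= 0.5)&(X < 1.5); p2 = M2*PX'     % approximately 4/5
M3 = abs(X - 1) < 1/4; p3 = M3*PX'         % approximately 79/160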
A single, real-valued random variable is a function (mapping) from the basic space $\Omega$ to the real line. That is, to each possible outcome $\omega$ of an experiment there corresponds a real value $t = X(\omega)$. The mapping induces a probability mass distribution on the real line, which provides a means of making probability calculations. The distribution is described by a distribution function $F_X$. In the absolutely continuous case, with no point mass concentrations, the distribution may also be described by a probability density function $f_X$. The probability density is the linear density of the probability mass along the real line (i.e., mass per unit length). The density is thus the derivative of the distribution function. For a simple random variable, the probability distribution consists of a point mass $p_i$ at each possible value $t_i$ of the random variable. Various m-procedures and m-functions aid calculations for simple distributions. In the absolutely continuous case, a simple approximation may be set up, so that calculations for the random variable are approximated by calculations on this simple distribution. Often we have more than one random variable. Each can be considered separately, but usually they have some probabilistic ties which must be taken into account when they are considered jointly. We treat the joint case by considering the individual random variables as coordinates of a random vector. We extend the techniques for a single random variable to the multidimensional case. To simplify exposition and to keep calculations manageable, we consider a pair of random variables as coordinates of a two-dimensional random vector. The concepts and results extend directly to any finite number of random variables considered jointly. Random variables considered jointly; random vectors As a starting point, consider a simple example in which the probabilistic interaction between two random quantities is evident. Example 8.1.1: A selection problem Two campus jobs are open. Two juniors and three seniors apply. They seem equally qualified, so it is decided to select them by chance. Each combination of two is equally likely. Let $X$ be the number of juniors selected (possible values 0, 1, 2) and $Y$ be the number of seniors selected (possible values 0, 1, 2). However there are only three possible pairs of values for $(X, Y)$: (0, 2), (1, 1), or (2, 0). Others have zero probability, since they are impossible. Determine the probability for each of the possible pairs. Solution There are $C(5, 2) = 10$ equally likely pairs. Only one pair can be both juniors. Six pairs can be one of each. There are $C(3, 2) = 3$ ways to select pairs of seniors. Thus $P(X = 0, Y = 2) = 3/10$, $P(X = 1, Y = 1) = 6/10$, $P(X = 2, Y = 0) = 1/10$ These probabilities add to one, as they must, since this exhausts the mutually exclusive possibilities. The probability of any other combination must be zero. We also have the distributions for the random variables considered individually. $X =$ [0 1 2] $PX =$ [3/10 6/10 1/10] $Y =$ [0 1 2] $PY =$ [1/10 6/10 3/10] We thus have a joint distribution and two individual or marginal distributions. We formalize as follows: A pair $\{X, Y\}$ of random variables considered jointly is treated as the pair of coordinate functions for a two-dimensional random vector $W = (X, Y)$. To each $\omega \in \Omega$, $W$ assigns the pair of real numbers $(t, u)$, where $X(\omega) = t$ and $Y(\omega) = u$.
If we represent the pair of values $\{t, u\}$ as the point $(t, u)$ on the plane, then $W(\omega) = (t, u)$, so that $W = (X, Y): \Omega \to R^2$ is a mapping from the basic space $\Omega$ to the plane $R^2$. Since $W$ is a function, all mapping ideas extend. The inverse mapping $W^{-1}$ plays a role analogous to that of the inverse mapping $X^{-1}$ for a real random variable. A two-dimensional vector $W$ is a random vector iff $W^{-1}(Q)$ is an event for each reasonable set (technically, each Borel set) on the plane. A fundamental result from measure theory ensures $W = (X, Y)$ is a random vector iff each of the coordinate functions $X$ and $Y$ is a random variable. In the selection example above, we model $X$ (the number of juniors selected) and $Y$ (the number of seniors selected) as random variables. Hence the vector-valued function $W = (X, Y)$ is a random vector. Induced distribution and the joint distribution function In a manner parallel to that for the single-variable case, we obtain a mapping of probability mass from the basic space to the plane. Since $W^{-1}(Q)$ is an event for each reasonable set $Q$ on the plane, we may assign to $Q$ the probability mass $P_{XY} (Q) = P[W^{-1}(Q)] = P[(X, Y)^{-1} (Q)]$ Because of the preservation of set operations by inverse mappings as in the single-variable case, the mass assignment determines $P_{XY}$ as a probability measure on the subsets of the plane $R^2$. The argument parallels that for the single-variable case. The result is the probability distribution induced by $W = (X, Y)$. To determine the probability that the vector-valued function $W = (X, Y)$ takes on a (vector) value in region $Q$, we simply determine how much induced probability mass is in that region. Example 8.1.2: Induced distribution and probability calculations To determine $P(1 \le X \le 3, Y > 0)$, we determine the region for which the first coordinate value (which we call $t$) is between one and three and the second coordinate value (which we call $u$) is greater than zero. This corresponds to the set $Q$ of points on the plane with $1 \le t \le 3$ and $u > 0$. Geometrically, this is the strip on the plane bounded by (but not including) the horizontal axis and by the vertical lines $t = 1$ and $t = 3$ (included). The problem is to determine how much probability mass lies in that strip. How this is achieved depends upon the nature of the distribution and how it is described. As in the single-variable case, we have a distribution function. Definition: Joint Distribution Function The joint distribution function $F_{XY}$ for $W = (X, Y)$ is given by $F_{XY} (t, u) = P(X \le t, Y \le u) \quad \forall (t, u) \in R^2$ This means that $F_{XY} (t, u)$ is equal to the probability mass in the region $Q_{tu}$ on the plane such that the first coordinate is less than or equal to $t$ and the second coordinate is less than or equal to $u$. Formally, we may write $F_{XY} (t, u) = P[(X, Y) \in Q_{tu}]$, where $Q_{tu} = \{(r, s) : r \le t, s \le u\}$ Now for a given point ($a, b$), the region $Q_{ab}$ is the set of points ($t, u$) on the plane which are on or to the left of the vertical line through ($a$, 0) and on or below the horizontal line through (0, $b$) (see Figure 8.1.1 for the specific point $t = a, u = b$). We refer to such regions as semiinfinite intervals on the plane. The theoretical result quoted in the real variable case extends to ensure that a distribution on the plane is determined uniquely by consistent assignments to the semiinfinite intervals $Q_{tu}$.
Thus, the induced distribution is determined completely by the joint distribution function. Figure 8.1.1. The region $Q_{ab}$ for the value $F_{XY} (a, b)$. Distribution function for a discrete random vector The induced distribution consists of point masses. At point $(t_i, u_j)$ in the range of $W = (X, Y)$ there is probability mass $P_{ij} = P[W = (t_i, u_j)] = P(X = t_i, Y = u_j)$. As in the general case, to determine $P[(X, Y) \in Q]$ we determine how much probability mass is in the region. In the discrete case (or in any case where there are point mass concentrations) one must be careful to note whether or not the boundaries are included in the region, should there be mass concentrations on the boundary. Figure 8.1.2. The joint distribution for Example 8.1.3. Example 8.1.3: distribution function for the selection problem in Example 8.1.1 The probability distribution is quite simple. Mass 3/10 at (0,2), 6/10 at (1,1), and 1/10 at (2,0). This distribution is plotted in Figure 8.1.2. To determine (and visualize) the joint distribution function, think of moving the point $(t, u)$ on the plane. The region $Q_{tu}$ is a giant “sheet” with corner at $(t, u)$. The value of $F_{XY} (t, u)$ is the amount of probability covered by the sheet. This value is constant over any grid cell, including the left-hand and lower boundaries, and is the value taken on at the lower left-hand corner of the cell. Thus, if $(t, u)$ is in any of the three squares on the lower left hand part of the diagram, no probability mass is covered by the sheet with corner in the cell. If $(t, u)$ is on or in the square having probability 6/10 at the lower left-hand corner, then the sheet covers that probability, and the value of $F_{XY} (t, u) = 6/10$. The situation in the other cells may be checked out by this procedure. Distribution function for a mixed distribution Example 8.1.4: A Mixed Distribution The pair $\{X, Y\}$ produces a mixed distribution as follows (see Figure 8.1.3) Point masses 1/10 at points (0,0), (1,0), (1,1), (0,1) Mass 6/10 spread uniformly over the unit square with these vertices The joint distribution function is zero in the second, third, and fourth quadrants. • If the point $(t, u)$ is in the square or on the left and lower boundaries, the sheet covers the point mass at (0,0) plus 0.6 times the area covered within the square. Thus in this region $F_{XY} (t, u) = \dfrac{1}{10} (1 + 6tu)$ • If the point $(t, u)$ is above the square (including its upper boundary) but to the left of the line $t = 1$, the sheet covers two point masses plus the portion of the mass in the square to the left of the vertical line through $(t, u)$. In this case $F_{XY} (t, u) = \dfrac{1}{10} (2 + 6t)$ • If the point $(t, u)$ is to the right of the square (including its boundary) with $0 \le u < 1$, the sheet covers two point masses and the portion of the mass in the square below the horizontal line through $(t, u)$, to give $F_{XY} (t, u) = \dfrac{1}{10} (2 + 6u)$ • If $(t, u)$ is above and to the right of the square (i.e., both $1 \le t$ and $1 \le u$), then all probability mass is covered and $F_{XY} (t, u) = 1$ in this region. Figure 8.1.3. Mixed joint distribution for Example 8.1.4. Marginal Distributions If the joint distribution for a random vector is known, then the distribution for each of the component random variables may be determined. These are known as marginal distributions. In general, the converse is not true.
However, if the component random variables form an independent pair, the treatment in that case shows that the marginals determine the joint distribution. To begin the investigation, note that $F_X (t) = P(X \le t) = P(X \le t, Y < \infty)$ i.e. $Y$ can take any of its possible values. Thus $F_X(t) = F_{XY}(t, \infty) = \text{lim}_{u \to \infty} F_{XY} (t, u)$ This may be interpreted with the aid of Figure 8.1.4. Consider the sheet for point $(t, u)$. Figure 8.1.4. Construction for obtaining the marginal distribution for $X$. If we push the point up vertically, the upper boundary of $Q_{tu}$ is pushed up until eventually all probability mass on or to the left of the vertical line through $(t, u)$ is included. This is the total probability that $X \le t$. Now $F_X(t)$ describes probability mass on the line. The probability mass described by $F_X(t)$ is the same as the total joint probability mass on or to the left of the vertical line through $(t, u)$. We may think of the mass in the half plane being projected onto the horizontal line to give the marginal distribution for $X$. A parallel argument holds for the marginal for $Y$. $F_{Y} (u) = P(Y \le u) = F_{XY} (\infty, u) =$ mass on or below the horizontal line through ($t, u$) This mass is projected onto the vertical axis to give the marginal distribution for $Y$. Marginals for a joint discrete distribution Consider a joint simple distribution. $P(X = t_i) = \sum_{j = 1}^{m} P(X = t_i, Y = u_j)$ and $P(Y = u_j) = \sum_{i = 1}^{n} P(X = t_i, Y = u_j)$ Thus, all the probability mass on the vertical line through ($t_i, 0$) is projected onto the point $t_i$ on a horizontal line to give $P(X = t_i)$. Similarly, all the probability mass on a horizontal line through $(0, u_j)$ is projected onto the point $u_j$ on a vertical line to give $P(Y = u_j)$. Example 8.1.5: Marginals for a discrete distribution The pair $\{X, Y\}$ produces a joint distribution that places mass 2/10 at each of the five points (0, 0), (1, 1), (2, 0), (2, 2), (3, 1) (See Figure 8.1.5) The marginal distribution for $X$ has masses 2/10, 2/10, 4/10, 2/10 at points $t =$ 0, 1, 2, 3, respectively. Similarly, the marginal distribution for $Y$ has masses 4/10, 4/10, 2/10 at points $u =$ 0, 1, 2, respectively. Figure 8.1.5. Marginal distribution for Example 8.1.5. Consider again the joint distribution in Example 8.1.4. The pair $\{X, Y\}$ produces a mixed distribution as follows: Point masses 1/10 at points (0,0), (1,0), (1,1), (0,1) Mass 6/10 spread uniformly over the unit square with these vertices The construction in Figure 8.1.6 shows the graph of the marginal distribution function $F_X$. There is a jump in the amount of 0.2 at $t = 0$, corresponding to the two point masses on the vertical line. Then the mass increases linearly with $t$, slope 0.6, until a final jump at $t = 1$ in the amount of 0.2 produced by the two point masses on the vertical line. At $t = 1$, the total mass is “covered” and $F_X(t)$ is constant at one for $t \ge 1$. By symmetry, the marginal distribution for $Y$ is the same. Figure 8.1.6. Marginal distribution for Example 8.1.6
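The projection idea is easy to carry out numerically for a discrete joint distribution. The following is a minimal sketch for Example 8.1.5, with the joint probabilities entered as they appear on the plane (the top row of the matrix corresponds to the largest value of $Y$); summing the columns projects mass onto the horizontal axis, and summing the rows projects it onto the vertical axis (variable names are illustrative only):
X = 0:3;  Y = 0:2;                       % values of X and Y
P = [0   0   0.2 0;                      % joint probabilities arranged as on the plane
     0   0.2 0   0.2;                    % (top row corresponds to Y = 2)
     0.2 0   0.2 0  ];
PX = sum(P)                              % column sums give the marginal for X: [0.2 0.2 0.4 0.2]
PY = fliplr(sum(P,2)')                   % row sums, reordered for increasing Y: [0.4 0.4 0.2]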
m-procedures for a pair of simple random variables We examine, first, calculations on a pair of simple random variables $X, Y$ considered jointly. These are, in effect, two components of a random vector $W = (X, Y)$, which maps from the basic space $\Omega$ to the plane. The induced distribution is on the $(t, u)$-plane. Values on the horizontal axis ($t$-axis) correspond to values of the first coordinate random variable $X$ and values on the vertical axis ($u$-axis) correspond to values of $Y$. We extend the computational strategy used for a single random variable. First, let us review the one-variable strategy. In this case, data consist of values $t_i$ and corresponding probabilities arranged in matrices $X = [t_1, t_2, \cdot\cdot\cdot, t_n]$ and $PX = [P(X = t_1), P(X = t_2), \cdot\cdot\cdot, P(X = t_n)]$ To perform calculations on $Z = g(X)$, we use array operations on $X$ to form a matrix $G = [g(t_1) g(t_2) \cdot\cdot\cdot g(t_n)]$ which has $g(t_i)$ in a position corresponding to $P(X = t_i)$ in matrix $PX$. Basic problem. Determine $P(g(X) \in M)$, where $M$ is some prescribed set of values. • Use relational operations to determine the positions for which $g(t_i) \in M$. These will be in a zero-one matrix $N$, with ones in the desired positions. • Select the $P(X = t_i)$ in the corresponding positions and sum. This is accomplished by one of the MATLAB operations to determine the inner product of $N$ and $PX$. We extend these techniques and strategies to a pair of simple random variables, considered jointly. The data for a pair $\{X, Y\}$ of random variables are the values of $X$ and $Y$, which we may put in row matrices $X = [t_1 t_2 \cdot\cdot\cdot t_n]$ and $Y = [u_1 u_2 \cdot\cdot\cdot u_m]$ and the joint probabilities $P(X = t_i, Y = u_j)$ in a matrix $P$. We usually represent the distribution graphically by putting probability mass $P(X = t_i, Y = u_j)$ at the point $(t_i, u_j)$ on the plane. This joint probability is represented by the matrix $P$ with elements arranged corresponding to the mass points on the plane. Thus $P$ has element $P(X = t_i, Y = u_j)$ at the $(t_i, u_j)$ position. To perform calculations, we form computational matrices $t$ and $u$ such that — $t$ has element $t_i$ at each $(t_i, u_j)$ position (i.e., at each point on the $i$th column from the left) — $u$ has element $u_j$ at each $(t_i, u_j)$ position (i.e., at each point on the $j$th row from the bottom) MATLAB array and logical operations on $t, u, P$ perform the specified operations on $t_i, u_j$, and $P(X = t_i, Y = u_j)$ at each $(t_i, u_j)$ position, in a manner analogous to the operations in the single-variable case. Formation of the t and u matrices is achieved by a basic setup m-procedure called jcalc. The data for this procedure are in three matrices: $X = [t_1, t_2, \cdot\cdot\cdot, t_n]$ is the set of values for random variable $X$, $Y = [u_1, u_2, \cdot\cdot\cdot, u_m]$ is the set of values for random variable $Y$, and $P = [p_{ij}]$, where $p_{ij} = P(X = t_i, Y = u_j)$. We arrange the joint probabilities as on the plane, with $X$-values increasing to the right and $Y$-values increasing upward. This is different from the usual arrangement in a matrix, in which values of the second variable increase downward. The m-procedure takes care of this inversion. The m-procedure forms the matrices $t$ and $u$, utilizing the MATLAB function meshgrid, and computes the marginal distributions for $X$ and $Y$. In the following example, we display the various steps utilized in the setup procedure.
Ordinarily, these intermediate steps would not be displayed. Example 8.2.7: Setup and basic calculations >> jdemo4 % Call for data in file jdemo4.m >> jcalc % Call for setup procedure Enter JOINT PROBABILITIES (as on the plane) P Enter row matrix of VALUES of X X Enter row matrix of VALUES of Y Y Use array operations on matrices X, Y, PX, PY, t, u, and P >> disp(P) % Optional call for display of P 0.0360 0.0198 0.0297 0.0209 0.0180 0.0372 0.0558 0.0837 0.0589 0.0744 0.0516 0.0774 0.1161 0.0817 0.1032 0.0264 0.0270 0.0405 0.0285 0.0132 >> PX % Optional call for display of PX PX = 0.1512 0.1800 0.2700 0.1900 0.2088 >> PY % Optional call for display of PY PY = 0.1356 0.4300 0.3100 0.1244 - - - - - - - - - - % Steps performed by jcalc >> PX = sum(P) % Calculation of PX as performed by jcalc PX = 0.1512 0.1800 0.2700 0.1900 0.2088 >> PY = fliplr(sum(P')) % Calculation of PY (note reversal) PY = 0.1356 0.4300 0.3100 0.1244 >> [t,u] = meshgrid(X,fliplr(Y)); % Formation of t, u matrices (note reversal) >> disp(t) % Display of calculating matrix t -3 0 1 3 5 % A row of X-values for each value of Y -3 0 1 3 5 -3 0 1 3 5 -3 0 1 3 5 >> disp(u) % Display of calculating matrix u 2 2 2 2 2 % A column of Y-values (increasing 1 1 1 1 1 % upward) for each value of X 0 0 0 0 0 -2 -2 -2 -2 -2 Suppose we wish to determine the probability $P(X^2 - 3Y \ge 1)$. Using array operations on $t$ and $u$, we obtain the matrix $G = [g(t_i, u_j)]$. >> G = t.^2 - 3*u % Formation of G = [g(t_i,u_j)] matrix G = 3 -6 -5 3 19 6 -3 -2 6 22 9 0 1 9 25 15 6 7 15 31 >> M = G >= 1 % Positions where G >= 1 M = 1 0 0 1 1 1 0 0 1 1 1 0 1 1 1 1 1 1 1 1 >> pM = M.*P % Selection of probabilities pM = 0.0360 0 0 0.0209 0.0180 0.0372 0 0 0.0589 0.0744 0.0516 0 0.1161 0.0817 0.1032 0.0264 0.0270 0.0405 0.0285 0.0132 >> PM = total(pM) % Total of selected probabilities PM = 0.7336 % P(g(X,Y) >= 1) In Example 8.1.3 from "Random Vectors and Joint Distributions" we note that the joint distribution function $F_{XY}$ is constant over any grid cell, including the left-hand and lower boundaries, at the value taken on at the lower left-hand corner of the cell. These lower left-hand corner values may be obtained systematically from the joint probability matrix P by a two step operation. • Take cumulative sums upward of the columns of $P$. • Take cumulative sums of the rows of the resultant matrix. This can be done with the MATLAB function cumsum, which takes column cumulative sums downward. By flipping the matrix and transposing, we can achieve the desired results. Example 8.2.8: Calculation of FXY values for Example 8.3 from "Random Vectors and Joint Distributions" >> P = 0.1*[3 0 0; 0 6 0; 0 0 1]; >> FXY = flipud(cumsum(flipud(P))) % Cumulative column sums upward FXY = 0.3000 0.6000 0.1000 0 0.6000 0.1000 0 0 0.1000 >> FXY = cumsum(FXY')' % Cumulative row sums FXY = 0.3000 0.9000 1.0000 0 0.6000 0.7000 0 0 0.1000 Figure 8.2.7. The joint distribution for Example 8.1.3 in "Random Vectors and Joint Distributions'. Comparison with Example 8.3 from "Random Vectors and Joint Distributions" shows agreement with values obtained by hand. The two step procedure has been incorprated into an m-procedure jddbn. 
As an example, return to the distribution in Example 8.2.7. Example 8.2.9: Joint distribution function for Example 8.2.7 >> jddbn Enter joint probability matrix (as on the plane) P To view joint distribution function, call for FXY >> disp(FXY) 0.1512 0.3312 0.6012 0.7912 1.0000 0.1152 0.2754 0.5157 0.6848 0.8756 0.0780 0.1824 0.3390 0.4492 0.5656 0.0264 0.0534 0.0939 0.1224 0.1356 These values may be put on a grid, in the same manner as in Figure 8.1.2 for Example 8.1.3 in "Random Vectors and Joint Distributions". As in the case of canonic for a single random variable, it is often useful to have a function version of the procedure jcalc to provide the freedom to name the outputs conveniently. function [x,y,t,u,px,py,p] = jcalcf(X,Y,P) The quantities $x, y, t, u, px, py$, and $p$ may be given any desired names. Joint absolutely continuous random variables In the single-variable case, the condition that there are no point mass concentrations on the line ensures the existence of a probability density function, useful in probability calculations. A similar situation exists for a joint distribution for two (or more) variables. For any joint mapping to the plane which assigns zero probability to each set with zero area (discrete points, line or curve segments, and countable unions of these) there is a density function. Definition If the joint probability distribution for the pair $\{X, Y\}$ assigns zero probability to every set of points with zero area, then there exists a joint density function $f_{XY}$ with the property $P[(X, Y) \in Q] = \int \int_{Q} f_{XY}$ We have three properties analogous to those for the single-variable case: (f1) $f_{XY} \ge 0$ (f2) $\int \int_{R^2} f_{XY} = 1$ (f3) $F_{XY} (t,u) = \int_{-\infty}^{t} \int_{-\infty}^{u} f_{XY}$ At every continuity point for $f_{XY}$, the density is the second partial $f_{XY} (t, u) = \dfrac{\partial^2 F_{XY} (t, u)}{\partial t \partial u}$ Now $F_X (t) = F_{XY} (t, \infty) = \int_{-\infty}^{t} \int_{-\infty}^{\infty} f_{XY} (r, s)\, ds\, dr$ A similar expression holds for $F_Y(u)$. Use of the fundamental theorem of calculus to obtain the derivatives gives the result $f_X(t) = \int_{-\infty}^{\infty} f_{XY}(t, s)\, ds$ and $f_Y(u) = \int_{-\infty}^{\infty} f_{XY} (r, u)\, dr$ Marginal densities. Thus, to obtain the marginal density for the first variable, integrate out the second variable in the joint density, and similarly for the marginal for the second variable. Example 8.2.10: Marginal density functions Let $f_{XY} (t, u) = 8tu$, $0 \le u \le t \le 1$. This region is the triangle bounded by $u = 0$, $u = t$, and $t = 1$ (see Figure 8.2.8) $f_X(t) = \int f_{XY} (t, u)\, du = 8t \int_{0}^{t} u\, du = 4t^3$, $0 \le t \le 1$ $f_Y(u) = \int f_{XY} (t, u)\, dt = 8u \int_{u}^{1} t\, dt = 4u(1 - u^2)$, $0 \le u \le 1$ $P(0.5 \le X \le 0.75, Y > 0.5) = P[(X, Y) \in Q]$ where $Q$ is the common part of the triangle with the strip between $t = 0.5$ and $t = 0.75$ and above the line $u = 0.5$. This is the small triangle bounded by $u = 0.5$, $u = t$, and $t = 0.75$. Thus $p = 8 \int_{1/2}^{3/4} \int_{1/2}^{t} tu\, du\, dt = 25/256 \approx 0.0977$ Figure 8.2.8. Distribution for Example 8.2.10 Example 8.2.11: Marginal distribution with compound expression The pair $\{X, Y\}$ has joint density $f_{XY}(t, u) = \dfrac{6}{37} (t + 2u)$ on the region bounded by $t = 0, t = 2, u = 0$ and $u = \text{max}\ \{1, t\}$ (see Figure 8.2.9). Determine the marginal density $f_X$.
Solution Examination of the figure shows that we have different limits for the integral with respect to $u$ for $0 \le t \le 1$ and for $1 < t \le 2$. • For $0 \le t \le 1$ $f_X(t) = \dfrac{6}{37} \int_{0}^{1} (t + 2u)\, du = \dfrac{6}{37} (t + 1)$ • For $1 < t \le 2$ $f_X (t) = \dfrac{6}{37} \int_{0}^{t} (t + 2u)\, du = \dfrac{12}{37} t^2$ We may combine these into a single expression in a manner used extensively in subsequent treatments. Suppose $M = [0, 1]$ and $N = (1, 2]$. Then $I_M(t) = 1$ for $t \in M$ (i.e., $0 \le t \le 1$) and zero elsewhere. Likewise, $I_{N} (t) = 1$ for $t \in N$ and zero elsewhere. We can therefore express $f_X$ by $f_X(t) = I_M(t) \dfrac{6}{37} (t + 1) + I_N(t) \dfrac{12}{37} t^2$ Figure 8.2.9. Marginal distribution for Example 8.2.11 Discrete approximation in the continuous case For a pair $\{X, Y\}$ with joint density $f_{XY}$, we approximate the distribution in a manner similar to that for a single random variable. We then utilize the techniques developed for a pair of simple random variables. If we have $n$ approximating values $t_i$ for $X$ and $m$ approximating values $u_j$ for $Y$, we then have $n \cdot m$ pairs $(t_i, u_j)$, corresponding to points on the plane. If we subdivide the horizontal axis for values of $X$, with constant increments $dx$, as in the single-variable case, and the vertical axis for values of $Y$, with constant increments $dy$, we have a grid structure consisting of rectangles of size $dx \cdot dy$. We select $t_i$ and $u_j$ at the midpoint of each increment, so that the point $(t_i, u_j)$ is at the midpoint of the rectangle. If we let the approximating pair be $\{X^*, Y^*\}$, we assign $p_{ij} = P((X^*, Y^*) = (t_i, u_j)) = P(X^* = t_i, Y^* = u_j) = P((X, Y) \text{ in } ij \text{th rectangle})$ As in the one-variable case, if the increments are small enough, $P((X, Y) \in ij \text{th rectangle}) \approx dx \cdot dy \cdot f_{XY}(t_i, u_j)$ The m-procedure tuappr calls for endpoints of intervals which include the ranges of $X$ and $Y$ and for the numbers of subintervals on each. It then prompts for an expression for $f_{XY} (t, u)$, from which it determines the joint probability distribution. It calculates the marginal approximate distributions and sets up the calculating matrices $t$ and $u$ as does the m-procedure jcalc for simple random variables. Calculations are then carried out as for any joint simple pair. Example 8.2.12: Approximation to a joint continuous distribution $f_{XY} (t, u) = 3$ on $0 \le u \le t^2 \le 1$. Determine $P(X \le 0.8, Y > 0.1)$. >> tuappr Enter matrix [a b] of X-range endpoints [0 1] Enter matrix [c d] of Y-range endpoints [0 1] Enter number of X approximation points 200 Enter number of Y approximation points 200 Enter expression for joint density 3*(u <= t.^2) Use array operations on X, Y, PX, PY, t, u, and P >> M = (t <= 0.8)&(u > 0.1); >> p = total(M.*P) % Evaluation of the integral with p = 0.3355 % Maple gives 0.3352455531 The discrete approximation may be used to obtain approximate plots of marginal distribution and density functions. Figure 8.2.10. Marginal density and distribution function for Example 8.2.13 Example 8.2.13: Approximate plots of marginal density and distribution functions $f_{XY} (t, u) = 3u$ on the triangle bounded by $u = 0$, $u = 1 + t$, and $u = 1 - t$.
>> tuappr Enter matrix [a b] of X-range endpoints [-1 1] Enter matrix [c d] of Y-range endpoints [0 1] Enter number of X approximation points 400 Enter number of Y approximation points 200 Enter expression for joint density 3*u.*(u<=min(1+t,1-t)) Use array operations on X, Y, PX, PY, t, u, and P >> fx = PX/dx; % Density for X (see Figure 8.2.10) % Theoretical (3/2)(1 - |t|)^2 >> fy = PY/dy; % Density for Y >> FX = cumsum(PX); % Distribution function for X (Figure 8.2.10) >> FY = cumsum(PY); % Distribution function for Y >> plot(X,fx,X,FX) % Plotting details omitted These approximation techniques are useful in dealing with functions of random variables, expectations, and conditional expectation and regression.
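As a quick check on Example 8.2.13 (a sketch not in the original; it assumes the tuappr session above has just been run, so that X, PX, and dx are still in the workspace), the approximate marginal density may be compared with the theoretical expression noted in the comments:
fx  = PX/dx;                  % approximate density values at the grid points X
fxt = (3/2)*(1 - abs(X)).^2;  % theoretical marginal density (3/2)(1 - |t|)^2
disp(max(abs(fx - fxt)))      % maximum discrepancy over the grid
A small value here indicates the subdivision is fine enough for plotting purposes.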
Exercise $1$ Two cards are selected at random, without replacement, from a standard deck. Let $X$ be the number of aces and $Y$ be the number of spades. Under the usual assumptions, determine the joint distribution and the marginals. Answer Let $X$ be the number of aces and $Y$ be the number of spades. Define the events $AS_i$, $A_i$, $S_i$, and $N_i$, $i = 1, 2$ of drawing ace of spades, other ace, spade (other than the ace), and neither on the i selection. Let $P(i, k) = P(X = i, Y = k)$. $P(0, 0) = P(N_1N_2) = \dfrac{36}{52} \cdot \dfrac{35}{51} = \dfrac{1260}{2652}$ $P(0, 1) = P(N_1S_2 \bigvee S_1N_2) = \dfrac{36}{52} \cdot \dfrac{12}{51} + \dfrac{12}{52} \cdot \dfrac{36}{51} = \dfrac{864}{2652}$ $P(0, 2) = P(S_1 S_2) = \dfrac{12}{52} \cdot \dfrac{11}{51} = \dfrac{132}{2652}$ $P(1, 0) = P(A_N_2 \bigvee N_1 S_2) = \dfrac{3}{52} \cdot \dfrac{36}{51} + \dfrac{36}{52} \cdot \dfrac{3}{51} = \dfrac{216}{2652}$ $P(1, 1) = P(A_1S_2 \bigvee S_1A_2 \bigvee AS_1N_2 \bigvee N_1AS_2) = \dfrac{3}{52} \cdot \dfrac{12}{51} + \dfrac{12}{52} \cdot \dfrac{3}{51} + \dfrac{1}{52} \cdot \dfrac{36}{51} + \dfrac{36}{52} \cdot \dfrac{1}{51} = \dfrac{144}{2652}$ $P(1, 2) = P(AS_1S_2 \bigvee S_1AS_2) = \dfrac{1}{52} \cdot \dfrac{12}{51} + \dfrac{12}{52} \cdot \dfrac{1}{51} = \dfrac{24}{2652}$ $P(2, 0) = P(A_1A_2) = \dfrac{3}{52} \cdot \dfrac{2}{51} = \dfrac{6}{2652}$ $P(2, 1) = P(AS_1A_2 \bigvee A_1AS_2) = \dfrac{1}{52} \cdot \dfrac{3}{51} + \dfrac{3}{52} \cdot \dfrac{1}{51} = \dfrac{6}{2652}$ $P(2, 2) = P(\emptyset) = 0$ % type npr08_01 % file npr08_01.m % Solution for Exercise 8.3.1. X = 0:2; Y = 0:2; Pn = [132 24 0; 864 144 6; 1260 216 6]; P = Pn/(52*51); disp('Data in Pn, P, X, Y') npr08_01 % Call for mfile Data in Pn, P, X, Y % Result PX = sum(P) PX = 0.8507 0.1448 0.0045 PY = fliplr(sum(P')) PY = 0.5588 0.3824 0.0588 Exercise $2$ Two positions for campus jobs are open. Two sophomores, three juniors, and three seniors apply. It is decided to select two at random (each possible pair equally likely). Let $X$ be the number of sophomores and $Y$ be the number of juniors who are selected. Determine the joint distribution for the pair $\{X, Y\}$ and from this determine the marginals for each. Answer Let $A_i, B_i, C_i$ be the events of selecting a sophomore, junior, or senior, respectively, on the $i$th trial. Let $X$ be the number of sophomores and $Y$ be the number of juniors selected. Set $P(i, k) = P(X = i, Y = k)$ $P(0, 0) = P(C_1C_2) = \dfrac{3}{8} \cdot \dfrac{2}{7} = \dfrac{6}{56}$ $P(0, 1) = P(B_1C_2) + P(C_1B_2) = \dfrac{3}{8} \cdot \dfrac{3}{7} + \dfrac{3}{8} \cdot \dfrac{3}{7} = \dfrac{18}{56}$ $P(0, 2) = P(B_1B_2) = \dfrac{3}{8} \cdot \dfrac{2}{7} = \dfrac{6}{56}$ $P(1, 0) = P(A_1C_2) + P(C_1A_2) = \dfrac{2}{8} \cdot \dfrac{3}{7} + \dfrac{3}{8} \cdot \dfrac{2}{7} = \dfrac{12}{56}$ $P(1, 1) = P(A_1B_2) + P(B_1A_2) = \dfrac{2}{8} \cdot \dfrac{3}{7} + \dfrac{3}{8} \cdot \dfrac{2}{7} = \dfrac{12}{56}$ $P(2, 0) = P(A_1A_2) = \dfrac{2}{8} \cdot \dfrac{1}{7} = \dfrac{2}{56}$ $P(1, 2) = P(2, 1) = P(2, 2) = 0$ $PX =$ [30/56 24/56 2/56] $PY =$ [20/56 30/56 6/56] % file npr08_02.m % Solution for Exercise 8.3.2. X = 0:2; Y = 0:2; Pn = [6 0 0; 18 12 0; 6 12 2]; P = Pn/56; disp('Data are in X, Y,Pn, P') npr08_02 Data are in X, Y,Pn, P PX = sum(P) PX = 0.5357 0.4286 0.0357 PY = fliplr(sum(P')) PY = 0.3571 0.5357 0.1071 Exercise $3$ A die is rolled. Let $X$ be the number that turns up. A coin is flipped $X$ times. Let $Y$ be the number of heads that turn up. Determine the joint distribution for the pair $\{X, Y\}$. 
Assume $P(X = k) = 1/6$ for $1 \le k \le 6$ and for each $k$, $P(Y = j|X = k)$ has the binomial ($k$, 1/2) distribution. Arrange the joint matrix as on the plane, with values of $Y$ increasing upward. Determine the marginal distribution for $Y$. (For a MATLAB based way to determine the joint distribution see Example 14.1.7 from "Conditional Expectation, Regression") Answer $P(X = i, Y = k) = P(X = i) P(Y = k|X = i) = (1/6) P(Y = k|X = i)$. % file npr08_03.m % Solution for Exercise 8.3.3. X = 1:6; Y = 0:6; P0 = zeros(6,7); % Initialize for i = 1:6 % Calculate rows of Y probabilities P0(i,1:i+1) = (1/6)*ibinom(i,1/2,0:i); end P = rot90(P0); % Rotate to orient as on the plane PY = fliplr(sum(P')); % Reverse to put in normal order disp('Answers are in X, Y, P, PY') npr08_03 % Call for solution m-file Answers are in X, Y, P, PY disp(P) 0 0 0 0 0 0.0026 0 0 0 0 0.0052 0.0156 0 0 0 0.0104 0.0260 0.0391 0 0 0.0208 0.0417 0.0521 0.0521 0 0.0417 0.0625 0.0625 0.0521 0.0391 0.0833 0.0833 0.0625 0.0417 0.0260 0.0156 0.0833 0.0417 0.0208 0.0104 0.0052 0.0026 disp(PY) 0.1641 0.3125 0.2578 0.1667 0.0755 0.0208 0.0026 Exercise $4$ As a variation of Exercise 8.3.3., Suppose a pair of dice is rolled instead of a single die. Determine the joint distribution for the pair $\{X, Y\}$ and from this determine the marginal distribution for $Y$. Answer % file npr08_04.m % Solution for Exercise 8.3.4. X = 2:12; Y = 0:12; PX = (1/36)*[1 2 3 4 5 6 5 4 3 2 1]; P0 = zeros(11,13); for i = 1:11 P0(i,1:i+2) = PX(i)*ibinom(i+1,1/2,0:i+1); end P = rot90(P0); PY = fliplr(sum(P')); disp('Answers are in X, Y, PY, P') npr08_04 Answers are in X, Y, PY, P disp(P) Columns 1 through 7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0.0005 0 0 0 0 0 0.0013 0.0043 0 0 0 0 0.0022 0.0091 0.0152 0 0 0 0.0035 0.0130 0.0273 0.0304 0 0 0.0052 0.0174 0.0326 0.0456 0.0380 0 0.0069 0.0208 0.0347 0.0434 0.0456 0.0304 0.0069 0.0208 0.0312 0.0347 0.0326 0.0273 0.0152 0.0139 0.0208 0.0208 0.0174 0.0130 0.0091 0.0043 0.0069 0.0069 0.0052 0.0035 0.0022 0.0013 0.0005 Columns 8 through 11 0 0 0 0.0000 0 0 0.0000 0.0001 0 0.0001 0.0003 0.0004 0.0002 0.0008 0.0015 0.0015 0.0020 0.0037 0.0045 0.0034 0.0078 0.0098 0.0090 0.0054 0.0182 0.0171 0.0125 0.0063 0.0273 0.0205 0.0125 0.0054 0.0273 0.0171 0.0090 0.0034 0.0182 0.0098 0.0045 0.0015 0.0078 0.0037 0.0015 0.0004 0.0020 0.0008 0.0003 0.0001 0.0002 0.0001 0.0000 0.0000 disp(PY) Columns 1 through 7 0.0269 0.1025 0.1823 0.2158 0.1954 0.1400 0.0806 Columns 8 through 13 0.0375 0.0140 0.0040 0.0008 0.0001 0.0000 Exercise $5$ Suppose a pair of dice is rolled. Let $X$ be the total number of spots which turn up. Roll the pair an additional $X$ times. Let $Y$ be the number of sevens that are thrown on the $X$ rolls. Determine the joint distribution for the pair $\{X, Y\}$ and from this determine the marginal distribution for $Y$. What is the probability of three or more sevens? Answer % file npr08_05.m % Data and basic calculations for Exercise 8.3.5. 
PX = (1/36)*[1 2 3 4 5 6 5 4 3 2 1]; X = 2:12; Y = 0:12; P0 = zeros(11,13); for i = 1:11 P0(i,1:i+2) = PX(i)*ibinom(i+1,1/6,0:i+1); end P = rot90(P0); PY = fliplr(sum(P')); disp('Answers are in X, Y, P, PY') npr08_05 Answers are in X, Y, P, PY disp(PY) Columns 1 through 7 0.3072 0.3660 0.2152 0.0828 0.0230 0.0048 0.0008 Columns 8 through 13 0.0001 0.0000 0.0000 0.0000 0.0000 0.0000 Exercise $6$ The pair $\{X, Y\}$ has the joint distribution (in m-file npr08_06.m): $X =$ [-2.3 -0.7 1.1 3.9 5.1] $Y =$ = [1.3 2.5 4.1 5.3] Determine the marginal distribution and the corner values for $F_{XY}$. Determine $P(X + Y > 2)$ and $P(X \ge Y)$. Answer npr08_06 Data are in X, Y, P jcalc Enter JOINT PROBABILITIES (as on the plane) P Enter row matrix of VALUES of X X Enter row matrix of VALUES of Y Y Use array operations on matrices X, Y, PX, PY, t, u, and P disp([X;PX]') -2.3000 0.2300 -0.7000 0.1700 1.1000 0.2000 3.9000 0.2020 5.1000 0.1980 disp([Y;PY]') 1.3000 0.2980 2.5000 0.3020 4.1000 0.1900 5.3000 0.2100 jddbn Enter joint probability matrix (as on the plane) P To view joint distribution function, call for FXY disp(FXY) 0.2300 0.4000 0.6000 0.8020 1.0000 0.1817 0.3160 0.4740 0.6361 0.7900 0.1380 0.2400 0.3600 0.4860 0.6000 0.0667 0.1160 0.1740 0.2391 0.2980 P1 = total((t+u>2).*P) P1 = 0.7163 P2 = total((t>=u).*P) P2 = 0.2799 Exercise $7$ The pair $\{X, Y\}$ has the joint distribution (in m-file npr08_07.m): $P(X = i, Y = u)$ t = -3.1 -0.5 1.2 2.4 3.7 4.9 u = 7.5 0.0090 0.0396 0.0594 0.0216 0.0440 0.0203 4.1 0.0495 0 0.1089 0.0528 0.0363 0.0231 -2.0 0.0405 0.1320 0.0891 0.0324 0.0297 0.0189 -3.8 0.0510 0.0484 0.0726 0.0132 0 0.0077 Determine the marginal distributions and the corner values for $F_{XY}$. Determine $P(1 \le X \le 4, Y > 4)$ and $P(|X - Y| \le 2)$. Answer npr08_07 Data are in X, Y, P jcalc Enter JOINT PROBABILITIES (as on the plane) P Enter row matrix of VALUES of X X Enter row matrix of VALUES of Y Y Use array operations on matrices X, Y, PX, PY, t, u, and P disp([X;PX]') -3.1000 0.1500 -0.5000 0.2200 1.2000 0.3300 2.4000 0.1200 3.7000 0.1100 4.9000 0.0700 disp([Y;PY]') -3.8000 0.1929 -2.0000 0.3426 4.1000 0.2706 7.5000 0.1939 jddbn Enter joint probability matrix (as on the plane) P To view joint distribution function, call for FXY disp(FXY) 0.1500 0.3700 0.7000 0.8200 0.9300 1.0000 0.1410 0.3214 0.5920 0.6904 0.7564 0.8061 0.0915 0.2719 0.4336 0.4792 0.5089 0.5355 0.0510 0.0994 0.1720 0.1852 0.1852 0.1929 M = (1<=t)&(t<=4)&(u>4); P1 = total(M.*P) P1 = 0.3230 P2 = total((abs(t-u)<=2).*P) P2 = 0.3357 Exercise $8$ The pair $\{X, Y\}$ has the joint distribution (in m-file npr08_08.m): $P(X = t, Y = u)$ t = 1 3 5 7 9 11 13 15 17 19 u = 12 0.0156 0.0191 0.0081 0.0035 0.0091 0.0070 0.0098 0.0056 0.0091 0.0049 10 0.0064 0.0204 0.0108 0.0040 0.0054 0.0080 0.0112 0.0064 0.0104 0.0056 9 0.0196 0.0256 0.0126 0.0060 0.0156 0.0120 0.0168 0.0096 0.0056 0.0084 5 0.0112 0.0182 0.0108 0.0070 0.0182 0.0140 0.0196 0.0012 0.0182 0.0038 3 0.0060 0.0260 0.0162 0.0050 0.0160 0.0200 0.0280 0.0060 0.0160 0.0040 -1 0.0096 0.0056 0.0072 0.0060 0.0256 0.0120 0.0268 0.0096 0.0256 0.0084 -3 0.0044 0.0134 0.0180 0.0140 0.0234 0.0180 0.0252 0.0244 0.0234 0.0126 -5 0.0072 0.0017 0.0063 0.0045 0.0167 0.0090 0.0026 0.0172 0.0217 0.0223 Determine the marginal distributions. Determine $F_{XY} (10, 6)$ and $P(X > Y)$. 
Answer npr08_08 Data are in X, Y, P jcalc - - - - - - - - - Use array operations on matrices X, Y, PX, PY, t, u, and P disp([X;PX]') 1.0000 0.0800 3.0000 0.1300 5.0000 0.0900 7.0000 0.0500 9.0000 0.1300 11.0000 0.1000 13.0000 0.1400 15.0000 0.0800 17.0000 0.1300 19.0000 0.0700 disp([Y;PY]') -5.0000 0.1092 -3.0000 0.1768 -1.0000 0.1364 3.0000 0.1432 5.0000 0.1222 9.0000 0.1318 10.0000 0.0886 12.0000 0.0918 F = total(((t<=10)&(u<=6)).*P) F = 0.2982 P = total((t>u).*P) P = 0.7390 Exercise $9$ Data were kept on the effect of training time on the time to perform a job on a production line. $X$ is the amount of training, in hours, and $Y$ is the time to perform the task, in minutes. The data are as follows (in m-file npr08_09.m): $P(X = t, Y = u)$ t = 1 1.5 2 2.5 3 u = 5 0.039 0.011 0.005 0.001 0.001 4 0.065 0.070 0.050 0.015 0.010 3 0.031 0.061 0.137 0.051 0.033 2 0.012 0.049 0.163 0.058 0.039 1 0.003 0.009 0.045 0.025 0.017 Determine the marginal distributions. Determine $F_{XY}(2, 3)$ and $P(Y/X \ge 1.25)$. Answer npr08_09 Data are in X, Y, P jcalc - - - - - - - - - - - - Use array operations on matrices X, Y, PX, PY, t, u, and P disp([X;PX]') 1.0000 0.1500 1.5000 0.2000 2.0000 0.4000 2.5000 0.1500 3.0000 0.1000 disp([Y;PY]') 1.0000 0.0990 2.0000 0.3210 3.0000 0.3130 4.0000 0.2100 5.0000 0.0570 F = total(((t<=2)&(u<=3)).*P) F = 0.5100 P = total((u./t>=1.25).*P) P = 0.5570 For the joint densities in Exercises 10-22 below 1. Sketch the region of definition and determine analytically the marginal density functions $f_X$ and $f_Y$. 2. Use a discrete approximation to plot the marginal density $f_X$ and the marginal distribution function $F_X$. 3. Calculate analytically the indicated probabilities. 4. Determine by discrete approximation the indicated probabilities. Exercise $10$ $f_{XY}(t, u) = 1$ for $0 \le t \le 1$, $0 \le u \le 2(1 - t)$. $P(X > 1/2, Y > 1), P(0 \le X \le 1/2, Y > 1/2), P(Y \le X)$ Answer Region is triangle with vertices (0, 0), (1, 0), (0, 2). $f_{X} (t) = \int_{0}^{2(1-t)} du = 2(1 - t)$, $0 \le t \le 1$ $f_{Y} (u) = \int_{0}^{1 - u/2} dt = 1 - u/2$, $0 \le u \le 2$ $M1 = \{(t, u):t > 1/2, u> 1\}$ lies outside the trianlge $P((X, Y) \in M1) = 0$ $M2 = \{(t, u): 0 \le t \le 1/2, u > 1/2\}$ has area in the triangle = 1/2 $M3$ = the region in the triangle under $u = t$, which has area 1/3 tuappr Enter matrix [a b] of X-range endpoints [0 1] Enter matrix [c d] of Y-range endpoints [0 2] Enter number of X approximation points 200 Enter number of Y approximation points 400 Enter expression for joint density (t<=1)&(u<=2*(1-t)) Use array operations on X, Y, PX, PY, t, u, and P fx = PX/dx; FX = cumsum(PX); plot(X,fx,X,FX) % Figure not reproduced M1 = (t>0.5)&(u>1); P1 = total(M1.*P) P1 = 0 % Theoretical = 0 M2 = (t<=0.5)&(u>0.5); P2 = total(M2.*P) P2 = 0.5000 % Theoretical = 1/2 P3 = total((u<=t).*P) P3 = 0.3350 % Theoretical = 1/3 Exercise $11$ $f_{XY} (t, u) = 1/2$ on the square with vertices at (1, 0), (2, 1), (1, 2), (0, 1). 
$P(X > 1, Y > 1), P(X \le 1/2, 1 < Y), P(Y \le X)$ Answer The region is bounded by lines $u = 1 + t$, $u = 1 - t$, $u = 3 - t$, and $u = t - 1$ $f_X (t) = I_{[0,1]} (t)\, 0.5 \int_{1 - t}^{1 + t} du + I_{(1, 2]} (t)\, 0.5 \int_{t - 1}^{3 - t} du = I_{[0,1]} (t)\, t + I_{(1, 2]} (t) (2 - t) = f_Y(t)$ by symmetry $M1 = \{(t, u):t > 1, u> 1\}$ has area in the square = 1/2, so $PM1 = 1/4$ $M2 = \{(t, u): t \le 1/2, u > 1\}$ has area in the square = 1/8, so $PM2 = 1/16$ $M3 = \{(t, u): u \le t\}$ has area in the square = 1, so $PM3 = 1/2$ tuappr Enter matrix [a b] of X-range endpoints [0 2] Enter matrix [c d] of Y-range endpoints [0 2] Enter number of X approximation points 200 Enter number of Y approximation points 200 Enter expression for joint density 0.5*(u<=min(1+t,3-t))& ... (u>=max(1-t,t-1)) Use array operations on X, Y, PX, PY, t, u, and P fx = PX/dx; FX = cumsum(PX); plot(X,fx,X,FX) % Plot not shown M1 = (t>1)&(u>1); PM1 = total(M1.*P) PM1 = 0.2501 % Theoretical = 1/4 M2 = (t<=1/2)&(u>1); PM2 = total(M2.*P) PM2 = 0.0631 % Theoretical = 1/16 = 0.0625 M3 = u<=t; PM3 = total(M3.*P) PM3 = 0.5023 % Theoretical = 1/2 Exercise $12$ $f_{XY} (t, u) = 4t(1 - u)$ for $0 \le t \le 1$, $0 \le u \le 1$. $P(1/2 < X < 3/4, Y > 1/2)$, $P(X \le 1/2, Y > 1/2)$, $P(Y \le X)$ Answer Region is the unit square, $f_X (t) = \int_{0}^{1} 4t(1 - u)\, du = 2t$, $0 \le t \le 1$ $f_Y(u) = \int_{0}^{1} 4t(1 - u)\, dt = 2(1 - u)$, $0 \le u \le 1$ $P1 = \int_{1/2}^{3/4} \int_{1/2}^{1} 4t (1 - u)\, du\, dt = 5/64$ $P2 = \int_{0}^{1/2} \int_{1/2}^{1} 4t(1 - u)\, du\, dt = 1/16$ $P3 = \int_{0}^{1} \int_{0}^{t} 4t(1 - u)\, du\, dt = 5/6$ tuappr Enter matrix [a b] of X-range endpoints [0 1] Enter matrix [c d] of Y-range endpoints [0 1] Enter number of X approximation points 200 Enter number of Y approximation points 200 Enter expression for joint density 4*t.*(1 - u) Use array operations on X, Y, PX, PY, t, u, and P fx = PX/dx; FX = cumsum(PX); plot(X,fx,X,FX) % Plot not shown M1 = (1/2<t)&(t<3/4)&(u>1/2); P1 = total(M1.*P) P1 = 0.0781 % Theoretical = 5/64 = 0.0781 M2 = (t<=1/2)&(u>1/2); P2 = total(M2.*P) P2 = 0.0625 % Theoretical = 1/16 = 0.0625 M3 = (u<=t); P3 = total(M3.*P) P3 = 0.8350 % Theoretical = 5/6 = 0.8333 Exercise $13$ $f_{XY} (t, u) = \dfrac{1}{8} (t + u)$ for $0 \le t \le 2$, $0 \le u \le 2$.
$P(X > 1/2, Y > 1/2), P(0 \le X \le 1, Y > 1), P(Y \le X)$ Answer Region is the square $0 \le t \le 2$, $0 \le u \le 2$ $f_X (t) = \dfrac{1}{8} \int_{0}^{2} (t + u) = \dfrac{1}{4} ( t + 1) = f_Y(t)$, $0 \le t \le 2$ $P1 = \int_{1/2}^{2} \int_{1/2}^{2} (t + u) dudt = 45/64$ $P2 = \int_{0}^{1} \int_{1}^{2} (t + u) du dt = 1/4$ $P3 = \int_{0}^{2} \int_{0}^{1} (t + u) dudt = 1/2$ tuappr Enter matrix [a b] of X-range endpoints [0 2] Enter matrix [c d] of Y-range endpoints [0 2] Enter number of X approximation points 200 Enter number of Y approximation points 200 Enter expression for joint density (1/8)*(t+u) Use array operations on X, Y, PX, PY, t, u, and P fx = PX/dx; FX = cumsum(PX); plot(X,fx,X,FX) M1 = (t>1/2)&(u>1/2); P1 = total(M1.*P) P1 = 0.7031 % Theoretical = 45/64 = 0.7031 M2 = (t<=1)&(u>1); P2 = total(M2.*P) P2 = 0.2500 % Theoretical = 1/4 M3 = u<=t; P3 = total(M3.*P) P3 = 0.5025 % Theoretical = 1/2 Exercise $14$ $f_{XY}(t, u) = 4ue^{-2t}$ for $0 \le t, 0 \le u \le 1$ $P(X \le 1, Y > 1), P(X > 0, 1/2 < Y < 3/4), P(X < Y)$ Answer Region is strip by $t = 0, u = 0, u = 1$ $f_X(t) = 2e^{-2t}$, $0 \le t$, $f_Y(u) = 2u$, $0 \le u \le 1$, $f_{XY} = f_X f_Y$ $P1 = 0$, $P2 = \int_{0.5}^{\infty} 2e^{-2t} dt \int_{1/2}^{3/4} 2udu = e^{-1} 5/16$ $P3 = 4 \int_{0}^{1} \int_{t}^{1} ue^{-2t} dudt = \dfrac{3}{2} e^{-2} + \dfrac{1}{2} = 0.7030$ tuappr Enter matrix [a b] of X-range endpoints [0 3] Enter matrix [c d] of Y-range endpoints [0 1] Enter number of X approximation points 400 Enter number of Y approximation points 200 Enter expression for joint density 4*u.*exp(-2*t) Use array operations on X, Y, PX, PY, t, u, and P M2 = (t > 0.5)&(u > 0.5)&(u<3/4); p2 = total(M2.*P) p2 = 0.1139 % Theoretical = (5/16)exp(-1) = 0.1150 p3 = total((t<u).*P) p3 = 0.7047 % Theoretical = 0.7030 Exercise $15$ $f_{XY} (t, u) = \dfrac{3}{88} (2t + 3u^2)$ for $0 \le t \le 2$, $0 \le u \le 1 + t$. $F_{XY} (1, 1)$, $P(X \le 1, Y > 1)$, $P(|X - Y| < 1)$ Answer Region bounded by $t = 0$, $t = 2$, $u = 0$, $u = 1 + t$ $f_X (t) = \dfrac{3}{88} \int_{0}^{1 + t} (2t + 3u^2) du = \dfrac{3}{88}(1 + t)(1 + 4t + t^2) = \dfrac{3}{88} ( 1 + 5t + 5t^2 + t^3)$, $0 \le t \le 2$ $f_Y(u) = I_{[0,1]} (u) \dfrac{3}{88} \int_{0}^{2} (2t + 3u^2) dt + I_{(1, 3]} (u) \dfrac{3}{88} \int_{u - 1}^{2} (2t + 3u^2) dt =$ $I_{[0,1]} (u) \dfrac{3}{88} (6u^2 + 4) + I_{(1,3]} (t) \dfrac{3}{88} (3 + 2u + 8u^2 - 3u^3)$ $F_{XY}(1, 1) = \int_{0}^{1} \int_{0}^{1} f_{XY} (t, u) dudt = 3/44$ $P1 = \int_{0}^{1} \int_{1}^{1 + t} f_{XY} (t, u)dudt = 41/352$ $P2 = \int_{0}^{1} \int_{1}^{1 + t} f_{XY} (t, u) dudt = 329/352$ tuappr Enter matrix [a b] of X-range endpoints [0 2] Enter matrix [c d] of Y-range endpoints [0 3] Enter number of X approximation points 200 Enter number of Y approximation points 300 Enter expression for joint density (3/88)*(2*t+3*u.^2).*(u<=1+t) Use array operations on X, Y, PX, PY, t, u, and P fx = PX/dx; FX = cumsum(PX); plot(X,fx,X,FX) MF = (t<=1)&(u<=1); F = total(MF.*P) F = 0.0681 % Theoretical = 3/44 = 0.0682 M1 = (t<=1)&(u>1); P1 = total(M1.*P) P1 = 0.1172 % Theoretical = 41/352 = 0.1165 M2 = abs(t-u)<1; P2 = total(M2.*P) P2 = 0.9297 % Theoretical = 329/352 = 0.9347 Exercise $16$ $f_{XY} (t, u) = 12t^2u$ on the parallelogram with vertices (-1, 0), (0, 0), (1, 1), (0, 1). 
$P(X \le 1/2, Y > 0), P(X < 1/2, Y \le 1/2), P(Y \ge 1/2)$ Answer Region bounded by $u = 0$, $u = t$, $u = 1$, $u = t + 1$ $f_X (t) = I_{[-1, 0]} (t) 12 \int_{0}^{t + 1} t^2 u du + I_{(0, 1]} (t) 12 \int_{t}^{1} t^2 u du = I_{[-1, 0]} (t) 6t^2 (t + 1)^2 + I_{(0, 1]}(t) 6t^2(1 - t^2)$ $f_Y(u) = 12\int_{u - 1}^{t} t^2 udu + 12u^3 - 12u^2 + 4u$, $0 \le u \le 1$ $P1 = 1 - 12 \int_{1/2}^{1} \int_{t}^{1} t^2 ududt = 33/80$, $P2 = 12 \int_{0}^{1/2} \int_{u - 1}^{u} t^2 udtdu = 3/16$ $P3 = 1 - P2 = 13/16$ tuappr Enter matrix [a b] of X-range endpoints [-1 1] Enter matrix [c d] of Y-range endpoints [0 1] Enter number of X approximation points 400 Enter number of Y approximation points 200 Enter expression for joint density 12*u.*t.^2.*((u<=t+1)&(u>=t)) Use array operations on X, Y, PX, PY, t, u, and P p1 = total((t<=1/2).*P) p1 = 0.4098 % Theoretical = 33/80 = 0.4125 M2 = (t<1/2)&(u<=1/2); p2 = total(M2.*P) p2 = 0.1856 % Theoretical = 3/16 = 0.1875 P3 = total((u>=1/2).*P) P3 = 0.8144 % Theoretical = 13/16 = 0.8125 Exercise $17$ $f_{XY} (t, u) = \dfrac{24}{11} tu$ for $0 \le t \le 2$, $0 \le u \le \text{min}\ \{1, 2 - t\}$ $P(X \le 1, Y \le 1), P(X > 1), P(X < Y)$ Answer Region is bounded by $t = 0, u = 0, u = 2, u = 2 - t$ $f_X (t) = I_{[0, 1]} (t) \dfrac{24}{11} \int_{0}^{1} tudu + I_{(1, 2]} (t) \dfrac{24}{11} \int_{0}^{2 - t} tudu =$ $I_{[0, 1]} (t) \dfrac{12}{11} t + I_{(1, 2]} (t) \dfrac{12}{11} t(2 - t)^2$ $f_Y (u) = \dfrac{24}{11} \int_{0}^{2 - u} tudt = \dfrac{12}{11} u(u - 2)^2$, $0 \le u \le 1$ $P1 = \dfrac{24}{11} \int_{0}^{1} \int_{0}^{1} tududt = 6/11$ $P2 = \dfrac{24}{11} \int_{1}^{2} \int_{0}^{2 - t} tududt = 5/11$ $P3 = \dfrac{24}{11} \int_{0}^{1} \int_{t}^{1} tududt = 3/11$ tuappr Enter matrix [a b] of X-range endpoints [0 2] Enter matrix [c d] of Y-range endpoints [0 1] Enter number of X approximation points 400 Enter number of Y approximation points 200 Enter expression for joint density (24/11)*t.*u.*(u<=2-t) Use array operations on X, Y, PX, PY, t, u, and P M1 = (t<=1)&(u<=1); P1 = total(M1.*P) P1 = 0.5447 % Theoretical = 6/11 = 0.5455 P2 = total((t>1).*P) P2 = 0.4553 % Theoretical = 5/11 = 0.4545 P3 = total((t<u).*P) P3 = 0.2705 % Theoretical = 3/11 = 0.2727 Exercise $18$ $f_{XY} (t, u) = \dfrac{3}{23} (t + 2u)$ for $0 \le t \le 2$, $0 \le u \le \text{max}\ \{2 - t, t\}$ $P(X \ge 1, Y \ge 1), P(Y \le 1), P(Y \le X)$ Answer Region is bounded by $t = 0, t = 2, u = 0, u = 2 - t$ $(0 \le t \le 1)$, $u = t (1 < t \le 2)$ $f_X(t) = I_{[0,1]} (t) \dfrac{3}{23} \int_{0}^{2 - t} (t + 2u) du + I_{(1, 2]} (t) \dfrac{3}{23} \int_{0}^{t} (t + 2u) du = I_{[0, 1]} (t) \dfrac{6}{23} (2 - t) + I_{(1, 2]} (t) \dfrac{6}{23}t^2$ $f_Y(u) = I_{[0, 1]} (u) \dfrac{3}{23} \int_{0}^{2} (t + 2u) du + I_{(1, 2]} (u) [\dfrac{3}{23} \int_{0}^{2 - u} (t + 2u) dt + \dfrac{3}{23} \int_{u}^{2} (t + 2u) dt]=$ $I_{[0,1]} (u) \dfrac{6}{23} (2u + 1) + I_{(1, 2]} (u) \dfrac{3}{23} (4 + 6u - 4u^2)$ $P1 = \dfrac{3}{23} \int_{1}^{2} \int_{1}^{t} (t + 2u) du dt = 13/46$, $P2 = \dfrac{3}{23} \int_{0}^{2} \int_{0}^{1} (t + 2u) du dt = 12/23$ $P3 = \dfrac{3}{23} \int_{0}^{2} \int_{0}^{t} (t + 2u) dudt = 16/23$ tuappr Enter matrix [a b] of X-range endpoints [0 2] Enter matrix [c d] of Y-range endpoints [0 2] Enter number of X approximation points 200 Enter number of Y approximation points 200 Enter expression for joint density (3/23)*(t+2*u).*(u<=max(2-t,t)) Use array operations on X, Y, PX, PY, t, u, and P M1 = (t>=1)&(u>=1); P1 = total(M1.*P) P1 = 0.2841 13/46 % Theoretical = 13/46 = 0.2826 P2 = total((u<=1).*P) P2 = 
0.5190 % Theoretical = 12/23 = 0.5217 P3 = total((u<=t).*P) P3 = 0.6959 % Theoretical = 16/23 = 0.6957 Exercise $19$ $f_{XY} (t, u) = \dfrac{12}{179} (3t^2 + u)$, for $0 \le t \le 2$, $0 \le u \le \text{min } \{1 + t, 2\}$ $P(X \ge 1, Y \ge 1), P(X \le 1, Y \le 1), P(Y < X)$ Answer Region has two parts: (1) $0 \le t \le 1, 0 \le u \le 2$ (2) $1 < t \le 2, 0 \le u \le 3 - t$ $f_X (t) = I_{[0, 1]} (t) \dfrac{12}{179} \int_{0}^{2} (3t^2 + u) du + I_{(1, 2]} (t) \dfrac{12}{179} \int_{0}^{3 - t} (3t^2 + u) du =$ $I_{[0, 1]} (t) \dfrac{24}{179} (3t^2 + 1) + I_{(1, 2]} (t) \dfrac{6}{179} (9 - 6t + 19t^2 - 6t^3)$ $f_Y(u) = I_{[0, 1]} (u) \dfrac{12}{179} \int_{0}^{2}(3t^2 + u) dt + I_{(1, 2]} (u) \dfrac{12}{179} \int_{0}^{3 - u} (3t^2 + u) dt =$ $I_{[0, 1]} (u) \dfrac{24}{179} (4 + u) + I_{(1, 2]} (u) \dfrac{12}{179} (27 - 24u + 8u^2 - u^3)$ $P1 = \dfrac{12}{179} \int{1}^{2} \int_{1}^{3 - t} (3t^2 + u) du dt = 41/179$ $P2 = \dfrac{12}{179} \int_{0}^{1} \int_{0}^{1} (3t^2 + u) dudt = 18/179$ $P3 = \dfrac{12}{179} \int_{0}^{3/2} \int_{0}^{t} (3t^2 + u) dudt + \dfrac{12}{179} \int_{3/2}^{2} \int_{0}^{3 - t} (3t^2 + u) dudt = 1001/1432$ tuappr Enter matrix [a b] of X-range endpoints [0 2] Enter matrix [c d] of Y-range endpoints [0 2] Enter number of X approximation points 200 Enter number of Y approximation points 200 Enter expression for joint density (12/179)*(3*t.^2+u).* ... (u<=min(2,3-t)) Use array operations on X, Y, PX, PY, t, u, and P fx = PX/dx; FX = cumsum(PX); plot(X,fx,X,FX) M1 = (t>=1)&(u>=1); P1 = total(M1.*P) P1 = 2312 % Theoretical = 41/179 = 0.2291 M2 = (t<=1)&(u<=1); P2 = total(M2.*P) P2 = 0.1003 % Theoretical = 18/179 = 0.1006 M3 = u<=min(t,3-t); P3 = total(M3.*P) P3 = 0.7003 % Theoretical = 1001/1432 = 0.6990 Exercise $20$ $f_{XY} (t, u) = \dfrac{12}{227} (3t + 2tu)$ for $0 \le t \le 2$, $0 \le u \le \text{min} \{1 + t, 2\}$ $P(X \le 1/2, Y \le 3/2), P(X \le 1.5, Y > 1), P(Y < X)$ Answer Region is in two parts: 1. $0 \le t \le 1$, $0 \le u \le 1 + t$ 2. $1 < t \le 2$, $0 \le u \le 2$ $f_X(t) = I_{[0,1]} (t) \int_{0}^{1+t} f_{XY} (t, u) du + I_{(1, 2]} (t) \int_{0}^{2} f_{XY} (t, u) du =$ $I_{[0, 1]} (t) \dfrac{12}{227} (t^3 + 5t^2 + 4t) + I_{(1, 2]} (t) \dfrac{120}{227} t$ $f_Y(u) = I_{[0, 1]} (u) \int_{0}^{2} f_{XY} (t, u) dt + I_{(1, 2]} (u) \int_{u - 1}^{2} f_{XY} (t, u) dt =$ $I_{[0, 1]} (u) \dfrac{24}{227} (2u + 3) + I_{(1, 2]} (u) \dfrac{6}{227} (2u + 3) (3 + 2u - u^2)$ $= I_{[0, 1]} (u) \dfrac{24}{227} (2u + 3) + I_{(1, 2]} (u) \dfrac{6}{227} (9 + 12 u + u^2 - 2u^3)$ $P1 = \dfrac{12}{227} \int_{0}^{1/2} \int_{0}^{1 + t} (3t + 2tu) du dt = 139/3632$ $P2 = \dfrac{12}{227} \int_{0}^{1} \int_{1}^{1 + t} (3t + 2tu) dudt + \dfrac{12}{227} \int_{1}^{3/2} \int_{1}^{2} (3t + 2tu) du dt = 68/227$ $P3 = \dfrac{12}{227} \int_{0}^{2} \int_{1}^{t} (3t + 2tu) dudt = 144/227$ tuappr Enter matrix [a b] of X-range endpoints [0 2] Enter matrix [c d] of Y-range endpoints [0 2] Enter number of X approximation points 200 Enter number of Y approximation points 200 Enter expression for joint density (12/227)*(3*t+2*t.*u).* ... 
(u<=min(1+t,2)) Use array operations on X, Y, PX, PY, t, u, and P M1 = (t<=1/2)&(u<=3/2); P1 = total(M1.*P) P1 = 0.0384 % Theoretical = 139/3632 = 0.0383 M2 = (t<=3/2)&(u>1); P2 = total(M2.*P) P2 = 0.3001 % Theoretical = 68/227 = 0.2996 M3 = u<t; P3 = total(M3.*P) P3 = 0.6308 % Theoretical = 144/227 = 0.6344 Exercise $21$ $f_{XY} (t, u) = \dfrac{2}{13} (t + 2u)$ for $0 \le t \le 2$, $0 \le u \le \text{min}\ \{2t, 3 - t\}$ $P(X < 1), P(X \ge 1, Y \le 1), P(Y \le X/2)$ Answer Region bounded by $t = 2, u = 2t$ $(0 \le t \le 1)$, $3 - t$ $(1 \le t \le 2)$ $f_X(t) = I_{[0, 1]} (t) \dfrac{2}{13} \int_{0}^{2t} (t + 2u) du + I_{(1, 2]} (t) \dfrac{2}{13} \int_{0}^{3 - t} (t + 2u) du = I_{[0, 1]} (t) \dfrac{12}{13} t^2 + I_{(1, 2]} (t) \dfrac{6}{13} (3 - t)$ $f_Y (u) = I_{[0, 1]} (u) \dfrac{2}{13} \int_{u/2}^{2} (t + 2u) dt + I_{(1, 2]} (u) \dfrac{2}{13} \int_{u/2}^{3 - u} (t + 2u) dt =$ $I_{[0, 1]} (u) (\dfrac{4}{13} + \dfrac{8}{13}u - \dfrac{9}{52} u^2) + I_{(1, 2]} (u) (\dfrac{9}{13} + \dfrac{6}{13} u - \dfrac{21}{52} u^2)$ $P1 = \int_{0}^{1} \int_{0}^{2t} (t + 2u) dudt = 4/13$ $P2 = \int_{1}^{2} \int_{0}^{1} (t + 2u)dudt = 5/13$ $P3 = \int_{0}^{2} \int_{0}^{u/2} (t + 2u) dudt = 4/13$ tuappr Enter matrix [a b] of X-range endpoints [0 2] Enter matrix [c d] of Y-range endpoints [0 2] Enter number of X approximation points 400 Enter number of Y approximation points 400 Enter expression for joint density (2/13)*(t+2*u).*(u<=min(2*t,3-t)) Use array operations on X, Y, PX, PY, t, u, and P P1 = total((t<1).*P) P1 = 0.3076 % Theoretical = 4/13 = 0.3077 M2 = (t>=1)&(u<=1); P2 = total(M2.*P) P2 = 0.3844 % Theoretical = 5/13 = 0.3846 P3 = total((u<=t/2).*P) P3 = 0.3076 % Theoretical = 4/13 = 0.3077 Exercise $22$ $f_{XY} (t, u) = I_{[0, 1]} (t) \dfrac{3}{8} (t^2 + 2u) + I_{(1, 2]} (t) \dfrac{9}{14} t^2u^2$ for $0 \le u \le 1$. $P(1/2 \le X \le 3/2, Y \le 1/2)$ Answer Region is rectangle bounded by $t = 0$, $t = 2$, $u = 0$, $u = 1$ $f_{XY} (t, u) = I_{[0, 1]} (t) \dfrac{3}{8} (t^2 + 2u) + I_{(1, 2]} (t) \dfrac{9}{14} t^2 u^2$, $0 \le u \le 1$ $f_X (t) = I_{[0, 1]} (t) \dfrac{3}{8} \int_{0}^{1} (t^2 + 2u) du + I_{(1, 2]} (t) \dfrac{9}{14} \int_{0}^{1} t^2 u^2 du = I_{[0,1]} (t) \dfrac{3}{8} (t^2 + 1) + I_{(1, 2]} (t) \dfrac{3}{14} t^2$ $f_Y(u) = \dfrac{3}{8} \int_{0}^{1} (t^2 + 2u0 dt + \dfrac{9}{14} \int_{1}^{2} t^2 u^2 dt = \dfrac{1}{8} + \dfrac{3}{4} u + \dfrac{3}{2} u^2$ $0 \le u \le 1$ $P1 = \dfrac{3}{8} \int_{1/2}^{1} \int_{0}^{1/2} (t^2 + 2u) dudt + \dfrac{9}{14} \int_{1}^{3/2} \int_{0}^{1/2} t^2 u^2 dudt = 55/448$ tuappr Enter matrix [a b] of X-range endpoints [0 2] Enter matrix [c d] of Y-range endpoints [0 1] Enter number of X approximation points 400 Enter number of Y approximation points 200 Enter expression for joint density (3/8)*(t.^2+2*u).*(t<=1) ... + (9/14)*(t.^2.*u.^2).*(t > 1) Use array operations on X, Y, PX, PY, t, u, and P M = (1/2<=t)&(t<=3/2)&(u<=1/2); P = total(M.*P) P = 0.1228 % Theoretical = 55/448 = 0.1228
The concept of independence for classes of events is developed in terms of a product rule. In this unit, we extend the concept to classes of random variables. Independent pairs Recall that for a random variable $X$, the inverse image $X^{-1} (M)$ (i.e., the set of all outcomes $\omega \in \Omega$ which are mapped into $M$ by $X$) is an event for each reasonable subset $M$ on the real line. Similarly, the inverse image $Y^{-1}(N)$ is an event determined by random variable $Y$ for each reasonable set $N$. We extend the notion of independence to a pair of random variables by requiring independence of the events they determine. More precisely, Definition A pair $\{X, Y\}$ of random variables is (stochastically) independent iff each pair of events $\{X^{-1} (M), Y^{-1} (N)\}$ is independent. This condition may be stated in terms of the product rule $P(X \in M, Y \in N) = P(X \in M) P(Y \in N)$ for all (Borel) sets $M, N$ Independence implies \begin{align*} F_{XY} (t, u) &= P(X \in (-\infty, t], Y \in (-\infty, u]) \\ &= P(X \in (-\infty, t]) P(Y \in (-\infty, u]) \\ &= F_X (t) F_Y (u) \quad \forall t, u \end{align*} Note that the product rule on the distribution function is equivalent to the condition that the product rule holds for the inverse images of a special class of sets $\{M, N\}$ of the form $M = (-\infty, t]$ and $N = (-\infty, u]$. An important theorem from measure theory ensures that if the product rule holds for this special class it holds for the general class of $\{M, N\}$. Thus we may assert The pair $\{X, Y\}$ is independent iff the following product rule holds $F_{XY} (t, u) = F_X (t) F_Y (u) \quad \forall t, u$ Example 9.1.1: an independent pair Suppose $F_{XY} (t, u) = (1 - e^{-\alpha t}) (1 - e^{-\beta u})$, $0 \le t$, $0 \le u$. Taking limits shows $F_X (t) = \lim_{u \to \infty} F_{XY} (t, u) = 1 - e^{-\alpha t}$ and $F_Y(u) = \lim_{t \to \infty} F_{XY} (t, u) = 1 - e^{-\beta u}$ so that the product rule $F_{XY} (t, u) = F_X(t) F_Y(u)$ holds. The pair $\{X, Y\}$ is therefore independent. If there is a joint density function, then the relationship to the joint distribution function makes it clear that the pair is independent iff the product rule holds for the density. That is, the pair is independent iff $f_{XY} (t, u) = f_X (t) f_Y (u)$ $\forall t, u$ Example 9.1.2: joint uniform distribution on a rectangle Suppose the joint probability mass distribution induced by the pair $\{X, Y\}$ is uniform on a rectangle with sides $I_1 = [a, b]$ and $I_2 = [c, d]$. Since the area is $(b - a) (d - c)$, the constant value of $f_{XY}$ is $1/(b - a) (d - c)$. Simple integration gives $f_X(t) = \dfrac{1}{(b - a) (d - c)} \int_{c}^{d} du = \dfrac{1}{b - a} \quad a \le t \le b$ and $f_Y(u) = \dfrac{1}{(b - a)(d - c)} \int_{a}^{b} dt = \dfrac{1}{d - c} \quad c \le u \le d$ Thus it follows that $X$ is uniform on $[a, b]$, $Y$ is uniform on $[c, d]$, and $f_{XY} (t, u) = f_X(t) f_Y(u)$ for all $t, u$, so that the pair $\{X, Y\}$ is independent. The converse is also true: if the pair is independent with $X$ uniform on $[a, b]$ and $Y$ uniform on $[c, d]$, the pair has uniform joint distribution on $I_1 \times I_2$. The Joint Mass Distribution It should be apparent that the independence condition puts restrictions on the character of the joint mass distribution on the plane. In order to describe this more succinctly, we employ the following terminology.
Definition If $M$ is a subset of the horizontal axis and $N$ is a subset of the vertical axis, then the Cartesian product $M \times N$ is the (generalized) rectangle consisting of those points $(t, u)$ on the plane such that $t \in M$ and $u \in N$. Example 9.1.3: Rectangle with interval sides The rectangle in Example 9.1.2 is the Cartesian product $I_1 \times I_2$, consisting of all those points $(t, u)$ such that $a \le t \le b$ and $c \le u \le d$ (i.e. $t \in I_1$ and $u \in I_2$). Figure 9.1.1. Joint distribution for an independent pair of random variables. We restate the product rule for independence in terms of Cartesian product sets. $P(X \in M, Y \in N) = P((X, Y) \in M \times N) = P(X \in M) P(Y \in N)$ Reference to Figure 9.1.1 illustrates the basic pattern. If $M, N$ are intervals on the horizontal and vertical axes, respectively, then the rectangle $M \times N$ is the intersection of the vertical strip meeting the horizontal axis in $M$ with the horizontal strip meeting the vertical axis in $N$. The probability $X \in M$ is the portion of the joint probability mass in the vertical strip; the probability $Y \in N$ is the part of the joint probability in the horizontal strip. The probability in the rectangle is the product of these marginal probabilities. This suggests a useful test for nonindependence which we call the rectangle test. We illustrate with a simple example. Figure 9.1.2. Rectangle test for nonindependence of a pair of random variables. Example 9.1.4: The rectangle test for nonindependence Suppose probability mass is uniformly distributed over the square with vertices at (1,0), (2,1), (1,2), (0,1). It is evident from Figure 9.1.2 that a value of $X$ determines the possible values of $Y$ and vice versa, so that we would not expect independence of the pair. To establish this, consider the small rectangle $M \times N$ shown on the figure. There is no probability mass in the region. Yet $P(X \in M) > 0$ and $P(Y \in N) > 0$, so that $P(X \in M) P(Y \in N) > 0$, but $P((X, Y) \in M \times N) = 0$. The product rule fails; hence the pair cannot be stochastically independent. Remark. There are nonindependent cases for which this test does not work. And it does not provide a test for independence. In spite of these limitations, it is frequently useful. Because of the information contained in the independence condition, in many cases the complete joint and marginal distributions may be obtained with appropriate partial information. The following is a simple example. Example 9.1.5: Joint and marginal probabilities from partial information Suppose the pair $\{X, Y\}$ is independent and each has three possible values. The following four items of information are available. $P(X = t_1) = 0.2$, $P(Y = u_1) = 0.3$, $P(X = t_1, Y = u_2) = 0.08$, $P(X = t_2, Y = u_1) = 0.15$ These values are shown in bold type on Figure 9.1.3. A combination of the product rule and the fact that the total probability mass is one are used to calculate each of the marginal and joint probabilities. For example $P(X = t_1) = 0.2$ and $P(X = t_1, Y = u_2) = P(X = t_1) P(Y = u_2) = 0.08$ implies $P(Y = u_2) = 0.4$. Then $P(Y = u_3) = 1 - P(Y = u_1) - P(Y = u_2) = 0.3$. Others are calculated similarly. There is no unique procedure for solution. And it has not seemed useful to develop MATLAB procedures to accomplish this. Figure 9.1.3. Joint and marginal probabilities from partial information.
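The bookkeeping in Example 9.1.5 can be carried out directly in MATLAB (a sketch, not part of the original treatment; the variable names below are arbitrary labels for the three values of each variable):
PX1 = 0.2;  PY1 = 0.3;             % given marginal values
P12 = 0.08; P21 = 0.15;            % given joint values P(X=t1,Y=u2) and P(X=t2,Y=u1)
PY2 = P12/PX1                      % = 0.4, by the product rule
PY3 = 1 - PY1 - PY2                % = 0.3, since the Y marginals sum to one
PX2 = P21/PY1                      % = 0.5, by the product rule
PX3 = 1 - PX1 - PX2                % = 0.3
P = [PX1 PX2 PX3]'*[PY1 PY2 PY3]   % all nine joint probabilities by the product rule
Here the rows of P correspond to the values of $X$ and the columns to the values of $Y$; the arrangement "as on the plane" used by the m-procedures would require the usual flip of row order.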
Example 9.1.6: The joint normal distribution A pair $\{X, Y\}$ has the joint normal distribution iff the joint density is $f_{XY} (t, u) = \dfrac{1}{2\pi \sigma_{X} \sigma_{Y} (1 - \rho^2)^{1/2}} e^{-Q(t,u)/2}$ where $Q(t, u) = \dfrac{1}{1 - \rho^2} [(\dfrac{t - \mu_X}{\sigma_X})^2 - 2 \rho (\dfrac{t - \mu_X}{\sigma_X}) (\dfrac{u - \mu_Y}{\sigma_Y}) + (\dfrac{u - \mu_Y}{\sigma_Y})^2]$ The marginal densities are obtained with the aid of some algebraic tricks to integrate the joint density. The result is that $X$ ~ $N(\mu_X, \sigma_X^2)$ and $Y$ ~ $N(\mu_Y, \sigma_Y^2)$. If the parameter $\rho$ is set to zero, the result is $f_{XY} (t, u) = f_X (t) f_Y(u)$ so that the pair is independent iff $\rho = 0$. The details are left as an exercise for the interested reader. Remark. While it is true that every independent pair of normally distributed random variables is joint normal, not every pair of normally distributed random variables has the joint normal distribution. Example 9.1.7: a normal pair not joint normally distributed We start with the distribution for a joint normal pair and derive a joint distribution for a normal pair which is not joint normal. The function $\varphi (t, u) = \dfrac{1}{2\pi} \text{exp } (-\dfrac{t^2}{2} - \dfrac{u^2}{2})$ is the joint normal density for an independent pair ($\rho = 0$) of standardized normal random variables. Now define the joint density for a pair $\{X, Y\}$ by $f_{XY} (t, u) = 2 \varphi (t, u)$ in the first and third quadrants, and zero elsewhere. Both $X$ ~ $N(0,1)$ and $Y$ ~ $N(0,1)$. However, they cannot be joint normal, since the joint normal distribution is positive for all $(t, u)$. Independent classes Since independence of random variables is independence of the events determined by the random variables, extension to general classes is simple and immediate. Definition A class $\{X_i: i \in J\}$ of random variables is (stochastically) independent iff the product rule holds for every finite subclass of two or more. Remark. The index set $J$ in the definition may be finite or infinite. For a finite class $\{X_i: 1 \le i \le n\}$, independence is equivalent to the product rule $F_{X_1 X_2 \cdot\cdot\cdot X_n} (t_1, t_2, \cdot\cdot\cdot, t_n) = \prod_{i = 1}^{n} F_{X_i} (t_i)$ for all $(t_1, t_2, \cdot\cdot\cdot, t_n)$ Since we may obtain the joint distribution function for any finite subclass by letting the arguments for the others be $\infty$ (i.e., by taking the limits as the appropriate $t_i$ increase without bound), the single product rule suffices to account for all finite subclasses. Absolutely continuous random variables If a class $\{X_i: i \in J\}$ is independent and the individual variables are absolutely continuous (i.e., have densities), then any finite subclass is jointly absolutely continuous and the product rule holds for the densities of such subclasses $f_{X_{i1}X_{i2} \cdot\cdot\cdot X_{im}} (t_{i1}, t_{i2}, \cdot\cdot\cdot, t_{im}) = \prod_{k = 1}^{m} f_{X_{ik}} (t_{ik})$ for all $(t_{i1}, t_{i2}, \cdot\cdot\cdot, t_{im})$ Similarly, if each finite subclass is jointly absolutely continuous, then each individual variable is absolutely continuous and the product rule holds for the densities. Frequently we deal with independent classes in which each random variable has the same marginal distribution. Such classes are referred to as iid classes (an acronym for independent, identically distributed).
Examples are simple random samples from a given population, or the results of repetitive trials with the same distribution on the outcome of each component trial. A Bernoulli sequence is a simple example. Simple random variables Consider a pair $\{X, Y\}$ of simple random variables in canonical form $X = \sum_{i = 1}^{n} t_i I_{A_i}$ $Y = \sum_{j = 1}^{m} u_j I_{B_j}$ Since $A_i = \{X = t_i\}$ and $B_j = \{Y = u_j\}$ the pair $\{X, Y\}$ is independent iff each of the pairs $\{A_i, B_j\}$ is independent. The joint distribution has probability mass at each point $(t_i, u_j)$ in the range of $W = (X, Y)$. Thus at every point on the grid, $P(X = t_i, Y = u_j) = P(X = t_i) P(Y = u_j)$ According to the rectangle test, no gridpoint having one of the $t_i$ or $u_j$ as a coordinate has zero probability mass. The marginal distributions determine the joint distributions. If $X$ has $n$ distinct values and $Y$ has $m$ distinct values, then the $n + m$ marginal probabilities suffice to determine the $m \cdot n$ joint probabilities. Since the marginal probabilities for each variable must add to one, only $(n - 1) + (m - 1) = m + n - 2$ values are needed. Suppose $X$ and $Y$ are in affine form. That is, $X = a_0 + \sum_{i = 1}^{n} a_i I_{E_i}$ $Y = b_0 + \sum_{j = 1}^{m} b_j I_{F_j}$ Since $A_r = \{X = t_r\}$ is the union of minterms generated by the $E_i$ and $B_s = \{Y = u_s\}$ is the union of minterms generated by the $F_j$, the pair $\{X, Y\}$ is independent iff each pair of minterms $\{M_a, N_b\}$ generated by the two classes, respectively, is independent. Independence of the minterm pairs is implied by independence of the combined class $\{E_i, F_j: 1 \le i \le n, 1 \le j \le m\}$ Calculations in the joint simple case are readily handled by appropriate m-functions and m-procedures. MATLAB and independent simple random variables In the general case of pairs of joint simple random variables we have the m-procedure jcalc, which uses information in matrices $X, Y$ and $P$ to determine the marginal probabilities and the calculation matrices $t$ and $u$. In the independent case, we need only the marginal distributions in matrices $X$, $PX$, $Y$ and $PY$ to determine the joint probability matrix (hence the joint distribution) and the calculation matrices $t$ and $u$. If the random variables are given in canonical form, we have the marginal distributions. If they are in affine form, we may use canonic (or the function form canonicf) to obtain the marginal distributions. Once we have both marginal distributions, we use an m-procedure we call icalc. Formation of the joint probability matrix is simply a matter of determining all the joint probabilities $p(i, j) = P(X = t_i, Y = u_j) = P(X = t_i) P(Y = u_j)$ Once these are calculated, formation of the calculation matrices $t$ and $u$ is achieved exactly as in jcalc.
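The formation of the joint matrix amounts to an outer product of the marginal probability matrices (a minimal sketch, not the icalc listing; it uses the marginals of Example 9.1.11 below so the result can be compared with the idbn output there):
PX = 0.1*[3 5 2];          % marginal for X, values increasing to the right
PY = 0.01*[20 15 40 25];   % marginal for Y, values listed in increasing order
P  = flipud(PY'*PX)        % P(i,j) = P(Y = u)P(X = t), flipped so Y increases upward
The flipud arranges the matrix "as on the plane," with the largest $Y$-value in the top row.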
Example 9.1.8: Use of icalc to set up for joint calculations X = [-4 -2 0 1 3]; Y = [0 1 2 4]; PX = 0.01*[12 18 27 19 24]; PY = 0.01*[15 43 31 11]; icalc Enter row matrix of X-values X Enter row matrix of Y-values Y Enter X probabilities PX Enter Y probabilities PY Use array operations on matrices X, Y, PX, PY, t, u, and P disp(P) % Optional display of the joint matrix 0.0132 0.0198 0.0297 0.0209 0.0264 0.0372 0.0558 0.0837 0.0589 0.0744 0.0516 0.0774 0.1161 0.0817 0.1032 0.0180 0.0270 0.0405 0.0285 0.0360 disp(t) % Calculation matrix t -4 -2 0 1 3 -4 -2 0 1 3 -4 -2 0 1 3 -4 -2 0 1 3 disp(u) % Calculation matrix u 4 4 4 4 4 2 2 2 2 2 1 1 1 1 1 0 0 0 0 0 M = (t>=-3)&(t<=2); % M = [-3, 2] PM = total(M.*P) % P(X in M) PM = 0.6400 N = (u>0)&(u.^2<=15); % N = {u: u > 0, u^2 <= 15} PN = total(N.*P) % P(Y in N) PN = 0.7400 Q = M&N; % Rectangle MxN PQ = total(Q.*P) % P((X,Y) in MxN) PQ = 0.4736 p = PM*PN p = 0.4736 % P((X,Y) in MxN) = P(X in M)P(Y in N) As an example, consider again the problem of joint Bernoulli trials described in the treatment of 4.3 Composite trials. Example 9.1.9: The joint Bernoulli trial of Example 4.9 1 Bill and Mary take ten basketball free throws each. We assume the two seqences of trials are independent of each other, and each is a Bernoulli sequence. Mary: Has probability 0.80 of success on each trial. Bill: Has probability 0.85 of success on each trial. What is the probability Mary makes more free throws than Bill? Solution Let $X$ be the number of goals that Mary makes and $Y$ be the number that Bill makes. Then $X$ ~ binomial (10, 0.8) and $Y$ ~ binomial (10, 0.85). X = 0:10; Y = 0:10; PX = ibinom(10,0.8,X); PY = ibinom(10,0.85,Y); icalc Enter row matrix of X-values X % Could enter 0:10 Enter row matrix of Y-values Y % Could enter 0:10 Enter X probabilities PX % Could enter ibinom(10,0.8,X) Enter Y probabilities PY % Could enter ibinom(10,0.85,Y) Use array operations on matrices X, Y, PX, PY, t, u, and P PM = total((t>u).*P) PM = 0.2738 % Agrees with solution in Example 9 from "Composite Trials". Pe = total((u==t).*P) % Additional information is more easily Pe = 0.2276 % obtained than in the event formulation Pm = total((t>=u).*P) % of Example 9 from "Composite Trials". Pm = 0.5014 Example 9.1.10: Sprinters time trials Twelve world class sprinters in a meet are running in two heats of six persons each. Each runner has a reasonable chance of breaking the track record. We suppose results for individuals are independent. First heat probabilities: 0.61 0.73 0.55 0.81 0.66 0.43 Second heat probabilities: 0.75 0.48 0.62 0.58 0.77 0.51 Compare the two heats for numbers who break the track record. Solution Let $X$ be the number of successes in the first heat and $Y$ be the number who are successful in the second heat. Then the pair $\{X, Y\}$ is independent. We use the m-function canonicf to determine the distributions for $X$ and for $Y$, then icalc to get the joint distribution. 
c1 = [ones(1,6) 0]; c2 = [ones(1,6) 0]; P1 = [0.61 0.73 0.55 0.81 0.66 0.43]; P2 = [0.75 0.48 0.62 0.58 0.77 0.51]; [X,PX] = canonicf(c1,minprob(P1)); [Y,PY] = canonicf(c2,minprob(P2)); icalc Enter row matrix of X-values X Enter row matrix of Y-values Y Enter X probabilities PX Enter Y probabilities PY Use array operations on matrices X, Y, PX, PY, t, u, and P Pm1 = total((t>u).*P) % Prob first heat has most Pm1 = 0.3986 Pm2 = total((u>t).*P) % Prob second heat has most Pm2 = 0.3606 Peq = total((t==u).*P) % Prob both have the same Peq = 0.2408 Px3 = (X>=3)*PX' % Prob first has 3 or more Px3 = 0.8708 Py3 = (Y>=3)*PY' % Prob second has 3 or more Py3 = 0.8525 As in the case of jcalc, we have an m-function version icalcf [x, y, t, u, px, py, p] = icalcf(X, Y, PX, PY)\) We have a related m-function idbn for obtaining the joint probability matrix from the marginal probabilities. Its formation of the joint matrix utilizes the same operations as icalc. Example 9.1.11: A numerical example PX = 0.1*[3 5 2]; PY = 0.01*[20 15 40 25]; P = idbn(PX,PY) P = 0.0750 0.1250 0.0500 0.1200 0.2000 0.0800 0.0450 0.0750 0.0300 0.0600 0.1000 0.0400 An m- procedure itest checks a joint distribution for independence. It does this by calculating the marginals, then forming an independent joint test matrix, which is compared with the original. We do not ordinarily exhibit the matrix $P$ to be tested. However, this is a case in which the product rule holds for most of the minterms, and it would be very difficult to pick out those for which it fails. The m-procedure simply checks all of them. idemo1 % Joint matrix in datafile idemo1 P = 0.0091 0.0147 0.0035 0.0049 0.0105 0.0161 0.0112 0.0117 0.0189 0.0045 0.0063 0.0135 0.0207 0.0144 0.0104 0.0168 0.0040 0.0056 0.0120 0.0184 0.0128 0.0169 0.0273 0.0065 0.0091 0.0095 0.0299 0.0208 0.0052 0.0084 0.0020 0.0028 0.0060 0.0092 0.0064 0.0169 0.0273 0.0065 0.0091 0.0195 0.0299 0.0208 0.0104 0.0168 0.0040 0.0056 0.0120 0.0184 0.0128 0.0078 0.0126 0.0030 0.0042 0.0190 0.0138 0.0096 0.0117 0.0189 0.0045 0.0063 0.0135 0.0207 0.0144 0.0091 0.0147 0.0035 0.0049 0.0105 0.0161 0.0112 0.0065 0.0105 0.0025 0.0035 0.0075 0.0115 0.0080 0.0143 0.0231 0.0055 0.0077 0.0165 0.0253 0.0176 itest Enter matrix of joint probabilities P The pair {X,Y} is NOT independent % Result of test To see where the product rule fails, call for D disp(D) % Optional call for D 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Next, we consider an example in which the pair is known to be independent. jdemo3 % call for data in m-file disp(P) % call to display P 0.0132 0.0198 0.0297 0.0209 0.0264 0.0372 0.0558 0.0837 0.0589 0.0744 0.0516 0.0774 0.1161 0.0817 0.1032 0.0180 0.0270 0.0405 0.0285 0.0360 itest Enter matrix of joint probabilities P The pair {X,Y} is independent % Result of test The procedure icalc can be extended to deal with an independent class of three random variables. We call the m-procedure icalc3. The following is a simple example of its use. 
Example 9.1.14: Calculations for three independent random variables X = 0:4; Y = 1:2:7; Z = 0:3:12; PX = 0.1*[1 3 2 3 1]; PY = 0.1*[2 2 3 3]; PZ = 0.1*[2 2 1 3 2]; icalc3 Enter row matrix of X-values X Enter row matrix of Y-values Y Enter row matrix of Z-values Z Enter X probabilities PX Enter Y probabilities PY Enter Z probabilities PZ Use array operations on matrices X, Y, Z, PX, PY, PZ, t, u, v, and P G = 3*t + 2*u - 4*v; % W = 3X + 2Y -4Z [W,PW] = csort(G,P); % Distribution for W PG = total((G>0).*P) % P(g(X,Y,Z) > 0) PG = 0.3370 Pg = (W>0)*PW' % P(Z > 0) Pg = 0.3370 An m-procedure icalc4 to handle an independent class of four variables is also available. Also several variations of the m-function mgsum and the m-function diidsum are used for obtaining distributions for sums of independent random variables. We consider them in various contexts in other units. Approximation for the absolutely continuous case In the study of functions of random variables, we show that an approximating simple random variable $X_s$ of the type we use is a function of the random variable $X$ which is approximated. Also, we show that if $\{X, Y\}$ is an independent pair, so is $\{g(X), h(Y)\}$ for any reasonable functions $g$ and $h$. Thus if $\{X, Y\}$ is an independent pair, so is any pair of approximating simple functions $\{X_s, Y_s\}$ of the type considered. Now it is theoretically possible for the approximating pair $\{X_s, Y_s\}$ to be independent, yet have the approximated pair $\{X, Y\}$ not independent. But this is highly unlikely. For all practical purposes, we may consider $\{X, Y\}$ to be independent iff $\{X_s, Y_s\}$ is independent. When in doubt, consider a second pair of approximating simple functions with more subdivision points. This decreases even further the likelihood of a false indication of independence by the approximating random variables. Example 9.1.15: An independent pair Suppose $X$ ~ exponential (3) and $Y$ ~ exponential (2) with $f_{XY} (t, u) = 6e^{-3t} e^{-2u} = 6e^{-(3t+2u)}$ $t \ge 0, u \ge 0$ Since $e^{-12} \approx 6 \times 10^{-6}$, we approximate $X$ for values up to 4 and $Y$ for values up to 6. tuappr Enter matrix [a b] of X-range endpoints [0 4] Enter matrix [c d] of Y-range endpoints [0 6] Enter number of X approximation points 200 Enter number of Y approximation points 300 Enter expression for joint density 6*exp(-(3*t + 2*u)) Use array operations on X, Y, PX, PY, t, u, and P itest Enter matrix of joint probabilities P The pair {X,Y} is independent Example 9.1.16: Test for independence The pair $\{X, Y\}$ has joint density $f_{XY} (t, u) = 4tu$ $0 \le t \le 1$, $0 \le u \le 1$. It is easy enough to determine the marginals in this case. By symmetry, they are the same. $f_X(t) = 4t \int_{0}^{1} udu = 2t$, $0 \le t \le 1$ so that $f_{XY} = f_X f_Y$ which ensures the pair is independent. Consider the solution using tuappr and itest. tuappr Enter matrix [a b] of X-range endpoints [0 1] Enter matrix [c d] of Y-range endpoints [0 1] Enter number of X approximation points 100 Enter number of Y approximation points 100 Enter expression for joint density 4*t.*u Use array operations on X, Y, PX, PY, t, u, and P itest Enter matrix of joint probabilities P The pair {X,Y} is independent
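As noted above, itest determines independence by calculating the marginals from the joint matrix, rebuilding the joint matrix under the product rule, and comparing the two. The following function is a simplified, hypothetical stand-in for that check (it is not the distributed m-procedure, which also manages display and its own tolerance), but it shows the essential computation.
function [indep, D] = indeptest(P, tol)  % hypothetical simplified version of itest
  PX = sum(P, 1);               % marginal for X (sum down each column)
  PY = sum(P, 2);               % marginal for Y (sum across each row)
  D  = abs(P - PY*PX) > tol;    % ones where the product rule fails
  indep = ~any(D(:));           % independent iff the product rule never fails
end
A call such as [indep, D] = indeptest(P, 1e-9) returns a logical flag and a matrix D of failure positions, analogous to the matrix D displayed by itest.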
Exercise $1$ The pair $\{X, Y\}$ has the joint distribution (in m-file npr08_06.m): $X =$ [-2.3 -0.7 1.1 3.9 5.1] $Y =$ [1.3 2.5 4.1 5.3] $P=\left[\begin{array}{lllll} 0.0483 & 0.0357 & 0.0420 & 0.0399 & 0.0441 \ 0.0437 & 0.0323 & 0.0380 & 0.0361 & 0.0399 \ 0.0713 & 0.0527 & 0.0620 & 0.0609 & 0.0551 \ 0.0667 & 0.0493 & 0.0580 & 0.0651 & 0.0589 \end{array}\right]$ Determine whether or not the pair $\{X, Y\}$ is independent. Answer npr08_06 Data are in X, Y, P itest Enter matrix of joint probabilities P The pair {X,Y} is NOT independent To see where the product rule fails, call for D disp(D) 0 0 0 1 1 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 Exercise $2$ The pair $\{X, Y\}$ has the joint distribution (in m-file npr09_02.m): $X =$ [-3.9 -1.7 1.5 2.8 4.1] $Y =$ [-2 1 2.6 5.1] $P=\left[\begin{array}{lllll} 0.0589 & 0.0342 & 0.0304 & 0.0456 & 0.0209 \ 0.0961 & 0.0556 & 0.0498 & 0.0744 & 0.0341 \ 0.0682 & 0.0398 & 0.0350 & 0.0528 & 0.0242 \ 0.0868 & 0.0504 & 0.0448 & 0.0672 & 0.0308 \end{array}\right]$ Determine whether or not the pair $\{X, Y\}$ is independent. Answer npr09_02 Data are in X, Y, P itest Enter matrix of joint probabilities P The pair {X,Y} is NOT independent To see where the product rule fails, call for D disp(D) 0 0 0 0 0 0 1 1 0 0 0 1 1 0 0 0 0 0 0 0 Exercise $3$ The pair $\{X, Y\}$ has the joint distribution (in m-file npr08_07.m): $P(X = t, Y = u)$ t = -3.1 -0.5 1.2 2.4 3.7 4.9 u = 7.5 0.0090 0.0396 0.0594 0.0216 0.0440 0.0203 4.1 0.0495 0 0.1089 0.0528 0.0363 0.0231 -2.0 0.0405 0.1320 0.0891 0.0324 0.0297 0.0189 -3.8 0.0510 0.0484 0.0726 0.0132 0 0.0077 Determine whether or not the pair $\{X, Y\}$ is independent. Answer npr08_07 Data are in X, Y, P itest Enter matrix of joint probabilities P The pair {X,Y} is NOT independent To see where the product rule fails, call for D disp(D) 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 For the distributions in Exercises 4-10 below 1. Determine whether or not the pair is independent. 2. Use a discrete approximation and an independence test to verify results in part (a). Exercise $4$ $f_{XY} (t, u) = 1/\pi$ on the circle with radius one, center at (0,0). Answer Not independent by the rectangle test. tuappr Enter matrix [a b] of X-range endpoints [-1 1] Enter matrix [c d] of Y-range endpoints [-1 1] Enter number of X approximation points 100 Enter number of Y approximation points 100 Enter expression for joint density (1/pi)*(t.^2 + u.^2<=1) Use array operations on X, Y, PX, PY, t, u, and P itest Enter matrix of joint probabilities P The pair {X,Y} is NOT independent To see where the product rule fails, call for D % Not practical-- too large Exercise $5$ $f_{XY} (t, u) = 1/2$ on the square with vertices at (1, 0), (2, 1), (1, 2), (0, 1) (see Exercise 11 from "Problems on Random Vectors and Joint Distributions"). Answer Not independent, by the rectangle test. tuappr Enter matrix [a b] of X-range endpoints [0 2] Enter matrix [c d] of Y-range endpoints [0 2] Enter number of X approximation points 200 Enter number of Y approximation points 200 Enter expression for joint density (1/2)*(u<=min(1+t,3-t)).* ... (u>=max(1-t,t-1)) Use array operations on X, Y, PX, PY, t, u, and P itest Enter matrix of joint probabilities P The pair {X,Y} is NOT independent To see where the product rule fails, call for D Exercise $6$ $f_{XY} (t, u) = 4t (1 - u)$ for $0 \le t \le 1$, $0 \le u \le 1$ (see Exercise 12 from "Problems on Random Vectors and Joint Distributions"). 
From the solution for Exercise 12 from "Problems on Random Vectors and Joint Distributions" we have $f_X (t) = 2t$, $0 \le t \le 1$, $f_Y(u) = 2(1 - u)$, $0 \le u \le 1$, $f_{XY} = f_X f_Y$ so the pair is independent. Answer tuappr Enter matrix [a b] of X-range endpoints [0 1] Enter matrix [c d] of Y-range endpoints [0 1] Enter number of X approximation points 100 Enter number of Y approximation points 100 Enter expression for joint density 4*t.*(1-u) Use array operations on X, Y, PX, PY, t, u, and P itest Enter matrix of joint probabilities P The pair {X,Y} is independent Exercise $7$ $f_{XY} = \dfrac{1}{8} (t + u)$ for $0 \le t \le 2$, $0 \le u \le 2$ (see Exercise 13 from "Problems on Random Vectors and Joint Distributions"). From the solution of Exercise 13 from "Problems on Random Vectors and Joint Distributions" we have $f_X (t) = f_Y(t) = \dfrac{1}{4} (t + 1)$, $0 \le t \le 2$ so $f_{XY} \ne f_X f_Y$ which implies the pair is not independent. Answer tuappr Enter matrix [a b] of X-range endpoints [0 2] Enter matrix [c d] of Y-range endpoints [0 2] Enter number of X approximation points 100 Enter number of Y approximation points 100 Enter expression for joint density (1/8)*(t+u) Use array operations on X, Y, PX, PY, t, u, and P itest Enter matrix of joint probabilities P The pair {X,Y} is NOT independent To see where the product rule fails, call for D Exercise $8$ $f_{XY} (t, u) = 4ue^{-2t}$ for $0 \le t, 0 \le u \le 1$ (see Exercise 14 from "Problems on Random Vectors and Joint Distributions"). From the solution for Exercise 14 from "Problems on Random Vectors and Joint Distribution" we have $f_X (t) = 2e^{-2t}$, $0 \le t$, $f_Y(u) = 2u$, $0 \le u \le 1$ so that $f_{XY} = f_X f_Y$ and the pair is independent. Answer tuappr Enter matrix [a b] of X-range endpoints [0 5] Enter matrix [c d] of Y-range endpoints [0 1] Enter number of X approximation points 500 Enter number of Y approximation points 100 Enter expression for joint density 4*u.*exp(-2*t) Use array operations on X, Y, PX, PY, t, u, and P itest Enter matrix of joint probabilities P The pair {X,Y} is independent % Product rule holds to within 10^{-9} Exercise $9$ $f_{XY} (t, u) = 12t^2 u$ on the parallelogram with vertices (-1, 0), (0, 0), (1, 1), (0, 1) (see Exercise 16 from "Problems on Random Vectors and Joint Distributions"). Answer Not independent by the rectangle test. tuappr Enter matrix [a b] of X-range endpoints [-1 1] Enter matrix [c d] of Y-range endpoints [0 1] Enter number of X approximation points 200 Enter number of Y approximation points 100 Enter expression for joint density 12*t.^2.*u.*(u<=min(t+1,1)).* ... (u>=max(0,t)) Use array operations on X, Y, PX, PY, t, u, and P itest Enter matrix of joint probabilities P The pair {X,Y} is NOT independent To see where the product rule fails, call for D Exercise $10$ $f_{XY} = \dfrac{24}{11}tu$ for $0 \le t \le 2$, $0 \le u \le \text{min} \{1, 2-t\}$ (see Exercise 17 from "Problems on Random Vectors and Joint Distributions"). Answer By the rectangle test, the pair is not independent. 
tuappr Enter matrix [a b] of X-range endpoints [0 2] Enter matrix [c d] of Y-range endpoints [0 1] Enter number of X approximation points 200 Enter number of Y approximation points 100 Enter expression for joint density (24/11)*t.*u.*(u<=min(1,2-t)) Use array operations on X, Y, PX, PY, t, u, and P itest Enter matrix of joint probabilities P The pair {X,Y} is NOT independent To see where the product rule fails, call for D Exercise $11$ Two software companies, MicroWare and BusiCorp, are preparing a new business package in time for a computer trade show 180 days in the future. They work independently. MicroWare has anticipated completion time, in days, exponential (1/150). BusiCorp has time to completion, in days, exponential (1/130). What is the probability both will complete on time; that at least one will complete on time; that neither will complete on time? Answer p1 = 1 - exp(-180/150) p1 = 0.6988 p2 = 1 - exp(-180/130) p2 = 0.7496 Pboth = p1*p2 Pboth = 0.5238 Poneormore = 1 - (1 - p1)*(1 - p2) % 1 - Pneither Poneormore = 0.9246 Pneither = (1 - p1)*(1 - p2) Pneither = 0.0754 Exercise $12$ Eight similar units are put into operation at a given time. The time to failure (in hours) of each unit is exponential (1/750). If the units fail independently, what is the probability that five or more units will be operating at the end of 500 hours? Answer p = exp(-500/750); % Probability any one will survive P = cbinom(8,p,5) % Probability five or more will survive P = 0.3930 Exercise $13$ The location of ten points along a line may be considered iid random variables with symmytric triangular distribution on [1,3]. What is the probability that three or more will lie within distance 1/2 of the point $t = 2$? Answer Geometrically, $p = 3/4$, so that P = cbinom(10,p,3) = 0.9996. Exercise $14$ A Christmas display has 200 lights. The times to failure are iid, exponential (1/10000). The display is on continuously for 750 hours (approximately one month). Determine the probability the number of lights which survive the entire period is at least 175, 180, 185, 190. Answer p = exp(-750/10000) p = 0.9277 k = 175:5:190; P = cbinom(200,p,k); disp([k;P]') 175.0000 0.9973 180.0000 0.9449 185.0000 0.6263 190.0000 0.1381 Exercise $15$ A critical module in a network server has time to failure (in hours of machine time) exponential (1/3000). The machine operates continuously, except for brief times for maintenance or repair. The module is replaced routinely every 30 days (720 hours), unless failure occurs. If successive units fail independently, what is the probability of no breakdown due to the module for one year? Answer p = exp(-720/3000) p = 0.7866 % Probability any unit survives P = p^12 % Probability all twelve survive (assuming 12 periods) P = 0.056 Exercise $16$ Joan is trying to decide which of two sales opportunities to take. • In the first, she makes three independent calls. Payoffs are $570,$525, and $465, with respective probabilities of 0.57, 0.41, and 0.35. • In the second, she makes eight independent calls, with probability of success on each call $p =$ 0.57. She realizes$150 profit on each successful sale. Let $X$ be the net profit on the first alternative and $Y$ be the net gain on the second. Assume the pair $\{X, Y\}$ is independent. 1. Which alternative offers the maximum possible gain? 2. Compare probabilities in the two schemes that total sales are at least $600,$900, $1000,$1100. 3. What is the probability the second exceeds the first— i.e., what is $P(Y > X)$? 
Answer $X = 570 I_A + 525 I_B + 465I_C$ with $[P(A) P(B) P(C)]$ = [0.57 0.41 0.35]. $Y = 150 S$. where $S~$ binomial (8, 0.57). c = [570 525 465 0]; pm = minprob([0.57 0.41 0.35]); canonic % Distribution for X Enter row vector of coefficients c Enter row vector of minterm probabilities pm Use row matrices X and PX for calculations Call for XDBN to view the distribution Y = 150*[0:8]; % Distribution for Y PY = ibinom(8,0.57,0:8); icalc % Joint distribution Enter row matrix of X-values X Enter row matrix of Y-values Y Enter X probabilities PX Enter Y probabilities PY Use array operations on matrices X, Y, PX, PY, t, u, and P xmax = max(X) xmax = 1560 ymax = max(Y) ymax = 1200 k = [600 900 1000 1100]; px = zeros(1,4); for i = 1:4 px(i) = (X>=k(i))*PX'; end py = zeros(1,4); for i = 1:4 py(i) = (Y>=k(i))*PY'; end disp([px;py]') 0.4131 0.7765 0.4131 0.2560 0.3514 0.0784 0.0818 0.0111 M = u > t; PM = total(M.*P) PM = 0.5081 % P(Y>X) Exercise $17$ Margaret considers five purchases in the amounts 5, 17, 21, 8, 15 dollars with respective probabilities 0.37, 0.22, 0.38, 0.81, 0.63. Anne contemplates six purchases in the amounts 8, 15, 12, 18, 15, 12 dollars. with respective probabilities 0.77, 0.52, 0.23, 0.41, 0.83, 0.58. Assume that all eleven possible purchases form an independent class. 1. What is the probability Anne spends at least twice as much as Margaret? 2. What is the probability Anne spends at least $30 more than Margaret? Answer cx = [5 17 21 8 15 0]; pmx = minprob(0.01*[37 22 38 81 63]); cy = [8 15 12 18 15 12 0]; pmy = minprob(0.01*[77 52 23 41 83 58]); [X,PX] = canonicf(cx,pmx); [Y,PY] = canonicf(cy,pmy); icalc Enter row matrix of X-values X Enter row matrix of Y-values Y Enter X probabilities PX Enter Y probabilities PY Use array operations on matrices X, Y, PX, PY, t, u, and P M1 = u >= 2*t; PM1 = total(M1.*P) PM1 = 0.3448 M2 = u - t >=30; PM2 = total(M2.*P) PM2 = 0.2431 Exercise $18$ James is trying to decide which of two sales opportunities to take. • In the first, he makes three independent calls. Payoffs are$310, $380, and$350, with respective probabilities of 0.35, 0.41, and 0.57. • In the second, he makes eight independent calls, with probability of success on each call p=0.57. He realizes $100 profit on each successful sale. Let $X$ be the net profit on the first alternative and $Y$ be the net gain on the second. Assume the pair $\{X, Y\}$ is independent. • Which alternative offers the maximum possible gain? • What is the probability the second exceeds the first— i.e., what is $P(Y > X)$? • Compare probabilities in the two schemes that total sales are at least$600, $700,$750. Answer cx = [310 380 350 0]; pmx = minprob(0.01*[35 41 57]); Y = 100*[0:8]; PY = ibinom(8,0.57,0:8); canonic Enter row vector of coefficients cx Enter row vector of minterm probabilities pmx Use row matrices X and PX for calculations Call for XDBN to view the distribution icalc Enter row matrix of X-values X Enter row matrix of Y-values Y Enter X probabilities PX Enter Y probabilities PY Use array operations on matrices X, Y, PX, PY, t, u, and P xmax = max(X) xmax = 1040 ymax = max(Y) ymax = 800 PYgX = total((u>t).*P) PYgX = 0.5081 k = [600 700 750]; px = zeros(1,3); py = zeros(1,3); for i = 1:3 px(i) = (X>=k(i))*PX'; end for i = 1:3 py(i) = (Y>=k(i))*PY'; end disp([px;py]') 0.4131 0.2560 0.2337 0.0784 0.0818 0.0111 Exercise $19$ A residential College plans to raise money by selling “chances” on a board. 
There are two games: Game 1: Pay $5 to play; win$20 with probability $p_1$ =0.05 (one in twenty) Game 2: Pay $10 to play; win$30 with probability $p_2$ =0.2 (one in five) Thirty chances are sold on Game 1 and fifty chances are sold on Game 2. If $X$ and $Y$ are the profits on the respective games, then $X = 30 \cdot 5 - 20N_1$ and $Y = 50 \cdot 10 - 30 N_2$ where $N_1, N_2$ are the numbers of winners on the respective games. It is reasonable to suppose $N_1 ~$ binomial (30, 0.05) and $N_2~$ binomial (50, 0.2). It is reasonable to suppose the pair $\{N_1, N_2\}$ is independent, so that $\{X, Y\}$ is independent. Determine the marginal distributions for $X$ and $Y$ then use icalc to obtain the joint distribution and the calculating matrices. The total profit for the College is $Z = X + Y$. What is the probability the College will lose money? What is the probability the profit will be $400 or more, less than$200, between $200 and$450? Answer N1 = 0:30; PN1 = ibinom(30,0.05,0:30); x = 150 - 20*N1; [X,PX] = csort(x,PN1); N2 = 0:50; PN2 = ibinom(50,0.2,0:50); y = 500 - 30*N2; [Y,PY] = csort(y,PN2); icalc Enter row matrix of X-values X Enter row matrix of Y-values Y Enter X probabilities PX Enter Y probabilities PY Use array operations on matrices X, Y, PX, PY, t, u, and P G = t + u; Mlose = G < 0; Mm400 = G >= 400; Ml200 = G < 200; M200_450 = (G>=200)&(G<=450); Plose = total(Mlose.*P) Plose = 3.5249e-04 Pm400 = total(Mm400.*P) Pm400 = 0.1957 Pl200 = total(Ml200.*P) Pl200 = 0.0828 P200_450 = total(M200_450.*P) P200_450 = 0.8636 Exercise $20$ The class $\{X, Y, Z\}$ of random variables is iid (independent, identically distributed) with common distribution $X =$ [-5 -1 3 4 7] $PX =$ 0.01 * [15 20 30 25 10] Let $W = 3X - 4Y + 2Z$. Determine the distribution for $W$ and from this determine $P(W > 0)$ and $P(-20 \le W \le 10)$. Do this with icalc, then repeat with icalc3 and compare results. Answer Since icalc uses $X$ and $PX$ in its output, we avoid a renaming problem by using $x$ and $px$ for data vectors $X$ and $PX$. x = [-5 -1 3 4 7]; px = 0.01*[15 20 30 25 10]; icalc Enter row matrix of X-values 3*x Enter row matrix of Y-values -4*x Enter X probabilities px Enter Y probabilities px Use array operations on matrices X, Y, PX, PY, t, u, and P a = t + u; [V,PV] = csort(a,P); icalc Enter row matrix of X-values V Enter row matrix of Y-values 2*x Enter X probabilities PV Enter Y probabilities px Use array operations on matrices X, Y, PX, PY, t, u, and P b = t + u; [W,PW] = csort(b,P); P1 = (W>0)*PW' P1 = 0.5300 P2 = ((-20<=W)&(W<=10))*PW' P2 = 0.5514 icalc3 % Alternate using icalc3 Enter row matrix of X-values x Enter row matrix of Y-values x Enter row matrix of Z-values x Enter X probabilities px Enter Y probabilities px Enter Z probabilities px Use array operations on matrices X, Y, Z, PX, PY, PZ, t, u, v, and P a = 3*t - 4*u + 2*v; [W,PW] = csort(a,P); P1 = (W>0)*PW' P1 = 0.5300 P2 = ((-20<=W)&(W<=10))*PW' P2 = 0.5514 Exercise $21$ The class $\{A, B, C, D, E, F\}$ is independent; the respective probabilities for these events are $\{0.46, 0.27, 0.33, 0.47, 0.37, 0.41\}$. Consider the simple random variables $X = 3I_A - 9I_B + 4I_C$, $Y = -2I_D + 6I_E + 2I_F - 3$, and $Z = 2X - 3Y$ Determine $P(Y > X)$, $P(Z > 0)$, $P(5 \le Z \le 25)$. 
Answer cx = [3 -9 4 0]; pmx = minprob(0.01*[42 27 33]); cy = [-2 6 2 -3]; pmy = minprob(0.01*[47 37 41]); [X,PX] = canonicf(cx,pmx); [Y,PY] = canonicf(cy,pmy); icalc Enter row matrix of X-values X Enter row matrix of Y-values Y Enter X probabilities PX Enter Y probabilities PY Use array operations on matrices X, Y, PX, PY, t, u, and P G = 2*t - 3*u; [Z,PZ] = csort(G,P); PYgX = total((u>t).*P) PYgX = 0.3752 PZpos = (Z>0)*PZ' PZpos = 0.5654 P5Z25 = ((5<=Z)&(Z<=25))*PZ' P5Z25 = 0.4745 Exercise $22$ Two players, Ronald and Mike, throw a pair of dice 30 times each. What is the probability Mike throws more “sevens” than does Ronald? Answer P = (ibinom(30,1/6,0:29))*(cbinom(30,1/6,1:30))' = 0.4307 Exercise $23$ A class has fifteen boys and fifteen girls. They pair up and each tosses a coin 20 times. What is the probability that at least eight girls throw more heads than their partners? Answer pg = (ibinom(20,1/2,0:19))*(cbinom(20,1/2,1:20))' pg = 0.4373 % Probability each girl throws more P = cbinom(15,pg,8) P = 0.3100 % Probability eight or more girls throw more Exercise $24$ Glenn makes five sales calls, with probabilities 0.37, 0.52, 0.48, 0.71, 0.63, of success on the respective calls. Margaret makes four sales calls with probabilities 0.77, 0.82, 0.75, 0.91, of success on the respective calls. Assume that all nine events form an independent class. If Glenn realizes a profit of $18.00 on each sale and Margaret earns$20.00 on each sale, what is the probability Margaret's gain is at least $10.00 more than Glenn's? Answer cg = [18*ones(1,5) 0]; cm = [20*ones(1,4) 0]; pmg = minprob(0.01*[37 52 48 71 63]); pmm = minprob(0.01*[77 82 75 91]); [G,PG] = canonicf(cg,pmg); [M,PM] = canonicf(cm,pmm); icalc Enter row matrix of X-values G Enter row matrix of Y-values M Enter X probabilities PG Enter Y probabilities PM Use array operations on matrices X, Y, PX, PY, t, u, and P H = u-t>=10; p1 = total(H.*P) p1 = 0.5197 Exercise $25$ Mike and Harry have a basketball shooting contest. • Mike shoots 10 ordinary free throws, worth two points each, with probability 0.75 of success on each shot. • Harry shoots 12 “three point” shots, with probability 0.40 of success on each shot. Let $X, Y$ be the number of points scored by Mike and Harry, respectively. Determine $P(X \ge 15)$, and $P(Y \ge 15)$, $P(X \ge Y)$. Answer X = 2*[0:10]; PX = ibinom(10,0.75,0:10); Y = 3*[0:12]; PY = ibinom(12,0.40,0:12); icalc Enter row matrix of X-values X Enter row matrix of Y-values Y Enter X probabilities PX Enter Y probabilities PY Use array operations on matrices X, Y, PX, PY, t, u, and P PX15 = (X>=15)*PX' PX15 = 0.5256 PY15 = (Y>=15)*PY' PY15 = 0.5618 G = t>=u; PG = total(G.*P) PG = 0.5811 Exercise $26$ Martha has the choice of two games. Game 1: Pay ten dollars for each “play.” If she wins, she receives$20, for a net gain of $10 on the play; otherwise, she loses her$10. The probability of a win is 1/2, so the game is “fair.” Game 2: Pay five dollars to play; receive $15 for a win. The probability of a win on any play is 1/3. Martha has$100 to bet. She is trying to decide whether to play Game 1 ten times or Game 2 twenty times. Let $W1$ and $W2$ be the respective net winnings (payoff minus fee to play). • Determine $P(W2 \ge W1)$ • Compare the two games further by calculating $P(W1 > 0)$ and $P(W2 > 0)$ Which game seems preferable? 
Answer W1 = 20*[0:10] - 100; PW1 = ibinom(10,1/2,0:10); W2 = 15*[0:20] - 100; PW2 = ibinom(20,1/3,0:20); P1pos = (W1>0)*PW1' P1pos = 0.3770 P2pos = (W2>0)*PW2' P2pos = 0.5207 icalc Enter row matrix of X-values W1 Enter row matrix of Y-values W2 Enter X probabilities PW1 Enter Y probabilities PW2 Use array operations on matrices X, Y, PX, PY, t, u, and P G = u >= t; PG = total(G.*P) PG = 0.5182 Exercise $27$ Jim and Bill of the men's basketball team challenge women players Mary and Ellen to a free throw contest. Each takes five free throws. Make the usual independence assumptions. Jim, Bill, Mary, and Ellen have respective probabilities $p =$ 0.82, 0.87, 0.80, and 0.85 of making each shot tried. What is the probability Mary and Ellen make a total number of free throws at least as great as the total made by the guys? Answer x = 0:5; PJ = ibinom(5,0.82,x); PB = ibinom(5,0.87,x); PM = ibinom(5,0.80,x); PE = ibinom(5,0.85,x); icalc Enter row matrix of X-values x Enter row matrix of Y-values x Enter X probabilities PJ Enter Y probabilities PB Use array operations on matrices X, Y, PX, PY, t, u, and P H = t+u; [Tm,Pm] = csort(H,P); icalc Enter row matrix of X-values x Enter row matrix of Y-values x Enter X probabilities PM Enter Y probabilities PE Use array operations on matrices X, Y, PX, PY, t, u, and P G = t+u; [Tw,Pw] = csort(G,P); icalc Enter row matrix of X-values Tm Enter row matrix of Y-values Tw Enter X probabilities Pm Enter Y probabilities Pw Use array operations on matrices X, Y, PX, PY, t, u, and P Gw = u>=t; PGw = total(Gw.*P) PGw = 0.5746 icalc4 % Alternate using icalc4 Enter row matrix of X-values x Enter row matrix of Y-values x Enter row matrix of Z-values x Enter row matrix of W-values x Enter X probabilities PJ Enter Y probabilities PB Enter Z probabilities PM Enter W probabilities PE Use array operations on matrices X, Y, Z,W PX, PY, PZ, PW t, u, v, w, and P H = v+w >= t+u; PH = total(H.*P) PH = 0.5746
Introduction Frequently, we observe a value of some random variable, but are really interested in a value derived from this by a function rule. If $X$ is a random variable and $g$ is a reasonable function (technically, a Borel function), then $Z = g(X)$ is a new random variable which has the value $g(t)$ for any $\omega$ such that $X(\omega) = t$. Thus $Z(\omega) = g(X(\omega))$. The problem; an approach We consider, first, functions of a single random variable. A wide variety of functions are utilized in practice. Example 10.1.1: A quality control problem In a quality control check on a production line for ball bearings it may be easier to weigh the balls than measure the diameters. If we can assume true spherical shape and $w$ is the weight, then diameter is $kw^{1/3}$, where $k$ is a factor depending upon the formula for the volume of a sphere, the units of measurement, and the density of the steel. Thus, if $X$ is the weight of the sampled ball, the desired random variable is $D = kX^{1/3}$. Example 10.1.2: Price breaks The cultural committee of a student organization has arranged a special deal for tickets to a concert. The agreement is that the organization will purchase ten tickets at $20 each (regardless of the number of individual buyers). Additional tickets are available according to the following schedule: • 11-20,$18 each • 21-30, $16 each • 31-50,$15 each • 51-100, \$13 each If the number of purchasers is a random variable $X$, the total cost (in dollars) is a random quantity $Z = g(X)$ described by $g(X) = 200 + 18 I_{M1} (X) (X - 10) + (16 - 18) I_{M2} (X) (X - 20)$ $+ (15 - 16) I_{M3} (X) (X - 30) + (13 - 15) I_{M4} (X) (X - 50)$ where $M1 = [10, \infty)$, $M2 = [20, \infty)$, $M3 = [30, \infty)$, $M4 = [50, \infty)$ The function rule is more complicated than in Example 10.1.1, but the essential problem is the same. The problem If $X$ is a random variable, then $Z = g(X)$ is a new random variable. Suppose we have the distribution for $X$. How can we determine $P(Z \in M)$, the probability $Z$ takes a value in the set $M$? An approach to a solution We consider two equivalent approaches To find $P(X \in M)$. 1. Mapping approach. Simply find the amount of probability mass mapped into the set $M$ by the random variable $X$. • In the absolutely continuous case, calculate $\int_{M} f_X$. • In the discrete case, identify those values $t_i$ of $X$ which are in the set $M$ and add the associated probabilities. 2. Discrete alternative. Consider each value $t_i$ of $X$. Select those which meet the defining conditions for $M$ and add the associated probabilities. This is the approach we use in the MATLAB calculations. Note that it is not necessary to describe geometrically the set $M$; merely use the defining conditions. To find $P(g(X) \in M)$. 1. Mapping approach. Determine the set $N$ of all those t which are mapped into $M$ by the function $g$. Now if $X(\omega) \in N$, then $g(X(\omega)) \in M$, and if $g(X(\omega)) \in M$, then $X(\omega) \in N$. Hence $\{\omega: g(X(\omega)) \in M\} = \{\omega: X(\omega) \in N\}$ Since these are the same event, they must have the same probability. Once $N$ is identified, determine $P(X \in N)$ in the usual manner (see part a, above). • Discrete alternative. For each possible value $t_i$ of $X$, determine whether $g(t_i)$ meets the defining condition for $M$. Select those $t_i$ which do and add the associated probabilities. — □ Remark. 
The set $N$ in the mapping approach is called the inverse image $N = g^{-1} (M)$. Example 10.1.3: A discrete example Suppose $X$ has values -2, 0, 1, 3, 6, with respective probabilities 0.2, 0.1, 0.2, 0.3, 0.2. Consider $Z = g(X) = (X + 1) (X - 4)$. Determine $P(Z > 0)$. Solution First solution. The mapping approach: $g(t) = (t + 1) (t - 4)$. $N = \{t: g(t) > 0\}$ is the set of points to the left of –1 or to the right of 4. The $X$-values –2 and 6 lie in this set. Hence $P(g(X) > 0) = P(X = -2) + P(X = 6) = 0.2 + 0.2 = 0.4$ Second solution. The discrete alternative X = -2 0 1 3 6 PX = 0.2 0.1 0.2 0.3 0.2 Z = 6 -4 -6 -4 14 Z > 0 1 0 0 0 1 Picking out and adding the indicated probabilities, we have $P(Z > 0) = 0.2 + 0.2 = 0.4$ In this case (and often for “hand calculations”) the mapping approach requires less calculation. However, for MATLAB calculations (as we show below), the discrete alternative is more readily implemented. Example 10.1.4. An absolutely continuous example Suppose $X$ ~ uniform [–3, 7]. Then $f_X(t) = 0.1$, $-3 \le t \le 7$ (and zero elsewhere). Let $Z = g(X) = (X + 1) (X - 4)$. Determine $P(Z > 0)$. Solution First we determine $N = \{t: g(t) > 0\}$. As in Example 10.1.3, $g(t) = (t + 1) (t - 4) > 0$ for $t < -1$ or $t > 4$. Because of the uniform distribution, the integral of the density over any subinterval of $[-3, 7]$ is 0.1 times the length of that subinterval. Thus, the desired probability is $P(g(X) > 0) = 0.1 [(-1 - (-3)) + (7 - 4)] = 0.5$ We consider, next, some important examples. Example 10.1.5: The normal distribution and standardized normal distribution To show that if $X$ ~ $N(\mu, \sigma^2)$ then $Z = g(X) = \dfrac{X - \mu}{\sigma}$ ~ $N(0, 1)$ VERIFICATION We wish to show the density function for $Z$ is $\varphi (t) = \dfrac{1}{\sqrt{2\pi}} e^{-t^2/2}$ Now $g(t) = \dfrac{t - \mu} {\sigma} \le v$ iff $t \le \sigma v + \mu$ Hence, for given $M = (-\infty, v]$ the inverse image is $N = (-\infty, \sigma v + \mu]$, so that $F_Z (v) = P(Z \le v) = P(Z \in M) = P(X \in N) = P(X \le \sigma v + \mu) = F_X (\sigma v + \mu)$ Since the density is the derivative of the distribution function, $f_Z(v) = F_{Z}^{'} (v) = \dfrac{d}{dv} F_X (\sigma v + \mu) = \sigma F_{X}^{'} (\sigma v + \mu) = \sigma f_X (\sigma v + \mu)$ Thus $f_Z (v) = \dfrac{\sigma}{\sigma \sqrt{2\pi}} \text{exp} [-\dfrac{1}{2} (\dfrac{\sigma v + \mu - \mu}{\sigma})^2] = \dfrac{1}{\sqrt{2\pi}} e^{-v^2/2} = \varphi(v)$ We conclude that $Z$ ~ $N(0, 1)$. Example 10.1.6: Affine functions Suppose $X$ has distribution function $F_X$. If it is absolutely continuous, the corresponding density is $f_X$. Consider $Z = aX + b$. Here $g(t) = at + b$, an affine function (linear plus a constant). Determine the distribution function for $Z$ (and the density in the absolutely continuous case). Solution $F_Z (v) = P(Z \le v) = P(aX + b \le v)$ There are two cases • $a > 0$: $F_Z (v) = P(X \le \dfrac{v - b}{a}) = F_X (\dfrac{v - b}{a})$ • $a < 0$: $F_Z (v) = P(X \ge \dfrac{v - b}{a}) = P(X > \dfrac{v - b}{a}) + P(X = \dfrac{v - b}{a})$ So that $F_Z (v) = 1 - F_X (\dfrac{v - b}{a}) + P(X = \dfrac{v - b}{a})$ For the absolutely continuous case, $P(X = \dfrac{v - b}{a}) = 0$, and by differentiation • for $a > 0$ $f_Z (v) = \dfrac{1}{a} f_X (\dfrac{v - b}{a})$ • for $a < 0$ $f_Z (v) = -\dfrac{1}{a} f_X (\dfrac{v - b}{a})$ Since for $a < 0$, $-a = |a|$, the two cases may be combined into one formula. $f_Z (v) = \dfrac{1}{|a|} f_X (\dfrac{v-b}{a})$ Example 10.1.7: Completion of normal and standardized normal relationship Suppose $Z$ ~ $N(0, 1)$.
Show that $X = \sigma Z + \mu$ ($\sigma > 0$) is $N(\mu, \sigma^2)$. VERIFICATION Use of the result of Example 10.1.6 on affine functions shows that $f_{X} (t) = \dfrac{1}{\sigma} \varphi (\dfrac{t - \mu}{\sigma}) = \dfrac{1}{\sigma \sqrt{2\pi}} \text{exp} [-\dfrac{1}{2} (\dfrac{t - \mu}{\sigma})^2]$ Example 10.1.8: Fractional power of a nonnegative random variable Suppose $X \ge 0$ and $Z = g(X) = X^{1/a}$ for $a > 1$. Since for $t \ge 0$, $t^{1/a}$ is increasing, we have $0 \le t^{1/a} \le v$ iff $0 \le t \le v^{a}$. Thus $F_Z (v) = P(Z \le v) = P(X \le v^{a}) = F_X (v^{a})$ In the absolutely continuous case $f_Z (v) = F_{Z}^{'} (v) = f_X (v^{a}) a v^{a - 1}$ Example 10.1.9: Fractional power of an exponentially distributed random variable Suppose $X$ ~ exponential ($\lambda$). Then $Z = X^{1/a}$ ~ Weibull $(a, \lambda, 0)$. According to the result of Example 10.1.8, $F_Z(t) = F_X (t^{a}) = 1 - e^{-\lambda t^{a}}$ which is the distribution function for $Z$ ~ Weibull $(a, \lambda, 0)$. Example 10.1.10: A simple approximation as a function of X If $X$ is a random variable, a simple function approximation may be constructed (see Distribution Approximations). We limit our discussion to the bounded case, in which the range of $X$ is limited to a bounded interval $I = [a, b]$. Suppose $I$ is partitioned into $n$ subintervals by points $t_i$, $1 \le i \le n - 1$, with $a = t_0$ and $b = t_n$. Let $M_i = [t_{i - 1}, t_i)$ be the $i$th subinterval, $1 \le i \le n - 1$, and $M_n = [t_{n - 1}, t_n]$. Let $E_i = X^{-1} (M_i)$ be the set of points mapped into $M_i$ by $X$. Then the $E_i$ form a partition of the basic space $\Omega$. For the given subdivision, we form a simple random variable $X_s$ as follows. In each subinterval, pick a point $s_i$, $t_{i - 1} \le s_i < t_i$. The simple random variable $X_s = \sum_{i = 1}^{n} s_i I_{E_i}$ approximates $X$ to within the length of the largest subinterval $M_i$. Now $I_{E_i} = I_{M_i} (X)$, since $I_{E_i} (\omega) = 1$ iff $X(\omega) \in M_i$ iff $I_{M_i} (X(\omega)) = 1$. We may thus write $X_s = \sum_{i = 1}^{n} s_i I_{M_i} (X)$, a function of $X$. Use of MATLAB on simple random variables For simple random variables, we use the discrete alternative approach, since this may be implemented easily with MATLAB. Suppose the distribution for $X$ is expressed in the row vectors $X$ and $PX$. • We perform array operations on vector $X$ to obtain $G = [g(t_1) g(t_2) \cdot\cdot\cdot g(t_n)]$ • We use relational and logical operations on $G$ to obtain a matrix $M$ which has ones for those $t_i$ (values of $X$) such that $g(t_i)$ satisfies the desired condition (and zeros elsewhere). • The zero-one matrix $M$ is used to select the corresponding $p_i = P(X = t_i)$ and sum them by taking the dot product of $M$ and $PX$.
Example 10.1.11: Basic calculations for a function of a simple random variable X = -5:10; % Values of X PX = ibinom(15,0.6,0:15); % Probabilities for X G = (X + 6).*(X - 1).*(X - 8); % Array operations on X matrix to get G = g(X) M = (G > - 100)&(G < 130); % Relational and logical operations on G PM = M*PX' % Sum of probabilities for selected values PM = 0.4800 disp([X;G;M;PX]') % Display of various matrices (as columns) -5.0000 78.0000 1.0000 0.0000 -4.0000 120.0000 1.0000 0.0000 -3.0000 132.0000 0 0.0003 -2.0000 120.0000 1.0000 0.0016 -1.0000 90.0000 1.0000 0.0074 0 48.0000 1.0000 0.0245 1.0000 0 1.0000 0.0612 2.0000 -48.0000 1.0000 0.1181 3.0000 -90.0000 1.0000 0.1771 4.0000 -120.0000 0 0.2066 5.0000 -132.0000 0 0.1859 6.0000 -120.0000 0 0.1268 7.0000 -78.0000 1.0000 0.0634 8.0000 0 1.0000 0.0219 9.0000 120.0000 1.0000 0.0047 10.0000 288.0000 0 0.0005 [Z,PZ] = csort(G,PX); % Sorting and consolidating to obtain disp([Z;PZ]') % the distribution for Z = g(X) -132.0000 0.1859 -120.0000 0.3334 -90.0000 0.1771 -78.0000 0.0634 -48.0000 0.1181 0 0.0832 48.0000 0.0245 78.0000 0.0000 90.0000 0.0074 120.0000 0.0064 132.0000 0.0003 288.0000 0.0005 P1 = (G<-120)*PX ' % Further calculation using G, PX P1 = 0.1859 p1 = (Z<-120)*PZ' % Alternate using Z, PZ p1 = 0.1859 Example 10.1.12 $X = 10 I_A + 18 I_B + 10 I_C$ with $\{A, B, C\}$ independent and $P =$ [0.60.30.5]. We calculate the distribution for $X$, then determine the distribution for $Z = X^{1/2} - X + 50$ c = [10 18 10 0]; pm = minprob(0.1*[6 3 5]); canonic Enter row vector of coefficients c Enter row vector of minterm probabilities pm Use row matrices X and PX for calculations Call for XDBN to view the distribution disp(XDBN) 0 0.1400 10.0000 0.3500 18.0000 0.0600 20.0000 0.2100 28.0000 0.1500 38.0000 0.0900 G = sqrt(X) - X + 50; % Formation of G matrix [Z,PZ] = csort(G,PX); % Sorts distinct values of g(X) disp([Z;PZ]') % consolidates probabilities 18.1644 0.0900 27.2915 0.1500 34.4721 0.2100 36.2426 0.0600 43.1623 0.3500 50.0000 0.1400 M = (Z < 20)|(Z >= 40) % Direct use of Z distribution M = 1 0 0 0 1 1 PZM = M*PZ' PZM = 0.5800 Remark. Note that with the m-function csort, we may name the output as desired. Example 10.1.13: Continuation of example 10.1.12, above. H = 2*X.^2 - 3*X + 1; [W,PW] = csort(H,PX) W = 1 171 595 741 1485 2775 PW = 0.1400 0.3500 0.0600 0.2100 0.1500 0.0900 Example 10.1.14: A discrete approximation Suppose $X$ has density function $f_X(t) = \dfrac{1}{2} (3t^2 + 2t)$ for $0 \le t \le 1$. Then $F_X (t) = \dfrac{1}{2} (t^3 + t^2)$. Let $Z = X^{1/2}$. We may use the approximation m-procedure tappr to obtain an approximate discrete distribution. Then we work with the approximating random variable as a simple random variable. Suppose we want $P(Z \le 0.8)$. Now $Z \le 0.8$ iff $X \le 0.8^2 = 0.64$. The desired probability may be calculated to be $P(Z \le 0.8) = F_X (0.64) = (0.64^3 + 0.64^2)/2 = 0.3359$ Using the approximation procedure, we have tappr Enter matrix [a b] of x-range endpoints [0 1] Enter number of x approximation points 200 Enter density as a function of t (3*t.^2 + 2*t)/2 Use row matrices X and PX as in the simple case G = X.^(1/2); M = G <= 0.8; PM = M*PX' PM = 0.3359 % Agrees quite closely with the theoretical
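The same approximation also yields the whole distribution for $Z$, which can be checked against the theoretical $F_Z (v) = F_X (v^2) = (v^6 + v^4)/2$ at several points. The following short follow-on sketch assumes the tappr run above has just been made, so that X and PX are still in the workspace.
G = X.^(1/2);
[Z,PZ] = csort(G,PX);          % approximate distribution for Z = X^(1/2)
v = [0.4 0.6 0.8];
FZ = zeros(1,3);
for i = 1:3
  FZ(i) = (Z<=v(i))*PZ';       % approximate P(Z <= v(i))
end
FT = (v.^6 + v.^4)/2;          % theoretical values F_X(v^2)
disp([v;FZ;FT]')               % compare approximate and theoretical values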
Introduction The general mapping approach for a single random variable and the discrete alternative extends to functions of more than one variable. It is convenient to consider the case of two random variables, considered jointly. Extensions to more than two random variables are made similarly, although the details are more complicated. The general approach extended to a pair Consider a pair $\{X, Y\}$ having joint distribution on the plane. The approach is analogous to that for a single random variable with distribution on the line. To find $P((X, Y) \in Q)$. 1. Mapping approach. Simply find the amount of probability mass mapped into the set $Q$ on the plane by the random vector $W = (X, Y)$. • In the absolutely continuous case, calculate $\int \int_Q f_{XY}$. • In the discrete case, identify those vector values $(t_i, u_j)$ of $(X, Y)$ which are in the set $Q$ and add the associated probabilities. 2. Discrete alternative. Consider each vector value $(t_i, u_j)$ of $(X, Y)$. Select those which meet the defining conditions for $Q$ and add the associated probabilities. This is the approach we use in the MATLAB calculations. It does not require that we describe geometrically the region $Q$. To find $P(g(X,Y) \in M)$. $g$ is real valued and $M$ is a subset the real line. 1. Mapping approach. Determine the set $Q$ of all those $(t, u)$ which are mapped into $M$ by the function $g$. Now $W(\omega) = (X(\omega), Y(\omega)) \in Q$ iff $g((X(\omega), Y(\omega)) \in M$ Hence $\{\omega: g(X(\omega), Y(\omega)) \in M\} = \{\omega: (X(\omega), Y(\omega)) \in Q\}$ Since these are the same event, they must have the same probability. Once $Q$ is identified on the plane, determine $P((X, Y) \in Q)$ in the usual manner (see part a, above). • Discrete alternative. For each possible vector value $(t_i, u_j)$ of $(X, Y)$, determine whether $g(t_i, u_j)$ meets the defining condition for $M$. Select those $(t_i, u_j)$ which do and add the associated probabilities. We illustrate the mapping approach in the absolutely continuous case. A key element in the approach is finding the set $Q$ on the plane such that $g(X, Y) \in M$ iff $(X, Y) \in Q$. The desired probability is obtained by integrating $f_{XY}$ over $Q$. Figure 10.2.1. Distribution for Example 10.2.15. Example 10.2.15. A numerical example The pair $\{X, Y\}$ has joint density $f_{XY} (t, u) = \dfrac{6}{37} (t + 2u)$ on the region bounded by $t = 0$, $t = 2$, $u = 0$, $u = \text{max} \{1, t\}$ (see Figure 1). Determine $P(Y \le X) = P(X - Y \ge 0)$. Here $g(t, u) = t - u$ and $M = [0, \infty)$. Now $Q = \{(t, u) : t - u \ge 0\} = \{(t, u) : u \le t \}$ which is the region on the plane on or below the line $u = t$. Examination of the figure shows that for this region, $f_{XY}$ is different from zero on the triangle bounded by $t = 2$, $u = 0$, and $u = t$. The desired probability is $P(Y \le X) = \int_{0}^{2} \int_{0}^{t} \dfrac{6}{37} (t + 2u) du\ dt = 32/37 \approx 0.8649$ Example 10.2.16. The density for the sum X+Y Suppose the pair $\{X, Y\}$ has joint density $f_{XY}$. Determine the density for $Z = X + Y$ Solution $F_Z (v) = P(X + Y \le v) = P((X, Y) \in Q_v)$ where $Q_v = \{(t, u) : t + u \le v\} = \{(t, u): u \le v - t\}$ For any fixed $v$, the region $Q_v$ is the portion of the plane on or below the line $u = v - t$ (see Figure 10.2.2). 
Thus $F_Z (v) = \int \int_{Q} f_{XY} = \int_{-\infty}^{\infty} \int_{-\infty}^{v - t} f_{XY} (t, u) du\ dt$ Differentiating with the aid of the fundamental theorem of calculus, we get $f_Z (v) = \int_{-\infty}^{\infty} f_{XY} (t, v - t)\ dt$ This integral expression is known as a convolution integral. Figure 10.2.2. Region $Q_v$ for $X + Y \le v$. Example 10.2.17. Sum of joint uniform random variables Suppose the pair $\{X, Y\}$ has joint uniform density on the unit square $0 \le t \le 1$, $0 \le u \le 1$. Determine the density for $Z = X + Y$. Solution $F_Z (v)$ is the probability in the region $Q_v: u \le v - t$. Now $P_{XY} (Q_v) = 1 - P_{XY} (Q_{v}^{c})$, where the complementary set $Q_{v}^{c}$ is the set of points above the line. As Figure 10.2.3 shows, for $0 \le v \le 1$, the part of $Q_v$ which has probability mass is the lower shaded triangular region on the figure, which has area (and hence probability) $v^2/2$. For $v > 1$, the complementary region $Q_{v}^{c}$ is the upper shaded region. It has area $(2 - v)^2/2$, so that in this case, $P_{XY} (Q_v) = 1 - (2 - v)^2/2$. Thus, $F_Z (v) = \dfrac{v^2}{2}$ for $0 \le v \le 1$ and $F_Z (v) = 1 - \dfrac{(2 - v)^2}{2}$ for $1 \le v \le 2$ Differentiation shows that $Z$ has the symmetric triangular distribution on [0, 2], since $f_Z (v) = v$ for $0 \le v \le 1$ and $f_Z(v) = (2 - v)$ for $1 \le v \le 2$ With the use of indicator functions, these may be combined into a single expression $f_Z (v) = I_{[0, 1]} (v)\ v + I_{(1, 2]} (v) (2 - v)$ Figure 10.2.3. Geometry for sum of joint uniform random variables. ALTERNATE Solution Since $f_{XY} (t, u) = I_{[0, 1]} (t) I_{[0, 1]} (u)$, we have $f_{XY} (t, v - t) = I_{[0, 1]} (t) I_{[0, 1]} (v - t)$. Now $0 \le v - t \le 1$ iff $v - 1 \le t \le v$, so that $f_{XY} (t, v - t) = I_{[0, 1]} (v) I_{[0, v]} (t) + I_{(1, 2]} (v) I_{[v - 1, 1]} (t)$ Integration with respect to $t$ gives the result above. Independence of functions of independent random variables Suppose $\{X, Y\}$ is an independent pair. Let $Z = g(X)$, $W = h(Y)$. Since $Z^{-1} (M) = X^{-1} [g^{-1} (M)]$ and $W^{-1} (N) = Y^{-1} [h^{-1} (N)]$, the pair $\{Z^{-1} (M), W^{-1} (N)\}$ is independent for each pair $\{M, N\}$. Thus, the pair $\{Z, W\}$ is independent. If $\{X, Y\}$ is an independent pair and $Z = g(X)$, $W = h(Y)$, then the pair $\{Z, W\}$ is independent. However, if $Z = g(X, Y)$ and $W = h(X, Y)$, then in general $\{Z, W\}$ is not independent. This is illustrated for simple random variables with the aid of the m-procedure jointzw at the end of the next section. Example 10.2.18. Independence of simple approximations to an independent pair Suppose $\{X, Y\}$ is an independent pair with simple approximations $X_s$ and $Y_s$ as described in Distribution Approximations. $X_s = \sum_{i = 1}^{n} t_i I_{E_i} = \sum_{i = 1}^{n} t_i I_{M_i} (X)$ and $Y_s = \sum_{j = 1}^{m} u_j I_{F_j} = \sum_{j = 1}^{m} u_j I_{N_j} (Y)$ As functions of $X$ and $Y$, respectively, the pair $\{X_s, Y_s\}$ is independent. Also each pair $\{I_{M_i}(X), I_{N_j} (Y)\}$ is independent. Use of MATLAB on pairs of simple random variables In the single-variable case, we use array operations on the values of $X$ to determine a matrix of values of $g(X)$. In the two-variable case, we must use array operations on the calculating matrices $t$ and $u$ to obtain a matrix $G$ whose elements are $g(t_i, u_j)$. To obtain the distribution for $Z = g(X, Y)$, we may use the m-function csort on $G$ and the joint probability matrix $P$.
A first step, then, is the use of jcalc or icalc to set up the joint distribution and the calculating matrices. This is illustrated in the following example. Example 10.2.19. % file jdemo3.m % data for joint simple distribution X = [-4 -2 0 1 3]; Y = [0 1 2 4]; P = [0.0132 0.0198 0.0297 0.0209 0.0264; 0.0372 0.0558 0.0837 0.0589 0.0744; 0.0516 0.0774 0.1161 0.0817 0.1032; 0.0180 0.0270 0.0405 0.0285 0.0360]; jdemo3 % Call for data jcalc % Set up of calculating matrices t, u. Enter JOINT PROBABILITIES (as on the plane) P Enter row matrix of VALUES of X X Enter row matrix of VALUES of Y Y Use array operations on matrices X, Y, PX, PY, t, u, and P G = t.^2 -3*u; % Formation of G = [g(ti,uj)] M = G >= 1; % Calculation using the XY distribution PM = total(M.*P) % Alternately, use total((G>=1).*P) PM = 0.4665 [Z,PZ] = csort(G,P); PM = (Z>=1)*PZ' % Calculation using the Z distribution PM = 0.4665 disp([Z;PZ]') % Display of the Z distribution -12.0000 0.0297 -11.0000 0.0209 -8.0000 0.0198 -6.0000 0.0837 -5.0000 0.0589 -3.0000 0.1425 -2.0000 0.1375 0 0.0405 1.0000 0.1059 3.0000 0.0744 4.0000 0.0402 6.0000 0.1032 9.0000 0.0360 10.0000 0.0372 13.0000 0.0516 16.0000 0.0180 We extend the example above by considering a function $W = h(X, Y)$ which has a composite definition. Example 10.2.20. Continuation of example 10.2.19 Let $W = \begin{cases} X & \text{ for } X + Y \ge 1 \ X^2 + Y^2 & \text{ for } X + Y < 1 \end{cases}$ Determine the distribution for $W$ H = t.*(t+u>=1) + (t.^2 + u.^2).*(t+u<1); % Specification of h(t,u) [W,PW] = csort(H,P); % Distribution for W = h(X,Y) disp([W;PW]') -2.0000 0.0198 0 0.2700 1.0000 0.1900 3.0000 0.2400 4.0000 0.0270 5.0000 0.0774 8.0000 0.0558 16.0000 0.0180 17.0000 0.0516 20.0000 0.0372 32.0000 0.0132 ddbn % Plot of distribution function Enter row matrix of values W Enter row matrix of probabilities PW print % See Figure 10.2.4 Figure 10.2.4. Distribution for random variable $W$ in Example 10.2.20. Joint distributions for two functions of $(X, Y)$ In previous treatments, we use csort to obtain the marginal distribution for a single function $Z = g(X, Y)$. It is often desirable to have the joint distribution for a pair $Z = g(X, Y)$ and $W = h(X, Y)$. As special cases, we may have $Z = X$ or $W = Y$. Suppose $Z$ has values [$z_1$ $z_2$ $\cdot\cdot\cdot$ $z_c$] and $W$ has calues [$w_1$ $w_2$ $\cdot\cdot\cdot$ $w_c$] The joint distribution requires the probability of each pair, $P(W = w_i, Z = z_j)$. Each such pair of values corresponds to a set of pairs of $X$ and $Y$ values. To determine the joint probability matrix $PZW$ for $(Z, W)$ arranged as on the plane, we assign to each position $(i, j)$ the probability $P(W = w_i, Z=z_j)$, with values of $W$ increasing upward. Each pair of ($W, Z$) values corresponds to one or more pairs of ($Y, X$) values. If we select and add the probabilities corresponding to the latter pairs, we have $P(W = w_i, Z = z_j)$. This may be accomplished as follows: Set up calculation matrices $t$ and $u$ as with jcalc. Use array arithmetic to determine the matrices of values $G = [g(t, u)]$ and $H = [h(t, u)]$. Use csort to determine the $Z$ and $W$ value matrices and the $PZ$ and $PW$ marginal probability matrices. 
For each pair $(w_i, z_j)$, use the MATLAB function find to determine the positions a for which (H==W(i))&(G==Z(j)) Assign to the ($i, j$) position in the joint probability matrix $PZW$ for ($Z, W$) the probability PZW(i, j) = total (P(a)) We first examine the basic calculations, which are then implemented in the m-procedure jointzw. Example 10.2.21. Illustration of the basic joint calculations % file jdemo7.m P = [0.061 0.030 0.060 0.027 0.009; 0.015 0.001 0.048 0.058 0.013; 0.040 0.054 0.012 0.004 0.013; 0.032 0.029 0.026 0.023 0.039; 0.058 0.040 0.061 0.053 0.018; 0.050 0.052 0.060 0.001 0.013]; X = -2:2; Y = -2:3; jdemo7 % Call for data in jdemo7.m jcalc % Used to set up calculation matrices t, u - - - - - - - - - - H = u.^2 % Matrix of values for W = h(X,Y) H = 9 9 9 9 9 4 4 4 4 4 1 1 1 1 1 0 0 0 0 0 1 1 1 1 1 4 4 4 4 4 G = abs(t) % Matrix of values for Z = g(X,Y) G = 2 1 0 1 2 2 1 0 1 2 2 1 0 1 2 2 1 0 1 2 2 1 0 1 2 2 1 0 1 2 [W,PW] = csort(H,P) % Determination of marginal for W W = 0 1 4 9 PW = 0.1490 0.3530 0.3110 0.1870 [Z,PZ] = csort(G,P) % Determination of marginal for Z Z = 0 1 2 PZ = 0.2670 0.3720 0.3610 r = W(3) % Third value for W r = 4 s = Z(2) % Second value for Z s = 1 To determine $P(W = 4, Z = 1)$, we need to determine the ($t, u$) positions for which this pair of ($W, Z$) values is taken on. By inspection, we find these to be (2,2), (6,2), (2,4), and (6,4). Then $P(W = 4, Z = 1)$ is the total probability at these positions. This is 0.001 + 0.052 + 0.058 + 0.001 = 0.112. We put this probability in the joint probability matrix $PZW$ at the $W = 4, Z = 1$ position. This may be achieved by MATLAB with the following operations. [i,j] = find((H==W(3))&(G==Z(2))); % Location of (t,u) positions disp([i j]) % Optional display of positions 2 2 6 2 2 4 6 4 a = find((H==W(3))&(G==Z(2))); % Location in more convenient form P0 = zeros(size(P)); % Setup of zero matrix P0(a) = P(a) % Display of designated probabilities in P P0 = 0 0 0 0 0 0 0.0010 0 0.0580 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0.0520 0 0.0010 0 PZW = zeros(length(W),length(Z)) % Initialization of PZW matrix PZW(3,2) = total(P(a)) % Assignment to PZW matrix with PZW = 0 0 0 % W increasing downward 0 0 0 0 0.1120 0 0 0 0 PZW = flipud(PZW) % Assignment with W increasing upward PZW = 0 0 0 0 0.1120 0 0 0 0 0 0 0 The procedure jointzw carries out this operation for each possible pair of $W$ and $Z$ values (with the flipud operation coming only after all individual assignments are made). example 10.2.22. 
joint distribution for z = g(x,y) = ||x| - y| and w = h(x, y) = |xy| % file jdemo3.m data for joint simple distribution X = [-4 -2 0 1 3]; Y = [0 1 2 4]; P = [0.0132 0.0198 0.0297 0.0209 0.0264; 0.0372 0.0558 0.0837 0.0589 0.0744; 0.0516 0.0774 0.1161 0.0817 0.1032; 0.0180 0.0270 0.0405 0.0285 0.0360]; jdemo3 % Call for data jointzw % Call for m-program Enter joint prob for (X,Y): P Enter values for X: X Enter values for Y: Y Enter expression for g(t,u): abs(abs(t)-u) Enter expression for h(t,u): abs(t.*u) Use array operations on Z, W, PZ, PW, v, w, PZW disp(PZW) 0.0132 0 0 0 0 0 0.0264 0 0 0 0 0 0.0570 0 0 0 0.0744 0 0 0 0.0558 0 0 0.0725 0 0 0 0.1032 0 0 0 0.1363 0 0 0 0.0817 0 0 0 0 0.0405 0.1446 0.1107 0.0360 0.0477 EZ = total(v.*PZW) EZ = 1.4398 ez = Z*PZ' % Alternate, using marginal dbn ez = 1.4398 EW = total(w.*PZW) EW = 2.6075 ew = W*PW' % Alternate, using marginal dbn ew = 2.6075 M = v > w; % P(Z>W) PM = total(M.*PZW) PM = 0.3390 At noted in the previous section, if $\{X, Y\}$ is an independent pair and $Z = g(X)$, $W = h(Y)$, then the pair {$Z, W$} is independent. However, if $Z = g(X, Y)$ and $W = h(X, Y)$, then in general the pair {$Z, W$} is not independent. We may illustrate this with the aid of the m-procedure jointzw Example 10.2.23. Functions of independent random variables jdemo3 itest Enter matrix of joint probabilities P The pair {X,Y} is independent % The pair {X,Y} is independent jointzw Enter joint prob for (X,Y): P Enter values for X: X Enter values for Y: Y Enter expression for g(t,u): t.^2 - 3*t % Z = g(X) Enter expression for h(t,u): abs(u) + 3 % W = h(Y) Use array operations on Z, W, PZ, PW, v, w, PZW itest Enter matrix of joint probabilities PZW The pair {X,Y} is independent % The pair {g(X),h(Y)} is independent jdemo3 % Refresh data jointzw Enter joint prob for (X,Y): P Enter values for X: X Enter values for Y: Y Enter expression for g(t,u): t+u % Z = g(X,Y) Enter expression for h(t,u): t.*u % W = h(X,Y) Use array operations on Z, W, PZ, PW, v, w, PZW itest Enter matrix of joint probabilities PZW The pair {X,Y} is NOT independent % The pair {g(X,Y),h(X,Y)} is not indep To see where the product rule fails, call for D % Fails for all pairs Absolutely continuous case: analysis and approximation As in the analysis Joint Distributions, we may set up a simple approximation to the joint distribution and proceed as for simple random variables. In this section, we solve several examples analytically, then obtain simple approximations. Example 10.2.24. Distribution for a product Suppose the pair $\{X, Y\}$ has joint density $f_{XY}$. Let $Z = XY$. Determine $Q_v$ such that $P(Z \le v) = P((X, Y) \in Q_v)$. Figure 10.2.5 Solution $Q_v = \{(t, u) : tu \le v\} = \{(t, u): t > 0, u \le v/t\} \bigvee \{(t, u) : t < 0, u \ge v/t\}\}$ Figure 10.2.6. Product of $X, Y$ with uniform joint distribution on the unit square. Example 10.2.25. $\{X, Y\}$ ~ uniform on unit square $f_{XY} (t, u) = 1$. Then (see Figure 10.2.6) $P(XY \le v) = \int \int_{Q_v} 1 du\ dt$ where $Q_v = \{(t, u): 0 \le t \le 1, 0 \le u \le \text{min } \{1, v/t\}\}$ Integration shows $F_Z (v) = P(XY \le v) = v(1 - \text{ln } (v))$ so that $f_Z (v) = - \text{ln } (v) = \text{ln } (1/v)$, $0 < v \le 1$ For $v = 0.5$, $F_Z (0.5) = 0.8466$. % Note that although f = 1, it must be expressed in terms of t, u. 
tuappr Enter matrix [a b] of X-range endpoints [0 1] Enter matrix [c d] of Y-range endpoints [0 1] Enter number of X approximation points 200 Enter number of Y approximation points 200 Enter expression for joint density (u>=0)&(t>=0) Use array operations on X, Y, PX, PY, t, u, and P G = t.*u; [Z,PZ] = csort(G,P); p = (Z<=0.5)*PZ' p = 0.8465 % Theoretical value 0.8466, above Example 10.2.26. Continuation of example 5 from "Random Vectors and Joint Distributions" The pair $\{X, Y\}$ has joint density $f_{XY} (t, u) = \dfrac{6}{37} (t + 2u)$ on the region bounded by $t = 0$, $t = 2$ and $u = \text{max } \{1, t\}$(see Figure 7). Let $Z = XY$. Determine $P(Z \le 1)$. Figure 10.2.7. Area of integration for Example 10.2.26 . Analytic Solution $P(Z \le 1) = P((X, Y) \in Q)$ where $Q = \{(t, u): u \le 1/t\}$ Reference to Figure 10.2.7 shows that $P((X, Y) \in Q = \dfrac{6}{37} \int_{0}^{1} \int_{0}^{1} (t + 2u) du\ dt + \dfrac{6}{37} \int_{1}^{2} \int_{0}^{1/t} (t + 2u) du\ dt = 9/37 + 9/37 = 18/37 \approx 0.4865$ APPROXIMATE Solution tuappr Enter matrix [a b] of X-range endpoints [0 2] Enter matrix [c d] of Y-range endpoints [0 2] Enter number of X approximation points 300 Enter number of Y approximation points 300 Enter expression for joint density (6/37)*(t + 2*u).*(u<=max(t,1)) Use array operations on X, Y, PX, PY, t, u, and P Q = t.*u<=1; PQ = total(Q.*P) PQ = 0.4853 % Theoretical value 0.4865, above G = t.*u; % Alternate, using the distribution for Z [Z,PZ] = csort(G,P); PZ1 = (Z<=1)*PZ' PZ1 = 0.4853 In the following example, the function $g$ has a compound definition. That is, it has a different rule for different parts of the plane. Figure 10.2.8. Regions for $P(Z \le 1/2)$ in Example 10.2.27. Example 10.2.27. A compound function The pair $\{X, Y\}$ has joint density $f_{XY} (t, u) = \dfrac{2}{3} (t + 2u)$ on the unit square $0 \le t \le 1$, $0 \le u \le 1$. $Z = \begin{cases} X & \text{for } X^2 - Y \ge 0 \ X + Y & \text{for } X^2 - Y < 0 \end{cases} = I_Q (X, Y) Y + I_{Q^c} (X, Y) (X + Y)$ for $Q = \{(t, u): u \le t^2\}$. Determine $P(Z <= 0.5)$. Analytical Solution $P(Z \le 1/2) = P(Y \le 1/2, Y \le X^2) + P(X + Y \le 1/2, Y > X^2) = P((X, Y) \in Q_A \bigvee Q_B)$ where $Q_A = \{(t, u) : u \le 1/2, u \le t^2\}$ and $Q_B = \{(t, u): t + u \le 1/2, u > t^2\}$. Reference to Figure 10.2.8 shows that this is the part of the unit square for which $u \le \text{min } (\text{max } (1/2 - t, t^2), 1/2)$. We may break up the integral into three parts. Let $1/2 - t_1 = t_1^2$ and $t_2^2 = 1/2$. Then $P(Z \le 1/2) = \dfrac{2}{3} \int_{0}^{t_1} \int_{0}^{1/2 - t} (t + 2u) du\ dt + \dfrac{2}{3} \int_{t_1}^{t_2} \int_{0}^{t^2} (t + 2u) du\ dt + \dfrac{2}{3} \int_{t_2}^{1} \int_{0}^{1/2} (t + 2u) du \ dt = 0.2322$ APPROXIMATE Solution tuappr Enter matrix [a b] of X-range endpoints [0 1] Enter matrix [c d] of Y-range endpoints [0 1] Enter number of X approximation points 200 Enter number of Y approximation points 200 Enter expression for joint density (2/3)*(t + 2*u) Use array operations on X, Y, PX, PY, t, u, and P Q = u <= t.^2; G = u.*Q + (t + u).*(1-Q); prob = total((G<=1/2).*P) prob = 0.2328 % Theoretical is 0.2322, above The setup of the integrals involves careful attention to the geometry of the system. Once set up, the evaluation is elementary but tedious. On the other hand, the approximation proceeds in a straightforward manner from the normal description of the problem. 
The numerical result agrees closely with the theoretical value, and the accuracy could be improved by using more subdivision points.
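For readers who do not have the textbook m-procedures at hand, the same kind of grid approximation can be set up with a few lines of standard MATLAB. The sketch below redoes the calculation of $P(Z \le 1/2)$ for Example 10.2.27; the grid size of 200 points in each direction and the variable names are arbitrary choices for this illustration, not part of the original procedures.

n  = 200;                         % number of approximation points in each direction
dt = 1/n; du = 1/n;               % subinterval lengths on the unit square
[t,u] = meshgrid(dt/2:dt:1-dt/2, du/2:du:1-du/2);   % grid of midpoint values
P = (2/3)*(t + 2*u)*dt*du;        % approximate probability mass in each cell
Q = u <= t.^2;                    % indicator of the region u <= t^2
G = u.*Q + (t + u).*(1 - Q);      % values of Z = g(X,Y) on the grid
prob = sum(sum((G <= 1/2).*P))    % should be close to the theoretical 0.2322

This is essentially the setup that tuappr and jcalc appear to provide: matrices t, u of grid values and a matrix P of cell probabilities, on which ordinary array operations can then be performed.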
The Quantile Function The quantile function for a probability distribution has many uses in both the theory and application of probability. If $F$ is a probability distribution function, the quantile function may be used to “construct” a random variable having $F$ as its distributions function. This fact serves as the basis of a method of simulating the “sampling” from an arbitrary distribution with the aid of a random number generator. Also, given any finite class $\{X_i: 1 \le i \le n\}$ of random variables, an independent class $\{Y_i: 1 \le i \le n\}$ may be constructed, with each $X_i$ and associated $Y_i$ having the same (marginal) distribution. Quantile functions for simple random variables may be used to obtain an important Poisson approximation theorem (which we do not develop in this work). The quantile function is used to derive a number of useful special forms for mathematical expectation. General concept—properties, and examples If $F$ is a probability distribution function, the associated quantile function $Q$ is essentially an inverse of $F$. The quantile function is defined on the unit interval (0, 1). For $F$ continuous and strictly increasing at $t$, then $Q(u) = t$ iff $F(t) = u$. Thus, if $u$ is a probability value, $t = Q(u)$ is the value of $t$ for which $P(X \le t) = u$. Example 10.3.28: The Weibull distribution (3, 2, 0) $u = F(t) = 1 - e^{-3t^2}$ $t \ge 0$ $\Rightarrow$ $t = Q(u) = \sqrt{-\text{ln } (1 - u)/3}$ Example 10.3.29:  The Normal Distribution The m-function norminv, based on the MATLAB function erfinv (inverse error function), calculates values of $Q$ for the normal distribution. The restriction to the continuous case is not essential. We consider a general definition which applies to any probability distribution function. Definition: If $F$ is a function having the properties of a probability distribution function, then the quantile function for $F$ is given by $Q(u) = \text{inf } \{t: F(t) \ge u\}$ $\forall u \in (0, 1)$ We note • If $F(t^{*}) \ge u^{*}$, then $t^{*} \ge \text{inf } \{t: F(t) \ge u^{*}\} = Q(u^{*})$ • If $F(t^{*}) < u^{*}$, then $t^{*} < \text{inf } \{t: F(t) \ge u^{*}\} = Q(u^{*})$ Hence, we have the important property: (Q1) $Q(u) \le t$ iff $u \le F(t)$ $\forall u \in (0, 1)$ The property (Q1) implies the following important property: (Q2)If $U$~ uniform (0, 1), then $X = Q(U)$ has distribution function $F_X = F$. To see this, note that $F_X(t) = P(Q(U) \le t] = P[U \le F(t)] = F(t)$. Property (Q2) implies that if $F$ is any distribution function, with quantile function $Q$, then the random variable $X = Q(U)$, with $U$ uniformly distributed on (0, 1), has distribution function $F$. Example 10.3.30:  Independent classes with prescribed distributions Suppose $\{X_i: 1 \le i \le n\}$ is an arbitrary class of random variables with corresponding distribution functions $\{F_i : 1 \le i \le n\}$. Let $\{Q_i: 1 \le i \le n\}$ be the respective quantile functions. There is always an independent class $\{U_i: 1 \le i \le n\}$ iid uniform (0, 1) (marginals for the joint uniform distribution on the unit hypercube with sides (0, 1)). Then the random variables $Y_i = Q_i (U_i)$, $1 \le i \le n$, form an independent class with the same marginals as the $X_i$. Several other important properties of the quantile function may be established. Figure 10.3.9. Graph of quantile function from graph of distribution function, $Q$ is left-continuous, whereas $F$ is right-continuous. 
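Property (Q2) is the basis of simulation by the quantile (inverse distribution function) method: apply $Q$ to uniformly distributed random numbers. As a quick illustration, the quantile function obtained in Example 10.3.28 can be used directly; the sample size and the check point $t = 0.5$ below are arbitrary choices for this sketch.

U = rand(1,10000);               % U ~ uniform (0,1)
X = sqrt(-log(1 - U)/3);         % X = Q(U), the quantile function of Example 10.3.28
t = 0.5;
relfreq = mean(X <= t)           % relative frequency of {X <= t}; varies with the sample
FX = 1 - exp(-3*t^2)             % theoretical F_X(0.5), approximately 0.5276

By (Q2), the relative frequency should be close to the theoretical value.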
If jumps are represented by vertical line segments, construction of the graph of $u = Q(t)$ may be obtained by the following two step procedure: • Invert the entire figure (including axes), then • Rotate the resulting figure 90 degrees counterclockwise This is illustrated in Figure 10.3.9. If jumps are represented by vertical line segments, then jumps go into flat segments and flat segments go into vertical segments. If $X$ is discrete with probability $p_i$ at $t_i$, $1 \le i \le n$, then $F$ has jumps in the amount $p_i$ at each $t_i$ and is constant between. The quantile function is a left-continuous step function having value $t_i$ on the interval $(b_{i - 1}, b_i]$, where $b_0 = 0$ and $b_i = \sum_{j = 1}^{i} p_j$. This may be stated If $F(t_i) = b_i$, then $Q(u) = t_i$ for $F(t_{i - 1}) < u \le F(t_i)$ Example 10.2.31: Quantile function for a simple random variable Suppose simple random variable $X$ has distribution $X =$ [-2 0 1 3] $PX = [0.2 0.1 0.3 0.4] Figure 1 shows a plot of the distribution function \(F_X$. It is reflected in the horizontal axis then rotated counterclockwise to give the graph of $Q(u$ versus $u$. We use the analytic characterization above in developing a number of m-functions and m-procedures. m-procedures for a simple random variable The basis for quantile function calculations for a simple random variable is the formula above. This is implemented in the m-function dquant, which is used as an element of several simulation procedures. To plot the quantile function, we use dquanplot which employs the stairs function and plots $X$ vs the distribution function $FX$. The procedure dsample employs dquant to obtain a “sample” from a population with simple distribution and to calculate relative frequencies of the various values. Example 10.3.32: Simple Random Variable X = [-2.3 -1.1 3.3 5.4 7.1 9.8]; PX = 0.01*[18 15 23 19 13 12]; dquanplot Enter VALUES for X X Enter PROBABILITIES for X PX % See Figure 10.3.11 for plot of results rand('seed',0) % Reset random number generator for reference dsample Enter row matrix of values X Enter row matrix of probabilities PX Sample size n 10000 Value Prob Rel freq -2.3000 0.1800 0.1805 -1.1000 0.1500 0.1466 3.3000 0.2300 0.2320 5.4000 0.1900 0.1875 7.1000 0.1300 0.1333 9.8000 0.1200 0.1201 Sample average ex = 3.325 Population mean E[X] = 3.305 Sample variance = 16.32 Population variance Var[X] = 16.33 Sometimes it is desirable to know how many trials are required to reach a certain value, or one of a set of values. A pair of m-procedures are available for simulation of that problem. The first is called targetset. It calls for the population distribution and then for the designation of a “target set” of possible values. The second procedure, targetrun, calls for the number of repetitions of the experiment, and asks for the number of members of the target set to be reached. After the runs are made, various statistics on the runs are calculated and displayed. 
Example 10.3.33 X = [-1.3 0.2 3.7 5.5 7.3]; % Population values PX = [0.2 0.1 0.3 0.3 0.1]; % Population probabilities E = [-1.3 3.7]; % Set of target states targetset Enter population VALUES X Enter population PROBABILITIES PX The set of population values is -1.3000 0.2000 3.7000 5.5000 7.3000 Enter the set of target values E Call for targetrun rand('seed',0) % Seed set for possible comparison targetrun Enter the number of repetitions 1000 The target set is -1.3000 3.7000 Enter the number of target values to visit 2 The average completion time is 6.32 The standard deviation is 4.089 The minimum completion time is 2 The maximum completion time is 30 To view a detailed count, call for D. The first column shows the various completion times; the second column shows the numbers of trials yielding those times % Figure 10.6.4 shows the fraction of runs requiring t steps or less Figure 10.3.12. Fraction of runs requiring $t$ steps or less. m-procedures for distribution functions A procedure dfsetup utilizes the distribution function to set up an approximate simple distribution. The m-procedure quanplot is used to plot the quantile function. This procedure is essentially the same as dquanplot, except the ordinary plot function is used in the continuous case whereas the plotting function stairs is used in the discrete case. The m-procedure qsample is used to obtain a sample from the population. Since there are so many possible values, these are not displayed as in the discrete case. Example 10.3.34: Quantile function associated with a distribution function F = '0.4*(t + 1).*(t < 0) + (0.6 + 0.4*t).*(t >= 0)'; % String dfsetup Distribution function F is entered as a string variable, either defined previously or upon call Enter matrix [a b] of X-range endpoints [-1 1] Enter number of X approximation points 1000 Enter distribution function F as function of t F Distribution is in row matrices X and PX quanplot Enter row matrix of values X Enter row matrix of probabilities PX Probability increment h 0.01 % See Figure 10.3.13 for plot qsample Enter row matrix of X values X Enter row matrix of X probabilities PX Sample size n 1000 Sample average ex = -0.004146 Approximate population mean E(X) = -0.0004002 % Theoretical = 0 Sample variance vx = 0.25 Approximate population variance V(X) = 0.2664 m-procedures for density functions An m- procedure acsetup is used to obtain the simple approximate distribution. This is essentially the same as the procedure tuappr, except that the density function is entered as a string variable. Then the procedures quanplot and qsample are used as in the case of distribution functions. Example 10.3.35:  Quantile function associated with a density function acsetup Density f is entered as a string variable. either defined previously or upon call. Enter matrix [a b] of x-range endpoints [0 3] Enter number of x approximation points 1000 Enter density as a function of t '(t.^2).*(t<1) + (1- t/3).*(1<=t)' Distribution is in row matrices X and PX quanplot Enter row matrix of values X Enter row matrix of probabilities PX Probability increment h 0.01 % See Figure 10.3.14 for plot rand('seed',0) qsample Enter row matrix of values X Enter row matrix of probabilities PX Sample size n 1000 Sample average ex = 1.352 Approximate population mean E(X) = 1.361 % Theoretical = 49/36 = 1.3622 Sample variance vx = 0.3242 Approximate population variance V(X) = 0.3474 % Theoretical = 0.3474
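The m-procedures dfsetup and qsample automate the approximation, but for a distribution function with simple branches the quantile function can be written down and used directly. The following stand-alone sketch does this for the $F$ of Example 10.3.34; each branch is inverted according to the rule $Q(u) = \text{inf } \{t: F(t) \ge u\}$, so the jump of 0.2 at $t = 0$ becomes the flat segment $Q(u) = 0$ for $0.4 < u \le 0.6$. The sample size is an arbitrary choice.

U = rand(1,10000);                           % uniform (0,1) sample
X = (2.5*U - 1).*(U <= 0.4) ...              % inverts F(t) = 0.4*(t+1) on [-1,0)
    + 2.5*(U - 0.6).*(U > 0.6);              % inverts F(t) = 0.6 + 0.4*t on [0,1]
ex = mean(X)                                 % should be near the theoretical mean 0
p0 = mean(X == 0)                            % relative frequency of the value 0; near the jump size 0.2

Note that the flat segment of $Q$ is handled automatically: when $0.4 < U \le 0.6$ neither indicator is one, so $X = 0$.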
Exercise $1$ Suppose $X$ is a nonnegative, absolutely continuous random variable. Let $Z = g(X) = Ce^{-aX}$, where $a > 0$, $C > 0$. Then $0 < Z \le C$. Use properties of the exponential and natural log function to show that $F_Z (v) = 1 - F_X (- \dfrac{\text{In } (v/C)}{a})$ for $0 < v \le C$ Answer $Z = Ce^{-aX} \le v$ iff $e^{-aX} \le v/C$ iff $-aX \le \text{In } (v/C)$ iff $X \ge - \text{In } (v/C)/a$, so that $F_Z(v) = P(Z \le v) = P(X \ge -\text{In } (v/C)/a) = 1 - F_X (-\dfrac{\text{In } (v/C)}{a})$ Exercise $2$ Use the result of Exercise 10.4.1 to show that if $X$ ~ exponential $(\lambda)$, then $F_Z (v) = (\dfrac{v}{C})^{\lambda/a}$ $0 < v \le C$ Answer $F_Z (v) = 1 - [1- exp (-\dfrac{\lambda}{a} \cdot \text{In } (v/C))] = (\dfrac{v}{C})^{\lambda/a}$ Exercise $3$ Present value of future costs. Suppose money may be invested at an annual rate a, compounded continually. Then one dollar in hand now, has a value $e^{ax}$ at the end of $x$ years. Hence, one dollar spent $x$ years in the future has a present valuee$^{-ax}$. Suppose a device put into operation has time to failure (in years) $X$ ~ exponential ($\lambda$). If the cost of replacement at failure is $C$ dollars, then the present value of the replacement is $Z = Ce^{-aX}$. Suppose $\lambda = 1/10$, $a = 0.07$, and $C =$ $1000. 1. Use the result of Exercise 10.4.2. to determine the probability $Z \le 700, 500, 200$. 2. Use a discrete approximation for the exponential density to approximate the probabilities in part (a). Truncate $X$ at 1000 and use 10,000 approximation points. Answer $P(Z \le v) = (\dfrac{v}{1000})^{10/7}$ v = [700 500 200]; P = (v/1000).^(10/7) P = 0.6008 0.3715 0.1003 tappr Enter matrix [a b] of x-range endpoints [0 1000] Enter number of x approximation points 10000 Enter density as a function of t 0.1*exp(-t/10) Use row matrices X and PX as in the simple case G = 1000*exp(-0.07*t); PM1 = (G<=700)*PX' PM1 = 0.6005 PM2 = (G<=500)*PX' PM2 = 0.3716 PM3 = (G<=200)*PX' PM3 = 0.1003 Exercise $4$ Optimal stocking of merchandise. A merchant is planning for the Christmas season. He intends to stock m units of a certain item at a cost of c per unit. Experience indicates demand can be represented by a random variable $D$ ~ Poisson ($\mu$). If units remain in stock at the end of the season, they may be returned with recovery of $r$ per unit. If demand exceeds the number originally ordered, extra units may be ordered at a cost of s each. Units are sold at a price $p$ per unit. If $Z = g(D)$ is the gain from the sales, then • For $t \le m$, $g(t) = (p - c) t- (c - r)(m - t) = (p - r)t + (r - c) m$ • For $t > m$, $g(t) = (p - c)m + (t - m) (p - s) = (p - s) t + (s - c)m$ Let $M = (-\infty, m]$. Then $g(t) = I_M(t) [(p - r) t + (r - c)m] + I_M(t) [(p - s) t + (s - c) m]$ Suppose $\mu = 50$ $m = 50$ $c = 30$ $p = 50$ $r = 20$ $s = 40$. Approximate the Poisson random variable $D$ by truncating at 100. Determine $P(500 \le Z \le 1100)$. Answer mu = 50; D = 0:100; c = 30; p = 50; r = 20; s = 40; m = 50; PD = ipoisson(mu,D); G = (p - s)*D + (s - c)*m +(s - r)*(D - m).*(D <= m); M = (500<=G)&(G<=1100); PM = M*PD' PM = 0.9209 [Z,PZ] = csort(G,PD); % Alternate: use dbn for Z m = (500<=Z)&(Z<=1100); pm = m*PZ' pm = 0.9209 Exercise $5$ (See Example 2 from "Functions of a Random Variable") The cultural committee of a student organization has arranged a special deal for tickets to a concert. The agreement is that the organization will purchase ten tickets at$20 each (regardless of the number of individual buyers). 
Additional tickets are available according to the following schedule: • 11-20, $18 each • 21-30,$16 each • 31-50, $15 each • 51-100,$13 each If the number of purchasers is a random variable $X$, the total cost (in dollars) is a random quantity $Z = g(X)$ described by $g(X) = 200 + 18 I_{M1} (X) (X - 10) + (16 - 18) I_{M2} (X) (X - 20) +$ $(15 - 16) I_{M_3} (X) (X - 30) + (13 - 15) I_{M4} (X) (X - 50)$ where $M1 = [10, \infty)$, $M2 = [20, \infty)$, $M3 = [30, \infty)$, $M4 = [50, \infty)$ Suppose $X$~ Poisson (75). Approximate the Poisson distribution by truncating at 150. Determine $P(Z \ge 1000)$, $P(Z \ge 1300)$ and $P(900 \le Z \le 1400)$. Answer X = 0:150; PX = ipoisson(75,X); G = 200 + 18*(X - 10).*(X>=10) + (16 - 18)*(X - 20).*(X>=20) + ... (15 - 16)*(X- 30).*(X>=30) + (13 - 15)*(X - 50).*(X>=50); P1 = (G>=1000)*PX' P1 = 0.9288 P2 = (G>=1300)*PX' P2 = 0.1142 P3 = ((900<=G)&(G<=1400))*PX' P3 = 0.9742 [Z,PZ] = csort(G,PX); % Alternate: use dbn for Z p1 = (Z>=1000)*PZ' p1 = 0.9288 Exercise $6$ (See Exercise 6 from "Problems on Random Vectors and Joint Distributions", and Exercise 1 from "Problems on Independent Classes of Random Variables")) The pair $\{X, Y\}$ has the joint distribution (in m-file npr08_06.m): $X =$ [-2.3 -0.7 1.1 3.9 5.1] $Y =$ [1.3 2.5 4.1 5.3] $P = \begin{bmatrix} 0.0483 & 0.0357 & 0.0420 & 0.0399 & 0.0441 \ 0.0437 & 0.0323 & 0.0380 & 0.0361 & 0.0399 \ 0.0713 & 0.0527 & 0.0620 & 0.0609 & 0.0551 \ 0.0667 & 0.0493 & 0.0580 & 0.0651 & 0.0589 \end{bmatrix}$ Determine $P(\text{max }\{X, Y\} \le 4)$. Let $Z = 3X^3 + 3X^2 Y - Y^3$. Determine $P(Z< 0)$ and $P(-5 < Z \le 300)$. Answer npr08_06 Data are in X, Y, P jcalc Enter JOINT PROBABILITIES (as on the plane) P Enter row matrix of VALUES of X X Enter row matrix of VALUES of Y Y Use array operations on matrices X, Y, PX, PY, t, u, and P P1 = total((max(t,u)<=4).*P) P1 = 0.4860 P2 = total((abs(t-u)>3).*P) P2 = 0.4516 G = 3*t.^3 + 3*t.^2.*u - u.^3; P3 = total((G<0).*P) P3 = 0.5420 P4 = total(((-5<G)&(G<=300)).*P) P4 = 0.3713 [Z,PZ] = csort(G,P); % Alternate: use dbn for Z p4 = ((-5<Z)&(Z<=300))*PZ' p4 = 0.3713 Exercise $7$ (See Exercise 2 from "Problems on Independent Classes of Random Variables") The pair $\{X, Y\}$ has the joint distribution (in m-file npr09_02.m): $X =$ [-3.9 -1.7 1.5 2 8 4.1] $Y =$ [-2 1 2.6 5.1] $P = \begin{bmatrix} 0.0589 & 0.0342 & 0.0304 & 0.0456 & 0.0209 \ 0.0962 & 0.056 & 0.0498 & 0.0744 & 0.0341 \ 0.0682 & 0.0398 & 0.0350 & 0.0528 & 0.0242 \ 0.0868 & 0.0504 & 0.0448 & 0.0672 & 0.0308 \end{bmatrix}$ Determine $P(\{X + Y \ge 5\} \cup \{Y \le 2\})$, $P(X^2 + Y^2 \le 10)$. Answer npr09_02 Data are in X, Y, P jcalc Enter JOINT PROBABILITIES (as on the plane) P Enter row matrix of VALUES of X X Enter row matrix of VALUES of Y Y Use array operations on matrices X, Y, PX, PY, t, u, and P M1 = (t+u>=5)|(u<=2); P1 = total(M1.*P) P1 = 0.7054 M2 = t.^2 + u.^2 <= 10; P2 = total(M2.*P) P2 = 0.3282 Exercise $8$ (See Exercsie 7 from "Problems on Random Vectors and Joint Distributions", and Exercise 3 from "Problems on Independent Classes of Random Variables") The pair has the joint distribution (in m-file npr08_07.m): $P(X = t, Y =u)$ t = -3.1 -0.5 1.2 2.4 3.7 4.9 u = 7.5 0.0090 0.0396 0.0594 0.0216 0.0440 0.0203 4.1 0.0495 0 0.1089 0.0528 0.0363 0.0231 -2.0 0.0405 0.1320 0.0891 0.0324 0.0297 0.0189 -3.8 0.0510 0.0484 0.0726 0.0132 0 0.0077 Determine $P(X^2 - 3X \le 0)$, $P(X^3 - 3|Y| < 3)$. 
Answer npr08_07 Data are in X, Y, P jcalc Enter JOINT PROBABILITIES (as on the plane) P Enter row matrix of VALUES of X X Enter row matrix of VALUES of Y Y Use array operations on matrices X, Y, PX, PY, t, u, and P M1 = t.^2 - 3*t <=0; P1 = total(M1.*P) P1 = 0.4500 M2 = t.^3 - 3*abs(u) < 3; P2 = total(M2.*P) P2 = 0.7876 Exercise $9$ For the pair $\{X, Y\}$ in Exercise 10.4.8, let $Z = g(X, Y) = 3X^2 + 2XY - Y^2$. Determine and plot the distribution function for $Z$. Answer G = 3*t.^2 + 2*t.*u - u.^2; % Determine g(X,Y) [Z,PZ] = csort(G,P); % Obtain dbn for Z = g(X,Y) ddbn % Call for plotting m-procedure Enter row matrix of VALUES Z Enter row matrix of PROBABILITIES PZ % Plot not reproduced here Exercise $10$ For the pair $\{X, Y\}$ in Exercise 8, let $W = g(X, Y) = \begin{cases} X & \text{for } X + Y \le 4 \ 2Y & \text{for } X + Y > 4 \end{cases} = I_M (X, Y) X + I_{M^c} (X, Y) 2Y$ Determine and plot the distribution function for $W$. Answer H = t.*(t+u<=4) + 2*u.*(t+u>4); [W,PW] = csort(H,P); ddbn Enter row matrix of VALUES W Enter row matrix of PROBABILITIES PW % Plot not reproduced here For the distributions in Exercises 10-15 below 1. Determine analytically the indicated probabilities. 2. Use a discrete approximation to calculate the same probablities.' Exercise $11$ $f_{XY} (t, u) = \dfrac{3}{88} (2t + 3u^2)$ for $0 \le t \le 2$, $0 \le u \le 1+ t$ (see Exercise 15 from "Problems on Random Vectors and Joint Distributions"). $Z = I_{[0, 1]} (X) 4X + I_{(1, 2]} (X) (X + Y)$ Determine $P(Z \le 2)$ Answer $P(Z \le 2) = P(Z \in Q = Q1M1 \bigvee Q2M2)$, where $M1 = \{(t, u): 0 \le t \le 1, 0 \le u \le 1 + t\}$ $M2 = \{(t, u) : 1 < t \le 2, 0 \le u \le 1 + t\}$ $Q1 = \{(t, u) : 0 \le t \le 1/2\}$, $Q2 = \{(t, u) : u \le 2 - t\}$ (see figure) $P = \dfrac{3}{88} \int_{0}^{1/2} \int_{0}^{1 + t} (2t + 3u^2) du\ dt + \dfrac{3}{88} \int_{1}^{2} \int_{0}^{2 - t} (2t + 3u^2) du\ dt = \dfrac{563}{5632}$ tuappr Enter matrix [a b] of X-range endpoints [0 2] Enter matrix [c d] of Y-range endpoints [0 3] Enter number of X approximation points 200 Enter number of Y approximation points 300 Enter expression for joint density (3/88)*(2*t + 3*u.^2).*(u<=1+t) Use array operations on X, Y, PX, PY, t, u, and P G = 4*t.*(t<=1) + (t+u).*(t>1); [Z,PZ] = csort(G,P); PZ2 = (Z<=2)*PZ' PZ2 = 0.1010 % Theoretical = 563/5632 = 0.1000 Figure 10.4.1 Exercise $12$ $f_{XY} (t, u) = \dfrac{24}{11}$ for $0 \le t \le 2$, $0 \le u \le \text{min } \{1, 2 - t\}$(see Exercise 17 from "Problems on Random Vectors and Joint Distributions"). $Z = I_M(X, Y) \dfrac{1}{2} X + I_{M^c} (X, Y) Y^2$, $M = \{(t, u) : u > t\}$ Determine $P (Z \le 1/4)$. 
Answer $P(Z \le 1/4) = P((X, Y) \in M_1Q_1 \bigvee M_2Q_2)$, $M_1 = \{(t, u): 0 \le t \le u \le 1\}$ $M_2 = \{(t, u) : 0 \le t \le 2, 0 \le t \le \text{min } (t, 2 - t)\}$ $Q_1 = \{(t, u): t \le 1/2\}$ $Q_2 = \{(t, u): u \le 1/2\}$ (see figure) $P = \dfrac{24}{11} \int_{0}^{1/2} \int_{0}^{1} tu \ du\ dt + \dfrac{24}{11} \int_{1/2}^{3/2} \int_{0}^{1/2} tu\ du\ dt + \dfrac{24}{11} \int_{3/2}^{2} \int_{0}^{2 - t} tu\ du\ dt = \dfrac{85}{176}$ tuappr Enter matrix [a b] of X-range endpoints [0 2] Enter matrix [c d] of Y-range endpoints [0 1] Enter number of X approximation points 400 Enter number of Y approximation points 200 Enter expression for joint density (24/11)*t.*u.*(u<=min(1,2-t)) Use array operations on X, Y, PX, PY, t, u, and P G = 0.5*t.*(u>t) + u.^2.*(u<t); [Z,PZ] = csort(G,P); pp = (Z<=1/4)*PZ' pp = 0.4844 % Theoretical = 85/176 = 0.4830 Exercise $13$ $f_{XY} (t, u) = \dfrac{3}{23} (t + 2u)$ for $0 \le t \le 2$, $0 \le u \le \text{max } \{2 - t, t\}$ (see Exercise 18 from "Problems on Random Vectors and Joint Distributions"). $Z = I_M (X, Y) (X + Y) + I_{M^c} (X, Y)2Y$, $M = \{(t, u): \text{max } (t, u) \le 1\}$ Determine $P(Z \le 1)$ Answer $P(Z \le 1) = P((X, Y) \in M_1Q_1 \bigvee M_2Q_2)$, $M_1 = \{(t, u): 0 \le t \le 1, 0 \le u \le 1 - t\}$ $M_2 = \{(t, u) : 1 \le t \le 2, 0 \le u \le t\}$ $Q_1 = \{(t, u): u \le 1 - t\}$ $Q_2 = \{(t, u): u \le 1/2\}$ (see figure) $P = \dfrac{3}{23} \int_{0}^{1} \int_{0}^{1-t} (t + 2u) \ du\ dt + \dfrac{3}{23} \int_{1}^{2} \int_{0}^{1/2} (t + 2u)\ du\ dt = \dfrac{9}{46}$ tuappr Enter matrix [a b] of X-range endpoints [0 2] Enter matrix [c d] of Y-range endpoints [0 2] Enter number of X approximation points 300 Enter number of Y approximation points 300 Enter expression for joint density (3/23)*(t + 2*u).*(u<=max(2-t,t)) Use array operations on X, Y, PX, PY, t, u, and P M = max(t,u) <= 1; G = M.*(t + u) + (1 - M)*2.*u; p = total((G<=1).*P) p = 0.1960 % Theoretical = 9/46 = 0.1957 Figure 10.4.2 Exercise $14$ $f_{XY} (t, u) = \dfrac{12}{179} (3t^2 + u)$, for $0 \le t \le 2$, $0 \le u \le \text{min } \{2, 3 - t\}$ (see Exercise 19 from "Problems on Random Vectors and Joint Distributions"). $Z = I_M (X, Y) (X + Y) + I_{M^c} (X, Y) 2Y^2$, $M = \{(t, u): t \le 1, u \ge 1\}$ Determine $P(Z \le 2)$. Answer $P(Z \le 2) = P((X, Y) \in M_1 Q_1 \bigvee (M_2 \bigvee M_3) Q_2)$, $M_1 = \{(t, u): 0 \le t \le 1, 1 \le u \le 2\}$ $M_2 = \{(t, u) : 0 \le t \le 1, 0 \le u \le 1\}$ $M_3 = \{(t, u): 1 \le t \le 2, 0 \le u \le 3 - t\}$ $Q_1 = \{(t, u): u \le 1 - t\}$ $Q_2 = \{(t, u) : u \le 1/2\}$ (see figure) $P = \dfrac{12}{179} \int_{0}^{1} \int_{0}^{2 - t} (3t^2 + u) du\ dt + \dfrac{12}{179} \int_{1}^{2} \int_{0}^{1} (3t^2 + u) du\ dt = \dfrac{119}{179}$ tuappr Enter matrix [a b] of X-range endpoints [0 2] Enter matrix [c d] of Y-range endpoints [0 2] Enter number of X approximation points 300 Enter number of Y approximation points 300 Enter expression for joint density (12/179)*(3*t.^2 + u).*(u<=min(2,3-t)) Use array operations on X, Y, PX, PY, t, u, and P M = (t<=1)&(u>=1); Z = M.*(t + u) + (1 - M)*2.*u.^2; G = M.*(t + u) + (1 - M)*2.*u.^2; p = total((G<=2).*P) p = 0.6662 % Theoretical = 119/179 = 0.6648 Exercise $15$ $f_{XY} (t, u) = \dfrac{12}{227} (3t + 2tu)$, for $0 \le t \le 2$, $0 \le u \le \text{min } \{1 + t, 2\}$ (see Exercise 20 from "Problems on Random Variables and joint Distributions") $Z = I_M (X, Y) X + I_{M^c} (X, Y) \dfrac{Y}{X}$, $M = \{(t, u): u \le \text{min } (1, 2 - t)\}$ Determine $P(Z \le 1)$. 
Figure 10.4.3 Answer $P(Z \le 1) = P((X, Y) \in M_1 Q_1 \bigvee V_2Q_2)$, $M_1 = M$, $M_2 = M^c$ $Q_1 = \{(t, u): 0 \le t \le \}$ $Q_2 = \{(t, u) : u \le t\}$ (see figure) $P = \dfrac{12}{227} \int_{0}^{1} \int_{0}^{1} (3t + 2tu) du\ dt + \dfrac{12}{227} \int_{1}^{2} \int_{2 - t}^{t} (3t + 2tu) du\ dt = \dfrac{124}{227}$ tuappr Enter matrix [a b] of X-range endpoints [0 2] Enter matrix [c d] of Y-range endpoints [0 2] Enter number of X approximation points 400 Enter number of Y approximation points 400 Enter expression for joint density (12/227)*(3*t+2*t.*u).*(u<=min(1+t,2)) Use array operations on X, Y, PX, PY, t, u, and P Q = (u<=1).*(t<=1) + (t>1).*(u>=2-t).*(u<=t); P = total(Q.*P) P = 0.5478 % Theoretical = 124/227 = 0.5463 Exercise $16$ The class $\{X, Y, Z\}$ is independent. $X = -2 I_A + I_B + 3I_C$. Minterm probabilities are (in the usual order) 0.255 0.025 0.375 0.045 0.108 0.012 0.162 0.018 $Y = I_D + 3I_E + I_F - 3$. The class $\{D, E, F\}$ is independent with $P(D) = 0.32$ $P(E) = 0.56$ $P(F) = 0.40$ $Z$ has distribution Value -1.3 1.2 2.7 3.4 5.8 Probability 0.12 0.24 0.43 0.13 0.08 Determine $P(X^2 + 3XY^2 >3Z)$. Answer % file npr10_16.m Data for Exercise 16. cx = [-2 1 3 0]; pmx = 0.001*[255 25 375 45 108 12 162 18]; cy = [1 3 1 -3]; pmy = minprob(0.01*[32 56 40]); Z = [-1.3 1.2 2.7 3.4 5.8]; PZ = 0.01*[12 24 43 13 8]; disp('Data are in cx, pmx, cy, pmy, Z, PZ') npr10_16 % Call for data Data are in cx, pmx, cy, pmy, Z, PZ [X,PX] = canonicf(cx,pmx); [Y,PY] = canonicf(cy,pmy); icalc3 Enter row matrix of X-values X Enter row matrix of Y-values Y Enter row matrix of Z-values Z Enter X probabilities PX Enter Y probabilities PY Enter Z probabilities PZ Use array operations on matrices X, Y, Z, PX, PY, PZ, t, u, v, and P M = t.^2 + 3*t.*u.^2 > 3*v; PM = total(M.*P) PM = 0.3587 Exercise $17$ The simple random variable X has distribution $X =$ [-3.1 -0.5 1.2 2.4 3.7 4.9] $PX =$ [0.15 0.22 0.33 0.12 0.11 0.07] 1. Plot the distribution function $F_X$ and the quantile function $Q_X$. 2. Take a random sample of size $n =$ 10,000. Compare the relative frequency for each value with the probability that value is taken on. Answer X = [-3.1 -0.5 1.2 2.4 3.7 4.9]; PX = 0.01*[15 22 33 12 11 7]; ddbn Enter row matrix of VALUES X Enter row matrix of PROBABILITIES PX % Plot not reproduced here dquanplot Enter VALUES for X X Enter PROBABILITIES for X PX % Plot not reproduced here rand('seed',0) % Reset random number generator dsample % for comparison purposes Enter row matrix of VALUES X Enter row matrix of PROBABILITIES PX Sample size n 10000 Value Prob Rel freq -3.1000 0.1500 0.1490 -0.5000 0.2200 0.2164 1.2000 0.3300 0.3340 2.4000 0.1200 0.1184 3.7000 0.1100 0.1070 4.9000 0.0700 0.0752 Sample average ex = 0.8792 Population mean E[X] = 0.859 Sample variance vx = 5.146 Population variance Var[X] = 5.112
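For readers without the textbook m-files, the sampling step used by dsample in part (b) of Exercise 17 can be reproduced with cumulative probabilities and the quantile rule for simple random variables. The following is only a sketch of that idea (dsample itself may be organized differently); the array names are choices for this illustration.

X  = [-3.1 -0.5 1.2 2.4 3.7 4.9];
PX = [0.15 0.22 0.33 0.12 0.11 0.07];
n  = 10000;                                   % sample size, as in the exercise
F  = cumsum(PX);                              % distribution function at the jump points
U  = rand(n,1);                               % uniform (0,1) sample
idx = sum(bsxfun(@gt, U, F), 2) + 1;          % smallest k with F(k) >= U (the quantile rule)
sample = X(idx);                              % simulated values of X
ex = mean(sample)                             % compare with E[X] = 0.859
relfreq = mean(bsxfun(@eq, sample(:), X))     % relative frequencies; compare with PX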
11: Mathematical Expectation

Introduction

The probability that real random variable $X$ takes a value in a set $M$ of real numbers is interpreted as the likelihood that the observed value $X(\omega)$ on any trial will lie in $M$. Historically, this idea of likelihood is rooted in the intuitive notion that if the experiment is repeated enough times the probability is approximately the fraction of times the value of $X$ will fall in $M$. Associated with this interpretation is the notion of the average of the values taken on. We incorporate the concept of mathematical expectation into the mathematical model as an appropriate form of such averages. We begin by studying the mathematical expectation of simple random variables, then extend the definition and properties to the general case. In the process, we note the relationship of mathematical expectation to the Lebesgue integral, which is developed in abstract measure theory. Although we do not develop this theory, which lies beyond the scope of this study, identification of this relationship provides access to a rich and powerful set of properties which have far-reaching consequences in both application and theory.

Expectation for simple random variables

The notion of mathematical expectation is closely related to the idea of a weighted mean, used extensively in the handling of numerical data. Consider the arithmetic average $\bar{x}$ of the following ten numbers: 1, 2, 2, 2, 4, 5, 5, 8, 8, 8, which is given by

$\bar{x} = \dfrac{1}{10} (1 + 2 + 2 + 2 + 4 + 5 + 5 + 8 + 8 + 8)$

Examination of the ten numbers to be added shows that five distinct values are included. One of the ten, or the fraction 1/10 of them, has the value 1; three of the ten, or the fraction 3/10 of them, have the value 2; 1/10 has the value 4; 2/10 have the value 5; and 3/10 have the value 8. Thus, we could write

$\bar{x} = (0.1 \cdot 1 + 0.3 \cdot 2 + 0.1 \cdot 4 + 0.2 \cdot 5 + 0.3 \cdot 8)$

The pattern in this last expression can be stated in words: Multiply each possible value by the fraction of the numbers having that value and then sum these products. The fractions are often referred to as the relative frequencies. A sum of this sort is known as a weighted average.

In general, suppose there are $n$ numbers $\{x_1, x_2, \cdot\cdot\cdot, x_n\}$ to be averaged, with $m \le n$ distinct values $\{t_1, t_2, \cdot\cdot\cdot, t_m\}$. Suppose $f_1$ have value $t_1$, $f_2$ have value $t_2$, $\cdot\cdot\cdot$, $f_m$ have value $t_m$. The $f_i$ must add to $n$. If we set $p_i = f_i / n$, then the fraction $p_i$ is called the relative frequency of those numbers in the set which have the value $t_i$, $1 \le i \le m$. The average $\bar{x}$ of the $n$ numbers may be written

$\bar{x} = \dfrac{1}{n} \sum_{i = 1}^{n} x_i = \sum_{j = 1}^{m} t_j p_j$

In probability theory, we have a similar averaging process in which the relative frequencies of the various possible values of $X$ are replaced by the probabilities that those values are observed on any trial.

Definition. For a simple random variable $X$ with values $\{t_1, t_2, \cdot\cdot\cdot, t_n\}$ and corresponding probabilities $p_i = P(X = t_i)$, the mathematical expectation, designated $E[X]$, is the probability weighted average of the values taken on by $X$. In symbols

$E[X] = \sum_{i = 1}^{n} t_i P(X = t_i) = \sum_{i = 1}^{n} t_i p_i$

Note that the expectation is determined by the distribution. Two quite different random variables may have the same distribution, hence the same expectation.
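Both ways of computing the average can be checked directly in MATLAB (a small verification added here for illustration):

x = [1 2 2 2 4 5 5 8 8 8];        % the ten numbers above
xbar = mean(x)                    % ordinary arithmetic average, 4.5
t = [1 2 4 5 8];                  % distinct values
p = [0.1 0.3 0.1 0.2 0.3];        % relative frequencies of those values
wavg = t*p'                       % weighted average, also 4.5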
Traditionally, this average has been called the mean, or the mean value, of the random variable $X$.

Example 11.1.1. Some special cases

1. Since $X = aI_E = 0 I_{E^c} + aI_E$, we have $E[aI_E] = a P(E)$.
2. For $X$ a constant $c$, $X = cI_{\Omega}$, so that $E[c] = cP(\Omega) = c$.
3. If $X = \sum_{i = 1}^{n} t_i I_{A_i}$ then $aX = \sum_{i = 1}^{n} at_i I_{A_i}$, so that $E[aX] = \sum_{i = 1}^{n} at_i P(A_i) = a\sum_{i = 1}^{n} t_i P(A_i) = aE[X]$

Figure 1. Moment of a probability distribution about the origin.

Mechanical interpretation

In order to aid in visualizing an essentially abstract system, we have employed the notion of probability as mass. The distribution induced by a real random variable on the line is visualized as a unit of probability mass actually distributed along the line. We utilize the mass distribution to give an important and helpful mechanical interpretation of the expectation or mean value. In Example 6 in "Mathematical Expectation: General Random Variables", we give an alternate interpretation in terms of mean-square estimation.

Suppose the random variable $X$ has values $\{t_i: 1 \le i \le n\}$, with $P(X = t_i) = p_i$. This produces a probability mass distribution, as shown in Figure 1, with point mass concentration in the amount of $p_i$ at the point $t_i$. The expectation is

$\sum_{i} t_i p_i$

Now $|t_i|$ is the distance of point mass $p_i$ from the origin, with $p_i$ to the left of the origin iff $t_i$ is negative. Mechanically, the sum of the products $t_i p_i$ is the moment of the probability mass distribution about the origin on the real line. From physical theory, this moment is known to be the same as the product of the total mass times the number which locates the center of mass. Since the total mass is one, the mean value is the location of the center of mass. If the real line is viewed as a stiff, weightless rod with point mass $p_i$ attached at each value $t_i$ of $X$, then the mean value $\mu_X$ is the point of balance. Often there are symmetries in the distribution which make it possible to determine the expectation without detailed calculation.

Example 11.1.2. The number of spots on a die

Let $X$ be the number of spots which turn up on a throw of a simple six-sided die. We suppose each number is equally likely. Thus the values are the integers one through six, and each probability is 1/6. By definition

$E[X] = \dfrac{1}{6} \cdot 1 + \dfrac{1}{6} \cdot 2 + \dfrac{1}{6} \cdot 3 + \dfrac{1}{6} \cdot 4 + \dfrac{1}{6} \cdot 5 + \dfrac{1}{6} \cdot 6 = \dfrac{1}{6} (1 + 2 + 3 + 4 + 5 + 6) = \dfrac{7}{2}$

Although the calculation is very simple in this case, it is really not necessary. The probability distribution places equal mass at each of the integer values one through six. The center of mass is at the midpoint.

Example 11.1.3. A simple choice

A child is told she may have one of four toys. The prices are \$2.50, \$3.00, \$2.00, and \$3.50, respectively. She chooses one, with respective probabilities 0.2, 0.3, 0.2, and 0.3 of choosing the first, second, third or fourth. What is the expected cost of her selection?

$E[X] = 2.00 \cdot 0.2 + 2.50 \cdot 0.2 + 3.00 \cdot 0.3 + 3.50 \cdot 0.3 = 2.85$

For a simple random variable, the mathematical expectation is determined as the dot product of the value matrix with the probability matrix. This is easily calculated using MATLAB.
matlab calculation for example 11.1.3 X = [2 2.5 3 3.5]; % Matrix of values (ordered) PX = 0.1*[2 2 3 3]; % Matrix of probabilities EX = dot(X,PX) % The usual MATLAB operation EX = 2.8500 Ex = sum(X.*PX) % An alternate calculation Ex = 2.8500 ex = X*PX' % Another alternate ex = 2.8500 Expectation and primitive form The definition and treatment above assumes $X$ is in canonical form, in which case $X = \sum_{i = 1}^{n} t_i I_{A_i}$, where $A_i = \{X = t_i\}$, implies $E[X] = \sum_{i = 1}^{n} t_i P(A_i)$ We wish to ease this restriction to canonical form. Suppose simple random variable $X$ is in a primitive form $X = \sum_{j = 1}^{m} c_j I_{C_j}$, where $\{C_j: 1 \le j \le m\}$ is a partition We show that $E[X] = \sum_{j = 1}^{m} c_j P(C_j)$ Before a formal verification, we begin with an example which exhibits the essential pattern. Establishing the general case is simply a matter of appropriate use of notation. Example 11.1.4. simple random variable x in primitive form $X = I_{C_1} + 2I_{C_2} + I_{C_3} + 3 I_{C_4} + 2 I_{C_5} + 2I_{C_6}$, with $\{C_1, C_2, C_3, C_4, C_5, C_6\}$ a partition inspection shows the distinct possible values of $X$ to be 1, 2, or 3. Also $A_1 = \{X = 1\} = C_1 \bigvee C_3$, $A_2 = \{X = 2\} = C_2 \bigvee C_5 \bigvee C_6$ and $A_3 = \{X = 3\} = C_4$ so that $P(A-1) = P(C_1) + P(C_3)$, $P(A_2) = P(C_2) + P(C_5) + P(C_6)$, and $P(A_3) = P(C_4)$ Now $E[X] = P(A_1) + 2P(A_2) + 3P(A_3) = P(C_1) + P(C_3) + 2[P(C_2) + P(C_5) + P(C_6)] + 3P(C_4)$ $= P(C_1) + 2P(C_2) + P(C_3) + 3P(C_4) + 2P(C_5) + 2P(C_6)$ To establish the general pattern, consider $X = \sum_{j = 1}^{m} c_j I_{C_j}$. We identify the distinct set of values contained in the set $\{c_j: 1 \le j \le m\}$. Suppose these are $t_1 < t_2 < \cdot\cdot\cdot < t_n$. For any value $t_i$ in the range, identify the index set $J_i$ of those $j$ such that $c_j = t_i$. Then the terms $\sum_{J_i} c_j I_{C_j} = t_i \sum_{J_i} I_{C_j} = t_i I_{A_i}$, where $A_i = \bigvee_{j \in J_i} C_j$ By the additivity of probability $P(A_i) = P(X = t_i) = \sum_{j \in J_i} P(C_j)$ Since for each $j \in J_i$ we have $c_j = t_i$, we have $E[X] = \sum_{i = 1}^{n} t_i P(A_i) = \sum_{i = 1}^{n} t_i \sum_{j \in J_i} P(C_j) = \sum_{i = 1}^{n} \sum_{j \in J_i} c_j P(C_j) = \sum_{j = 1}^{m} c_j P(C_j)$ — □ Thus, the defining expression for expectation thus holds for X in a primitive form. An alternate approach to obtaining the expectation from a primitive form is to use the csort operation to determine the distribution of $X$ from the coefficients and probabilities of the primitive form. Example 11.1.5. Alternate determinations of E[x] Suppose $X$ in a primitive form is $X = I_{C_1} + 2 I_{C_2} + I_{C_3} + 3I_{C_4} + 2I_{C_5} + 2I_{C_6} + I_{C_7} + 3I_{C_8} + 2I_{C_9} + I_{C_{10}}$ with respective probabilities $P(C_i) = 0.08, 0.11, 0.06, 0.13, 0.05, 0.08, 0.12, 0.07, 0.14, 0.16$ c = [1 2 1 3 2 2 1 3 2 1]; % Matrix of coefficients pc = 0.01*[8 11 6 13 5 8 12 7 14 16]; % Matrix of probabilities EX = c*pc' EX = 1.7800 % Direct solution [X,PX] = csort(c,pc); % Determinatin of dbn for X disp([X;PX]') 1.0000 0.4200 2.0000 0.3800 3.0000 0.2000 Ex = X*PX' % E[X] from distribution Ex = 1.7800 Linearity The result on primitive forms may be used to establish the linearity of mathematical expectation for simple random variables. Because of its fundamental importance, we work through the verification in some detail. Suppose $X = \sum_{i = 1}^{n} t_i I_{A_i}$ and $Y = \sum_{j = 1}^{m} u_j I_{B_j}$ (both in canonical form). 
Since $\sum_{i = 1}^{n} I_{A_i} = \sum_{j = 1}^{m} I_{B_j} = 1$ we have $X + Y = \sum_{i = 1}^{n} t_i I_{A_i} (\sum_{j = 1}^{m} I_{B_j}) + \sum_{j = 1}^{m} u_j I_{B_j} (\sum_{i = 1}^{n} I_{A_i}) = \sum_{i = 1}^{n} \sum_{j = 1}^{m} (t_i + u_j) I_{A_i} I_{B_j}$ Note that $I_{A_i} I_{B_j} = I_{A_i B_j}$ and $A_i B_j = \{X = t_i, Y = u_j\}$. The class of these sets for all possible pairs $(i, j)$ forms a partition. Thus, the last summation expresses $Z = X + Y$ in a primitive form. Because of the result on primitive forms, above, we have $E[X + Y] = \sum_{i = 1}^{n} \sum_{j = 1}^{m} (t_i + u_j) P(A_i B_j) = \sum_{i = 1}^{n} \sum_{j = 1}^{m} t_i P(A_i B_j) + \sum_{i = 1}^{n} \sum_{j = 1}^{m} u_j P(A_i B_j)$ $= \sum_{i = 1}^{n} t_i \sum_{j = 1}^{m} P(A_i B_j) + \sum_{j = 1}^{m} u_j \sum_{i = 1}^{n} P(A_i B_j)$ We note that for each $i$ and for each $j$ $P(A_i) = \sum_{j = 1}^{m} P(A_i B_j)$ and $P(B_j) = \sum_{i = 1}^{n} P(A_i B_j)$ Hence, we may write $E[X + Y] = \sum_{i = 1}^{n} t_i P(A_i) + \sum_{j = 1}^{m} u_j P(B_j) = E[X] + E[Y]$ Now $aX$ and $bY$ are simple if $X$ and $Y$ are, so that with the aide of Example 11.1.1 we have $E[aX + bY] = E[aX] + E[bY] = aE[X] + bE[Y]$ If $X, Y, Z$ are simple, then so are $aX + bY$, and $cZ$. It follows that $E[aX + bY + cZ] = E[aX + bY] + cE[Z] = aE[X] + bE[Y] + cE[Z]$ By an inductive argument, this pattern may be extended to a linear combination of any finite number of simple random variables. Thus we may assert Linearity. The expectation of a linear combination of a finite number of simple random variables is that linear combination of the expectations of the individual random variables. — □ Expectation of a simple random variable in affine form As a direct consequence of linearity, whenever simple random variable $X$ is in affine form, then $E[X] = E[c_0 + \sum_{i = 1}^{n} c_i I_{E_i}] = c_0 + \sum_{i = 1}^{n} c_i P(E_i)$ Thus, the defining expression holds for any affine combination of indicator functions, whether in canonical form or not. Example 11.1.6. binomial distribution (n,p) This random variable appears as the number of successes in $n$ Bernoulli trials with probability p of success on each component trial. It is naturally expressed in affine form $X = \sum_{i = 1}^{n} I_{E_i}$ so that $E[X] = \sum_{i = 1}^{n} p = np$ Alternately, in canonical form $X = \sum_{k = 0}^{n} k I_{A_{kn}}$, with $p_k = P(A_{kn}) = P(X = k) = C(n, k) p^{k} q^{n - k}$, $q = 1 - p$ so that $E[X] = \sum_{k = 0}^{n} kC(n, k) p^k q^{n - k}$, $q = 1 - p$ Some algebraic tricks may be used to show that the second form sums to $np$, but there is no need of that. The computation for the affine form is much simpler. Example 11.1.7. Expected winnings A bettor places three bets at $2.00 each. The first bet pays$10.00 with probability 0.15, the second pays $8.00 with probability 0.20, and the third pays$20.00 with probability 0.10. What is the expected gain? 
Solution The net gain may be expressed $X = 10I_A + 8I_B + 20 I_C - 6$, with $P(A) = 0.15$, $P(B) = 0.20$, $P(C) = 0.10$ Then $E[X] = 10 \cdot 0.15 + 8 \cdot 0.20 + 20 \cdot 0.10 - 6 = -0.90$ These calculations may be done in MATLAB as follows: c = [10 8 20 -6]; p = [0.15 0.20 0.10 1.00]; % Constant a = aI_(Omega), with P(Omega) = 1 E = c*p' E = -0.9000 Functions of simple random variables If $X$ is in a primitive form (including canonical form) and $g$ is a real function defined on the range of $X$, then $Z = g(X) = \sum_{j = 1}^{m} g(c_j) I_{C_j}$ a primitive form so that $E[Z] = E[g(X)] = \sum_{j = 1}^{m} g(c_j) P(C_j)$ Alternately, we may use csort to determine the distribution for $Z$ and work with that distribution. Caution. If $X$ is in affine form (but not a primitive form) $X = c_0 + \sum_{j = 1}^{m} c_j I_{E_j}$ then $g(X) \ne g(c_0) + \sum_{j = 1}^{m} g(c_j) I_{E_j}$ so that $E[g(X)] \ne g(c_0) + \sum_{j = 1}^{m} g(c_j) P(E_j)$ Example 11.1.8. expectation of a function of x Suppose $X$ in a primitive form is $X = -3I_{C_1} - I_{C_2} + 2I_{C_3} - 3I_{C_4} + 4I_{C_5} - I_{C_6} + I_{C_7} + 2I_{C_8} + 3I_{C_9} + 2I_{C_{10}}$ with probabilities $P(C_i) = 0.08, 0.11, 0.06, 0.13, 0.05, 0.08, 0.12, 0.07, 0.14, 0.16$. Let $g(t) = t^2 +2t$. Determine $E(g(X)]$. c = [-3 -1 2 -3 4 -1 1 2 3 2]; % Original coefficients pc = 0.01*[0 11 6 13 5 8 12 7 14 16]; % Probabilities for C_j G = c.^2 + 2*c % g(c_j) G = 3 -1 8 3 24 -1 3 8 15 8 EG = G*pc' % Direct computation EG = 6.4200 [Z,PZ] = csort(G,pc); % Distribution for Z = g(X) disp([Z; PZ]') -1.0000 0.1900 3.0000 0.3300 8.0000 0.2900 15.0000 0.1400 24.0000 0.0500 EZ = Z*PZ' % E[Z] from distribution for Z EZ = 6.4200 A similar approach can be made to a function of a pair of simple random variables, provided the joint distribution is available. Suppose $X = \sum_{i = 1}^{n} t_i I_{A_i}$ and $Y = \sum_{j = 1}^{m} u_j I_{B_j}$ (both in canonical form). Then $Z = g(X,Y) = \sum_{i = 1}^{n} \sum_{j = 1}^{m} g(t_i, u_j) I_{A_i B_j}$ The $A_i B_j$ form a partition, so $Z$ is in a primitive form. We have the same two alternative possibilities: (1) direct calculation from values of $g(t_i, u_j)$ and corresponding probabilities $P(A_i B_j) = P(X = t_i, Y = u_j)$, or (2) use of csort to obtain the distribution for $Z$. Example 11.1.9. expectation for z = g(x,y) We use the joint distribution in file jdemo1.m and let $g(t, u) = t^2 + 2tu - 3u$. To set up for calculations, we use jcalc. % file jdemo1.m X = [-2.37 -1.93 -0.47 -0.11 0 0.57 1.22 2.15 2.97 3.74]; Y = [-3.06 -1.44 -1.21 0.07 0.88 1.77 2.01 2.84]; P = 0.0001*[ 53 8 167 170 184 18 67 122 18 12; 11 13 143 221 241 153 87 125 122 185; 165 129 226 185 89 215 40 77 93 187; 165 163 205 64 60 66 118 239 67 201; 227 2 128 12 238 106 218 120 222 30; 93 93 22 179 175 186 221 65 129 4; 126 16 159 80 183 116 15 22 113 167; 198 101 101 154 158 58 220 230 228 211]; jdemo1 % Call for data jcalc % Set up Enter JOINT PROBABILITIES (as on the plane) P Enter row matrix of VALUES of X X Enter row matrix of VALUES of Y Y Use array operations on matrices X, Y, PX, PY, t, u, and P G = t.^2 + 2*t.*u - 3*u; % Calculation of matrix of [g(t_i, u_j)] EG = total(G.*P) % Direct claculation of expectation EG = 3.2529 [Z,PZ] = csort(G,P); % Determination of distribution for Z EZ = Z*PZ' % E[Z] from distribution EZ = 3.2529
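The caution above about affine forms is easy to see numerically. In the following sketch, $X = I_{E_1} + I_{E_2}$ with $\{E_1, E_2\}$ independent and $P(E_1) = P(E_2) = 0.5$ (a small example constructed for illustration, not taken from the text), and $g(t) = t^2$. Applying $g$ to the coefficients of the affine form gives the wrong answer; the defining expression applied to the distribution of $X$ gives the correct one.

X  = [0 1 2];                     % possible values of X = I_E1 + I_E2
PX = [0.25 0.50 0.25];            % corresponding probabilities
EgX = (X.^2)*PX'                  % correct E[g(X)] = 1.5
wrong = 1^2*0.5 + 1^2*0.5         % g applied term by term to the affine form gives 1, not 1.5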
In this unit, we extend the definition and properties of mathematical expectation to the general case. In the process, we note the relationship of mathematical expectation to the Lebesque integral, which is developed in abstract measure theory. Although we do not develop this theory, which lies beyond the scope of this study, identification of this relationship provides access to a rich and powerful set of properties which have far reaching consequences in both application and theory. Extension to the General Case In the unit on Distribution Approximations, we show that a bounded random variable $X$ can be represented as the limit of a nondecreasing sequence of simple random variables. Also, a real random variable can be expressed as the difference $X = X^{+} - X^{-}$ of two nonnegative random variables. The extension of mathematical expectation to the general case is based on these facts and certain basic properties of simple random variables, some of which are established in the unit on expectation for simple random variables. We list these properties and sketch how the extension is accomplished. Definition: almost surely A condition on a random variable or on a relationship between random variables is said to hold almost surely, abbreviated “a.s.” iff the condition or relationship holds for all $\omega$ except possibly a set with probability zero. Basic properties of simple random variables (E0) : If $X = Y$ a.s. then $E[X] = E[Y]$. (E1): $E(aI_E) = aP(E)$. (E2): Linearity. $X = \sum_{i = 1}^{n} a_i X_i$ implies $E[X] = \sum_{i = 1}^{n} a_i E[X_i]$| (E3): Positivity: monotonicity a. If $X \ge 0$ a.s. , then $E[X] \ge 0$, with equality iff $X = 0$ a.s. . b. If $X \ge Y$ a.s. , then $E[X] \ge E[Y]$, with equality iff $X = Y$ a.s. . (E4): Fundamental lemma If $X \ge 0$ is bounded and $\{X_n: 1 \le n\}$ is an a.s. nonnegative, nondecreasing sequence with $\text{lim}_{n} \ X_n(\omega) \ge X(\omega)$ for almost every $\omega$, then $\text{lim}_{n} \ E[X_n] \ge E[X]$. (E4a): If for all $n$, $0 \le X_n \le X_{n + 1}$ a.s. and $X_n \to X$ a.s. , then $E[X_n] \to E[X]$ (i.e. , the expectation of the limit is the limit of the expectations). Ideas of the proofs of the fundamental properties • Modifying the random variable $X$ on a set of probability zero simply modifies one or more of the $A_i$ without changing $P(A_i)$ • Properties (E1) and (E2) are established in the unit on expectation of simple random variables.. • Positivity (E3a) is a simple property of sums of real numbers. Modification of sets of probability zero cannot affect the expectation. • Monotonicity (E3b) is a consequence of positivity and linearity. $X \ge Y$ iff $X - Y \ge 0$ a.s. and $E[X] \ge E[Y]$ iff $E[X] - E[Y] = E[X - Y] \ge 0$ • The fundamental lemma (E4) plays an essential role in extending the concept of expectation. It involves elementary, but somewhat sophisticated, use of linearity and monotonicity, limited to nonnegative random variables and positive coefficients. We forgo a proof. • Monotonicity and the fundamental lemma provide a very simple proof of the monotone convergence theoem, often designated MC. Its role is essential in the extension. Nonnegative random variables There is a nondecreasing sequence of nonnegative simple random variables converging to $X$. Monotonicity implies the integrals of the nondecreasing sequence is a nondecreasing sequence of real numbers, which must have a limit or increase without bound (in which case we say the limit is infinite). We define $E[X] = \text{lim } E[X_n]$. 
Two questions arise. Is the limit unique? The approximating sequences for a simple random variable are not unique, although their limit is the same. Is the definition consistent? If the limit random variable $X$ is simple, does the new definition coincide with the old? The fundamental lemma and monotone convergence may be used to show that the answer to both questions is affirmative, so that the definition is reasonable. Also, the six fundamental properties survive the passage to the limit. As a simple application of these ideas, consider discrete random variables such as the geometric ($p$) or Poisson ($\mu$), which are integer-valued but unbounded.

Example 11.2.1: Unbounded, nonnegative, integer-valued random variables

The random variable $X$ may be expressed

$X = \sum_{k = 0}^{\infty} k I_{E_k}$, where $E_k = \{X = k\}$ with $P(E_k) = p_k$

Let

$X_n = \sum_{k = 0}^{n - 1} kI_{E_k} + n I_{B_n}$, where $B_n = \{X \ge n\}$

Then each $X_n$ is a simple random variable with $X_n \le X_{n + 1}$. If $X(\omega) = k$, then $X_n(\omega) = k = X(\omega)$ for all $n \ge k + 1$. Hence, $X_{n} (\omega) \to X(\omega)$ for all $\omega$. By monotone convergence, $E[X_n] \to E[X]$. Now

$E[X_n] = \sum_{k = 1}^{n - 1} k P(E_k) + nP(B_n)$

If $\sum_{k = 0}^{\infty} kP(E_k) < \infty$, then

$0 \le nP(B_n) = n \sum_{k = n}^{\infty} P(E_k) \le \sum_{k = n}^{\infty} kP(E_k) \to 0$ as $n \to \infty$

Hence

$E[X] = \text{lim}_{n} \ E[X_n] = \sum_{k = 0}^{\infty} k P(E_k)$

We may use this result to establish the expectation for the geometric and Poisson distributions.

Example 11.2.2: $X$ ~ geometric ($p$)

We have $p_k = P(X = k) = q^k p$, $0 \le k$. By the result of Example 11.2.1,

$E[X] = \sum_{k = 0}^{\infty} kpq^k = pq \sum_{k = 1}^{\infty} kq^{k - 1} = \dfrac{pq}{(1 - q)^2} = q/p$

For $Y - 1$ ~ geometric ($p$), $p_k = P(Y = k) = pq^{k - 1}$, $k \ge 1$; since $Y$ has the same distribution as $X + 1$, $E[Y] = E[X] + 1 = q/p + 1 = 1/p$.

Example 11.2.3: $X$ ~ Poisson ($\mu$)

We have $p_k = e^{-\mu} \dfrac{\mu^{k}}{k!}$. By the result of Example 11.2.1,

$E[X] = e^{-\mu} \sum_{k = 0}^{\infty} k \dfrac{\mu^k}{k!} = \mu e^{-\mu} \sum_{k = 1}^{\infty} \dfrac{\mu^{k - 1}}{(k - 1)!} = \mu e^{-\mu} e^{\mu} = \mu$

The general case

We make use of the fact that $X = X^{+} - X^{-}$, where both $X^{+}$ and $X^{-}$ are nonnegative. Then

$E[X] = E[X^{+}] - E[X^{-}]$, provided at least one of $E[X^{+}]$, $E[X^{-}]$ is finite.

Definition. If both $E[X^{+}]$ and $E[X^{-}]$ are finite, $X$ is said to be integrable.

The term integrable comes from the relation of expectation to the abstract Lebesgue integral of measure theory. Again, the basic properties survive the extension. The property (E0) is subsumed in a more general uniqueness property noted in the list of properties discussed below.

Theoretical note

The development of expectation sketched above is exactly the development of the Lebesgue integral of the random variable $X$ as a measurable function on the basic probability space ($\Omega$, $F$, $P$), so that

$E[X] = \int_{\Omega} X\ dP$

As a consequence, we may utilize the properties of the general Lebesgue integral. In its abstract form, it is not particularly useful for actual calculations. A careful use of the mapping of probability mass to the real line by random variable $X$ produces a corresponding mapping of the integral on the basic space to an integral on the real line. Although this integral is also a Lebesgue integral, it agrees with the ordinary Riemann integral of calculus when the latter exists, so that ordinary integrals may be used to compute expectations.
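The truncation argument in Example 11.2.1 also suggests a simple numerical check: since $nP(B_n) \to 0$, a long but finite sum reproduces the mean. A minimal sketch for the Poisson case of Example 11.2.3 follows; the value $\mu = 5$ and the truncation point 200 are arbitrary choices, and the probabilities are computed through logarithms to avoid overflow in the factorial.

mu = 5;
k  = 0:200;                                  % truncation point far out in the tail for mu = 5
pk = exp(-mu + k*log(mu) - gammaln(k+1));    % Poisson probabilities e^(-mu) mu^k / k!
EX = k*pk'                                   % approximately mu = 5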
Additional properties

The fundamental properties of simple random variables which survive the extension serve as the basis of an extensive and powerful list of properties of expectation of real random variables and real functions of random vectors. Some of the more important of these are listed in the table in Appendix E. We often refer to these properties by the numbers used in that table.

Some basic forms

The mapping theorems provide a number of basic integral (or summation) forms for computation.

In general, if $Z = g(X)$ with distribution functions $F_X$ and $F_Z$, we have the expectation as a Stieltjes integral.

$E[Z] = E[g(X)] = \int g(t) F_X (dt) = \int u F_Z (du)$

If $X$ and $g(X)$ are absolutely continuous, the Stieltjes integrals are replaced by

$E[Z] = \int g(t) f_X (t)\ dt = \int u f_Z (u)\ du$

where limits of integration are determined by $f_X$ or $f_Z$. Justification for use of the density function is provided by the Radon-Nikodym theorem, property (E19).

If $X$ is simple, in a primitive form (including canonical form), then

$E[Z] = E[g(X)] = \sum_{j = 1}^{m} g(c_j) P(C_j)$

If the distribution for $Z = g(X)$ is determined by a csort operation, then

$E[Z] = \sum_{k = 1}^{n} v_k P(Z = v_k)$

The extension to unbounded, nonnegative, integer-valued random variables is shown in Example 11.2.1, above. The finite sums are replaced by infinite series (provided they converge).

For $Z = g(X, Y)$,

$E[Z] = E[g(X, Y)] = \int \int g(t, u) F_{XY} (dtdu) = \int v F_Z (dv)$

In the absolutely continuous case

$E[Z] = E[g(X,Y)] = \int \int g(t,u) f_{XY} (t, u)\ du\ dt = \int v f_Z (v)\ dv$

For joint simple $X, Y$ (Section on Expectation for Simple Random Variables)

$E[Z] = E[g(X, Y)] = \sum_{i = 1}^{n} \sum_{j = 1}^{m} g(t_i, u_j) P(X = t_i, Y = u_j)$

Mechanical interpretation and approximation procedures

In elementary mechanics, since the total mass is one, the quantity

$E[X] = \int t f_X (t)\ dt$

is the location of the center of mass. This theoretically rigorous fact may be derived heuristically from an examination of the expectation for a simple approximating random variable. Recall the discussion of the m-procedure for discrete approximation in the unit on Distribution Approximations. The range of $X$ is divided into equal subintervals. The values of the approximating random variable are at the midpoints of the subintervals. The associated probability is the probability mass in the subinterval, which is approximately $f_X (t_i) dx$, where $dx$ is the length of the subinterval. This approximation improves with an increasing number of subdivisions, with corresponding decrease in $dx$. The expectation of the approximating simple random variable $X_s$ is

$E[X_s] = \sum_{i} t_i f_X(t_i) dx \approx \int tf_X(t)\ dt$

The approximation improves with increasingly fine subdivisions. The center of mass of the approximating distribution approaches the center of mass of the smooth distribution. It should be clear that a similar argument for $g(X)$ leads to the integral expression

$E[g(X)] = \int g(t) f_X (t)\ dt$

This argument shows that we should be able to use tappr to set up for approximating the expectation $E[g(X)]$ as well as for approximating $P(g(X) \in M)$, etc. We return to this in a later section.

Mean values for some absolutely continuous distributions

Uniform on $[a, b]$. $f_X (t) = \dfrac{1}{b-a}$, $a \le t \le b$. The center of mass is at $(a + b)/2$.
To calculate the value formally, we write
$E[X] = \int tf_X (t)\ dt = \dfrac{1}{b - a} \int_{a}^{b} t\ dt = \dfrac{b^2 - a^2}{2(b - a)} = \dfrac{b + a}{2}$
Symmetric triangular on $[a, b]$. The graph of the density is an isosceles triangle with base on the interval $[a, b]$. By symmetry, the center of mass, hence the expectation, is at the midpoint $(a + b)/2$.
Exponential ($\lambda$). $f_X (t) = \lambda e^{-\lambda t}$, $0 \le t$. Using a well known definite integral (see Appendix B), we have
$E[X] = \int tf_X(t)\ dt = \int_{0}^{\infty} \lambda te^{-\lambda t}\ dt = 1/\lambda$
Gamma ($\alpha, \lambda$). $f_X (t) = \dfrac{1}{\Gamma (\alpha)} t^{\alpha - 1} \lambda^{\alpha} e^{-\lambda t}$, $0 \le t$. Again we use one of the integrals in Appendix B to obtain
$E[X] = \int tf_X (t)\ dt = \dfrac{1}{\Gamma (\alpha)} \int_{0}^{\infty} \lambda^{\alpha} t^{\alpha} e^{-\lambda t}\ dt = \dfrac{\Gamma(\alpha + 1)}{\lambda \Gamma (\alpha)} = \alpha/\lambda$
The last equality comes from the fact that $\Gamma (\alpha + 1) = \alpha \Gamma (\alpha)$.
Beta ($r, s$). $f_X (t) = \dfrac{\Gamma (r + s)}{\Gamma (r) \Gamma (s)} t^{r - 1} (1 - t)^{s - 1}$, $0 < t < 1$. We use the fact that
$\int_{0}^{1} u^{r - 1} (1 - u)^{s - 1} \ du = \dfrac{\Gamma (r) \Gamma (s)}{\Gamma (r + s)}$, $r > 0$, $s > 0$.
$E[X] = \int tf_X (t)\ dt = \dfrac{\Gamma (r + s)}{\Gamma (r) \Gamma (s)} \int_{0}^{1} t^r (1 - t)^{s - 1}\ dt = \dfrac{\Gamma (r + s)}{\Gamma (r) \Gamma (s)} \cdot \dfrac{\Gamma (r + 1) \Gamma (s)}{\Gamma (r + s + 1)} = \dfrac{r}{r + s}$
Weibull ($\alpha, \lambda, v$). $F_X (t) = 1 - e^{-\lambda (t - v)^{\alpha}}$, $\alpha > 0$, $\lambda > 0$, $v \ge 0$, $t \ge v$. Differentiation shows
$f_X (t) = \alpha \lambda (t - v)^{\alpha - 1} e^{-\lambda (t -v)^{\alpha}}$, $t \ge v$
First, consider $Y$ ~ exponential ($\lambda$). For this random variable
$E[Y^r] = \int_{0}^{\infty} t^r \lambda e^{-\lambda t}\ dt = \dfrac{\Gamma (r + 1)}{\lambda^r}$
If $Y$ is exponential (1), then techniques for functions of random variables show that $[\dfrac{1}{\lambda} Y]^{1/\alpha} + v$ ~ Weibull ($\alpha, \lambda, v$). Hence,
$E[X] = \dfrac{1}{\lambda ^{1/\alpha}} E[Y^{1/\alpha}] + v = \dfrac{1}{\lambda ^{1/\alpha}} \Gamma (\dfrac{1}{\alpha} + 1) + v$
Normal ($\mu, \sigma^2$). The symmetry of the distribution about $t = \mu$ shows that $E[X] = \mu$. This, of course, may be verified by integration. A standard trick simplifies the work.
$E[X] = \int_{-\infty}^{\infty} t f_X (t) \ dt = \int_{-\infty}^{\infty} (t - \mu) f_X (t) \ dt + \mu$
We have used the fact that $\int_{-\infty}^{\infty} f_X (t) \ dt = 1$. If we make the change of variable $x = t - \mu$ in the last integral, the integrand becomes an odd function, so that the integral is zero. Thus, $E[X] = \mu$.
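Remark. Several of these mean values are easy to check numerically with MATLAB's built-in integrator (an added sketch; the parameter values below are arbitrary illustrative choices, not used elsewhere in the text).
lambda = 0.3;                                            % exponential (lambda)
Eexp = integral(@(t) t.*lambda.*exp(-lambda*t), 0, Inf)  % 1/lambda = 3.3333
alpha = 2; lam = 0.5;                                    % gamma (alpha, lambda)
fgam = @(t) (1/gamma(alpha))*lam^alpha*t.^(alpha-1).*exp(-lam*t);
Egam = integral(@(t) t.*fgam(t), 0, Inf)                 % alpha/lambda = 4
r = 2; s = 3;                                            % beta (r, s)
fbet = @(t) (gamma(r+s)/(gamma(r)*gamma(s)))*t.^(r-1).*(1-t).^(s-1);
Ebet = integral(@(t) t.*fbet(t), 0, 1)                   % r/(r+s) = 0.4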
Properties and Computation
The properties in the table in Appendix E constitute a powerful and convenient resource for the use of mathematical expectation. These are properties of the abstract Lebesgue integral, expressed in the notation for mathematical expectation.
$E[g(X)] = \int g(X)\ dP$
In the development of additional properties, the four basic properties: (E1) Expectation of indicator functions, (E2) Linearity, (E3) Positivity; monotonicity, and (E4a) Monotone convergence play a foundational role. We utilize the properties in the table, as needed, often referring to them by the numbers assigned in the table.
In this section, we include a number of examples which illustrate the use of various properties. Some are theoretical examples, deriving additional properties or displaying the basis and structure of some in the table. Others apply these properties to facilitate computation.
Example 11.2.4: Probability as expectation
Probability may be expressed entirely in terms of expectation.
• By properties (E1) and positivity (E3a), $P(A) = E[I_A] \ge 0$.
• As a special case of (E1), we have $P(\Omega) = E[I_{\Omega}] = 1$.
• By the countable sums property (E8), $A = \bigvee_i A_i$ implies $P(A) = E[I_A] = E[ \sum_{i} I_{A_i}] = \sum_i E[I_{A_i}] = \sum_i P(A_i)$
Thus, the three defining properties for a probability measure are satisfied.
Remark. There are treatments of probability which characterize mathematical expectation with properties (E0) through (E4a), then define $P(A) = E[I_A]$. Although such a development is quite feasible, it has not been widely adopted.
Example 11.2.5: An indicator function pattern
Suppose $X$ is a real random variable and $E = X^{-1} (M) = \{\omega: X(\omega) \in M\}$. Then
$I_E = I_M (X)$
To see this, note that $X(\omega) \in M$ iff $\omega \in E$, so that $I_E(\omega) = 1$ iff $I_M(X(\omega)) = 1$. Similarly, if $E = X^{-1} (M) \cap Y^{-1} (N)$, then $I_E = I_M (X) I_N (Y)$. We thus have, by (E1),
$P(X \in M) = E[I_M(X)]$ and $P(X \in M, Y \in N) = E[I_M(X) I_N (Y)]$
Example 11.2.6: Alternate interpretation of the mean value
$E[(X - c)^2]$ is a minimum iff $c = E[X]$, in which case
$E[(X - E[X])^2] = E[X^2] - E^2[X]$
INTERPRETATION. If we approximate the random variable $X$ by a constant $c$, then for any $\omega$ the error of approximation is $X(\omega) - c$. The probability weighted average of the square of the error (often called the mean squared error) is $E[(X - c)^2]$. This average squared error is smallest iff the approximating constant $c$ is the mean value.
VERIFICATION
We expand $(X - c)^2$ and apply linearity to obtain
$E[(X - c)^2] = E[X^2 - 2cX + c^2] = E[X^2] - 2E[X] c + c^2$
The last expression is a quadratic in $c$ (since $E[X^2]$ and $E[X]$ are constants). The usual calculus treatment shows the expression has a minimum for $c = E[X]$. Substitution of this value for $c$ shows the expression reduces to $E[X^2] - E^2[X]$.
A number of inequalities are listed among the properties in the table. The basis for these inequalities is usually some standard analytical inequality on random variables to which the monotonicity property is applied. We illustrate with a derivation of the important Jensen's inequality.
Example 11.2.7: Jensen's inequality
If $X$ is a real random variable and $g$ is a convex function on an interval $I$ which includes the range of $X$, then
$E[g(X)] \ge g(E[X])$
VERIFICATION
The function $g$ is convex on $I$ iff for each $t_0 \in I$ there is a number $\lambda (t_0)$ such that
$g(t) \ge g(t_0) + \lambda (t_0) (t - t_0)$
This means there is a line through ($t_0, g(t_0)$) such that the graph of $g$ lies on or above it. If $a \le X \le b$, then by monotonicity $E[a] = a \le E[X] \le E[b] = b$ (this is the mean value property (E11)). We may choose $t_0 = E[X] \in I$. If we designate the constant $\lambda (E[X])$ by $c$, we have
$g(X) \ge g(E[X]) + c(X - E[X])$
Recalling that $E[X]$ is a constant, we take expectation of both sides, using linearity and monotonicity, to get
$E[g(X)] \ge g(E[X]) + c(E[X] - E[X]) = g(E[X])$
Remark. It is easy to show that the function $\lambda (\cdot)$ is nondecreasing. This fact is used in establishing Jensen's inequality for conditional expectation.
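Remark. The minimizing property of Example 11.2.6 is easy to exhibit numerically (an added sketch; the distribution below is an arbitrary illustrative choice). The mean squared error $E[(X - c)^2]$ is evaluated on a grid of values of $c$; the minimizer agrees with $E[X]$ and the minimum agrees with $E[X^2] - E^2[X]$.
X = [-2 0 1 3];  PX = [0.2 0.3 0.4 0.1];     % illustrative simple distribution
EX = X*PX';                                  % E[X] = 0.3
c = -1:0.01:2;                               % grid of candidate constants
mse = arrayfun(@(cc) sum((X - cc).^2.*PX), c);
[mmin, k] = min(mse);
[c(k) EX]                                    % minimizer agrees with E[X] = 0.3
[mmin (X.^2)*PX' - EX^2]                     % minimum agrees with E[X^2] - E[X]^2 = 2.01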
The product rule for expectations of independent random variables
Example 11.2.8: Product rule for simple random variables
Consider an independent pair $\{X, Y\}$ of simple random variables
$X = \sum_{i = 1}^{n} t_i I_{A_i}$ $Y = \sum_{j = 1}^{m} u_j I_{B_j}$ (both in canonical form)
We know that each pair $\{A_i, B_j\}$ is independent, so that $P(A_i B_j) = P(A_i) P(B_j)$. Consider the product $XY$. According to the pattern described after Example 9 from "Mathematical Expectation: Simple Random Variables,"
$XY = \sum_{i = 1}^{n} t_i I_{A_i} \sum_{j = 1}^{m} u_j I_{B_j} = \sum_{i = 1}^{n} \sum_{j = 1}^{m} t_i u_j I_{A_i B_j}$
The latter double sum is a primitive form, so that
$E[XY] = \sum_{i = 1}^{n} \sum_{j = 1}^{m} t_i u_j P(A_i B_j) = \sum_{i = 1}^{n} \sum_{j = 1}^{m} t_i u_j P(A_i) P(B_j) = (\sum_{i = 1}^{n} t_i P(A_i)) (\sum_{j = 1}^{m} u_j P(B_j)) = E[X]E[Y]$
Thus the product rule holds for independent simple random variables.
Example 11.2.9: Approximating simple functions for an independent pair
Suppose $\{X, Y\}$ is an independent pair, with an approximating simple pair $\{X_s, Y_s\}$. As functions of $X$ and $Y$, respectively, the pair $\{X_s, Y_s\}$ is independent. According to Example 11.2.8, above, the product rule $E[X_s Y_s] = E[X_s] E[Y_s]$ must hold.
Example 11.2.10: Product rule for an independent pair
For $X \ge 0$, $Y \ge 0$, there exist nondecreasing sequences $\{X_n: 1 \le n\}$ and $\{Y_n: 1 \le n\}$ of simple random variables increasing to $X$ and $Y$, respectively. The sequence $\{X_n Y_n: 1 \le n\}$ is also a nondecreasing sequence of simple random variables, increasing to $XY$. By the monotone convergence theorem (MC)
$E[X_n] \nearrow E[X]$, $E[Y_n] \nearrow E[Y]$, and $E[X_n Y_n] \nearrow E[XY]$
Since $E[X_n Y_n] = E[X_n] E[Y_n]$ for each $n$, we conclude $E[XY] = E[X] E[Y]$
In the general case,
$XY = (X^{+} - X^{-}) (Y^{+} - Y^{-}) = X^{+}Y^{+} - X^{+} Y^{-} - X^{-} Y^{+} + X^{-} Y^{-}$
Application of the product rule to each nonnegative pair and the use of linearity gives the product rule for the pair $\{X, Y\}$.
Remark. It should be apparent that the product rule can be extended to any finite independent class.
Example 11.2.11: The joint distribution of three random variables
The class $\{X, Y, Z\}$ is independent, with the marginal distributions shown below. Let $W = g(X, Y, Z) = 3X^2 + 2XY - 3XYZ$. Determine $E[W]$.
X = 0:4; Y = 1:2:7; Z = 0:3:12;
PX = 0.1*[1 3 2 3 1]; PY = 0.1*[2 2 3 3]; PZ = 0.1*[2 2 1 3 2];
icalc3 % Setup for joint dbn for {X,Y,Z}
Enter row matrix of X-values X
Enter row matrix of Y-values Y
Enter row matrix of Z-values Z
Enter X probabilities PX
Enter Y probabilities PY
Enter Z probabilities PZ
Use array operations on matrices X, Y, Z, PX, PY, PZ, t, u, v, and P
EX = X*PX' % E[X]
EX = 2
EX2 = (X.^2)*PX' % E[X^2]
EX2 = 5.4000
EY = Y*PY' % E[Y]
EY = 4.4000
EZ = Z*PZ' % E[Z]
EZ = 6.3000
G = 3*t.^2 + 2*t.*u - 3*t.*u.*v; % W = g(X,Y,Z) = 3X^2 + 2XY - 3XYZ
[W,PW] = csort(G,P); % Distribution for W
EW = W*PW' % E[W]
EW = -132.5200 % Agrees with 3*EX2 + 2*EX*EY - 3*EX*EY*EZ, by independence
Example 11.2.12: A function with a compound definition: truncated exponential
Suppose $X$ ~ exponential (0.3). Let
$Z = \begin{cases} X^2 & \text{for } X \le 4 \ 16 & \text{for } X > 4 \end{cases} = I_{[0, 4]} (X) X^2 + I_{(4, \infty]} (X) 16$
Determine $E[Z]$.
Analytic Solution
$E[g(X)] = \int g(t) f_X (t) \ dt = \int_{0}^{\infty} I_{[0, 4]} (t) t^2 0.3 e^{-0.3t}\ dt + 16 E[I_{(4, \infty]} (X)]$
$= \int_{0}^{4} t^2 0.3 e^{-0.3t}\ dt + 16 P(X > 4) \approx 7.4972$ (by Maple)
APPROXIMATION
To obtain a simple approximation, we must approximate the exponential by a bounded random variable.
Since $P(X > 50) = e^{-15} \approx 3 \cdot 10^{-7}$, we may safely truncate $X$ at 50.
tappr
Enter matrix [a b] of x-range endpoints [0 50]
Enter number of x approximation points 1000
Enter density as a function of t 0.3*exp(-0.3*t)
Use row matrices X and PX as in the simple case
M = X <= 4;
G = M.*X.^2 + 16*(1 - M); % g(X)
EG = G*PX' % E[g(X)]
EG = 7.4972
[Z,PZ] = csort(G,PX); % Distribution for Z = g(X)
EZ = Z*PZ' % E[Z] from distribution
EZ = 7.4972
Because of the large number of approximation points, the results agree quite closely with the theoretical value.
Example 11.2.13: Stocking for random demand (see Exercise 4 from "Problems on functions of random variables")
The manager of a department store is planning for the holiday season. A certain item costs $c$ dollars per unit and sells for $p$ dollars per unit. If the demand exceeds the amount $m$ ordered, additional units can be special ordered for $s$ dollars per unit ($s > c$). If demand is less than the amount ordered, the remaining stock can be returned (or otherwise disposed of) at $r$ dollars per unit ($r < c$). Demand $D$ for the season is assumed to be a random variable with Poisson ($\mu$) distribution. Suppose $\mu = 50$, $c = 30$, $p = 50$, $s = 40$, $r = 20$. What amount $m$ should the manager order to maximize the expected profit?
PROBLEM FORMULATION
Suppose $D$ is the demand and $X$ is the profit. Then
For $D \le m$, $X = D(p - c) - (m - D) (c - r) = D(p - r) + m (r - c)$
For $D > m$, $X = m(p - c) + (D - m) (p - s) = D(p - s) + m(s - c)$
It is convenient to write the expression for $X$ in terms of $I_M$, where $M = (-\infty, m]$. Thus
$X = I_M (D) [D (p - r) + m(r - c)] + [1 - I_M(D)] [D(p - s) + m (s - c)]$
$= D(p - s) + m(s - c) + I_M(D) [D(p - r) + m(r - c) - D(p - s) - m(s - c)]$
$= D(p - s) + m(s - c) + I_M(D) (s - r) (D - m)$
Then
$E[X] = (p - s) E[D] + m(s - c) + (s - r) E[I_M(D) D] - (s - r) m E[I_M (D)]$
Analytic Solution
For $D$ ~ Poisson ($\mu$), $E[D] = \mu$ and $E[I_M(D)] = P(D \le m)$
$E[I_M(D) D] = e^{-\mu} \sum_{k = 1}^{m} k \dfrac{\mu^k}{k!} = \mu e^{-\mu} \sum_{k = 1}^{m} \dfrac{\mu^{k - 1}}{(k - 1)!} = \mu P(D \le m - 1)$
Hence,
$E[X] = (p - s) E[D] + m(s - c) + (s - r) E[I_M (D) D] - (s - r) m E[I_M(D)]$
$= (p - s)\mu + m(s - c) + (s - r) \mu P(D \le m - 1) - (s - r) m P(D \le m)$
Because of the discrete nature of the problem, we cannot solve for the optimum $m$ by ordinary calculus. We compute $E[X]$ for various $m$ about $m = \mu$ and determine the optimum. We do so with the aid of MATLAB and the m-function cpoisson.
mu = 50; c = 30; p = 50; s = 40; r = 20;
m = 45:55;
EX = (p - s)*mu + m*(s - c) + (s - r)*mu*(1 - cpoisson(mu,m))...
    -(s - r)*m.*(1 - cpoisson(mu,m+1));
disp([m;EX]')
45.0000 930.8604
46.0000 935.5231
47.0000 939.1895
48.0000 941.7962
49.0000 943.2988
50.0000 943.6750 % Optimum m = 50
51.0000 942.9247
52.0000 941.0699
53.0000 938.1532
54.0000 934.2347
55.0000 929.3886
A direct solution may be obtained by MATLAB, using finite approximation for the Poisson distribution.
APPROXIMATION
ptest = cpoisson(mu,100) % Check for suitable value of n
ptest = 3.2001e-10
n = 100;
t = 0:n;
pD = ipoisson(mu,t);
for i = 1:length(m) % Step by step calculation for various m
    M = t > m(i);
    G(i,:) = t*(p - r) - M.*(t - m(i))*(s - r) - m(i)*(c - r);
end
EG = G*pD'; % Values agree with the theoretical results to four decimals
An advantage of the second solution, based on simple approximation to D, is that the distribution of gain for each $m$ could be studied — e.g., the maximum and minimum gains.
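Remark. The same search may be carried out without the m-functions cpoisson and ipoisson by forming the Poisson probabilities directly (an added sketch; the truncation of demand at 150 is an assumption which, for $\mu = 50$, introduces negligible error).
mu = 50; c = 30; p = 50; s = 40; r = 20;
t = 0:150;                                      % truncation at 150 (assumed adequate)
pD = exp(-mu + t*log(mu) - gammaln(t+1));       % Poisson(mu) probabilities
m = 45:55;
EX = zeros(size(m));
for i = 1:length(m)
    X = t*(p - r) - (t > m(i)).*(t - m(i))*(s - r) - m(i)*(c - r);  % profit for each demand
    EX(i) = X*pD';
end
[maxEX, k] = max(EX);
[m(k) maxEX]                                    % optimum m = 50, expected profit about 943.67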
Example 11.2.14: A jointly distributed pair
Suppose the pair $\{X, Y\}$ has joint density $f_{XY} (t, u) = 3u$ on the triangular region bounded by $u = 0$, $u = 1 + t$, $u = 1 - t$ (see Figure 11.2.1). Let $Z = g(X, Y) = X^2 + 2XY$. Determine $E[Z]$.
Figure 11.2.1. The density for Example 11.2.14.
Analytic Solution
$E[Z] = \int \int (t^2 + 2tu) f_{XY} (t, u) \ dudt$
$= 3 \int_{-1}^{0} \int_{0}^{1 + t} (t^2 u + 2tu^2) \ dudt + 3 \int_{0}^{1} \int_{0}^{1 - t} (t^2 u + 2tu^2)\ dudt = 1/10$
APPROXIMATION
tuappr
Enter matrix [a b] of X-range endpoints [-1 1]
Enter matrix [c d] of Y-range endpoints [0 1]
Enter number of X approximation points 400
Enter number of Y approximation points 200
Enter expression for joint density 3*u.*(u<=min(1+t,1-t))
Use array operations on X, Y, PX, PY, t, u, and P
G = t.^2 + 2*t.*u; % g(X,Y) = X^2 + 2XY
EG = total(G.*P) % E[g(X,Y)]
EG = 0.1006 % Theoretical value = 1/10
[Z, PZ] = csort(G,P); % Distribution for Z
EZ = Z*PZ' % E[Z] from distribution
EZ = 0.1006
Example 11.2.15: A function with a compound definition
The pair $\{X, Y\}$ has joint density $f_{XY} (t, u) = 1/2$ on the square region bounded by $u = 1 + t$, $u = 1 - t$, $u = 3 - t$, and $u = t - 1$ (see Figure 11.2.2).
$W = \begin{cases} X & \text{for max } \{X, Y\} \le 1 \ 2Y & \text{for max } \{X, Y\} > 1 \end{cases} = I_Q (X, Y) X + I_{Q^c} (X,Y) 2Y$
where $Q = \{(t, u): \text{max } \{t, u\} \le 1\} = \{(t, u): t \le 1, u \le 1\}$. Determine $E[W]$.
Figure 11.2.2. The density for Example 11.2.15.
Analytic Solution
The intersection of the region $Q$ and the square is the set for which $0 \le t \le 1$ and $1 - t \le u \le 1$. Reference to the figure shows three regions of integration.
$E[W] = \dfrac{1}{2} \int_0^1 \int_{1 - t}^{1} t\ dudt + \dfrac{1}{2} \int_{0}^{1} \int_{1}^{1 + t} 2u\ dudt + \dfrac{1}{2} \int_{1}^{2} \int_{t - 1}^{3 - t} 2u \ dudt = 11/6 \approx 1.8333$
APPROXIMATION
tuappr
Enter matrix [a b] of X-range endpoints [0 2]
Enter matrix [c d] of Y-range endpoints [0 2]
Enter number of X approximation points 200
Enter number of Y approximation points 200
Enter expression for joint density ((u<=min(t+1,3-t))& ...
    (u>=max(1-t,t-1)))/2
Use array operations on X, Y, PX, PY, t, u, and P
M = max(t,u)<=1;
G = t.*M + 2*u.*(1 - M); % Z = g(X,Y)
EG = total(G.*P) % E[g(X,Y)]
EG = 1.8340 % Theoretical 11/6 = 1.8333
[Z,PZ] = csort(G,P); % Distribution for Z
EZ = dot(Z,PZ) % E[Z] from distribution
EZ = 1.8340
Special forms for expectation
The various special forms related to property (E20a) are often useful. The general result, which we do not need, is usually derived by an argument which employs a general form of what is known as Fubini's theorem. The special form (E20b)
$E[X] = \int_{-\infty}^{\infty} [u(t) - F_X (t)]\ dt$
may be derived from (E20a) by use of integration by parts for Stieltjes integrals. However, we use the relationship between the graph of the distribution function and the graph of the quantile function to show the equivalence of (E20b) and (E20f). The latter property is readily established by elementary arguments.
Example 11.2.16: The property (E20f)
If $Q$ is the quantile function for the distribution function $F_X$, then
$E[g(X)] = \int_{0}^{1} g[Q(u)]\ du$
VERIFICATION
If $Y = Q(U)$, where $U$ ~ uniform on (0, 1), then $Y$ has the same distribution as $X$. Hence,
$E[g(X)] = E[g(Q(U))] = \int g(Q(u)) f_U (u)\ du = \int_{0}^{1} g(Q(u))\ du$
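Remark. Property (E20f) is easy to check numerically for a distribution with a known quantile function (an added sketch; the exponential case with $\lambda = 2$ below is an arbitrary illustrative choice, for which $Q(u) = -\text{ln}(1 - u)/\lambda$).
lambda = 2;
Q = @(u) -log(1 - u)/lambda;             % quantile function for exponential (lambda)
EX  = integral(Q, 0, 1)                  % E[X] = 1/lambda = 0.5
EX2 = integral(@(u) Q(u).^2, 0, 1)       % E[X^2] = 2/lambda^2 = 0.5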
Example 11.2.17: Reliability and expectation
In reliability, if $X$ is the life duration (time to failure) for a device, the reliability function gives the probability that at any time $t$ the device is still operative. Thus
$R(t) = P(X > t) = 1 - F_X(t)$
According to property (E20b)
$E[X] = \int_{0}^{\infty} R(t) \ dt$
Example 11.2.18: Use of the quantile function
Suppose $F_X (t) = t^a$, $a > 0$, $0 \le t \le 1$. Then $Q(u) = u^{1/a}$, $0 \le u \le 1$.
$E[X] = \int_{0}^{1} u^{1/a} \ du = \dfrac{1}{1 + 1/a} = \dfrac{a}{a + 1}$
The same result could be obtained by using $f_X(t) = F_{X}^{'} (t)$ and evaluating $\int t f_X (t)\ dt$.
Example 11.2.19: Equivalence of (E20b) and (E20f)
For the special case $g(X) = X$, Figure 11.2.3(a) shows that $\int_{0}^{1} Q(u) \ du$ is the difference of the shaded areas
$\int_{0}^{1} Q(u)\ du = \text{Area } A - \text{Area } B$
The corresponding graph of the distribution function $F$ is shown in Figure 11.2.3(b). Because of the construction, the areas of the regions marked $A$ and $B$ are the same in the two figures. As may be seen,
$\text{Area } A = \int_{0}^{\infty} [1 - F(t)]\ dt$ and $\text{Area } B = \int_{-\infty}^{0} F(t)\ dt$
Use of the unit step function $u(t) = 1$ for $t > 0$ and 0 for $t < 0$ (defined arbitrarily at $t = 0$) enables us to combine the two expressions to get
$\int_{0}^{1} Q(u)\ du = \text{Area } A - \text{Area } B = \int_{-\infty}^{\infty} [u(t) - F(t)]\ dt$
Figure 11.2.3. Equivalence of properties (E20b) and (E20f).
Property (E20c) is a direct result of linearity and (E20b), with the unit step functions cancelling out.
Example 11.2.20: Property (E20d), useful inequalities
Suppose $X \ge 0$. Then
$\sum_{n = 0}^{\infty} P(X \ge n + 1) \le E[X] \le \sum_{n = 0}^{\infty} P(X \ge n) \le N \sum_{k = 0}^{\infty} P(X \ge kN)$, for all $N \ge 1$
VERIFICATION
For $X \ge 0$, by (E20b)
$E[X] = \int_{0}^{\infty} [1 - F(t)]\ dt = \int_{0}^{\infty} P(X > t)\ dt$
Since $F$ can have only a countable number of jumps on any interval and $P(X > t)$ and $P(X \ge t)$ differ only at jump points, we may assert
$\int_{a}^{b} P(X > t)\ dt = \int_{a}^{b} P(X \ge t)\ dt$
For each nonnegative integer $n$, let $E_n = [n, n + 1)$. Splitting the integral over these intervals gives
$E[X] = \sum_{n = 0}^{\infty} \int_{E_n} P(X \ge t) \ dt$
Since $P(X \ge t)$ is nonincreasing in $t$ and each $E_n$ has unit length,
$P(X \ge n + 1) \le \int_{E_n} P(X \ge t)\ dt \le P(X \ge n)$
Summing over $n$ gives the first two inequalities. The third inequality follows from the fact that
$\int_{kN}^{(k + 1)N} P(X \ge t) \ dt \le N \int_{E_{kN}} P(X \ge t) \ dt \le NP(X \ge kN)$
Remark. Property (E20d) is used primarily for theoretical purposes. The special case (E20e) is more frequently used.
Example 11.2.21: Property (E20e)
If $X$ is nonnegative, integer valued, then
$E[X] = \sum_{k = 1}^{\infty} P(X \ge k) = \sum_{k = 0}^{\infty} P(X > k)$
VERIFICATION
The result follows as a special case of (E20d). For integer valued random variables, $P(X \ge t) = P(X \ge n)$ on $E_n$ and $P(X \ge t) = P(X > n) = P(X \ge n + 1)$ on $E_{n + 1}$.
An elementary derivation of (E20e) can be constructed as follows.
Example 11.2.22: (E20e) for integer-valued random variables
By definition $E[X] = \sum_{k = 1}^{\infty} kP(X = k) = \text{lim}_n \sum_{k = 1}^{n} kP(X =k)$
Now for each finite $n$,
$\sum_{k = 1}^{n} kP(X = k) = \sum_{k = 1}^{n} \sum_{j = 1}^{k} P(X = k) = \sum_{j = 1}^{n} \sum_{k = j}^{n} P(X = k) = \sum_{j = 1}^{n} P(j \le X \le n)$
Each term $P(j \le X \le n)$ increases to $P(X \ge j)$ as $n \to \infty$, so taking limits yields the desired result.
Example 11.2.23.
the geometric distribution Suppose $X$ ~ geometric ($p$). Then $P(X \ge k) = q^k$. Use of (E20e) gives $E[X] = \sum_{k = 1}^{\infty} q^k = q \sum_{k = 0}^{\infty} q^k = \dfrac{q}{1 - q} = q/p$
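Remark. Property (E20e) for the geometric case may also be checked numerically (an added illustration; the value $p = 0.25$ and the truncation point 200 are arbitrary choices).
p = 0.25;  q = 1 - p;
k = 1:200;                               % truncate the series at 200 terms
EXtail = sum(q.^k)                       % sum of P(X >= k) = q^k; approximately q/p = 3
EXdef  = sum((0:200).*p.*q.^(0:200))     % direct definition; also approximately 3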
Exercise $1$ (See Exercise 1 from "Problems on Distribution and Density Functions", m-file npr07_01.m). The class $\{C_j: 1 \le j \le 10\}$ is a partition. Random variable $X$ has values {1, 3, 2, 3, 4, 2, 1, 3, 5, 2} on $C_1$ through $C_{10}$, respectively, with probabilities 0.08, 0.13, 0.06, 0.09, 0.14, 0.11, 0.12, 0.07, 0.11, 0.09. Determine $E[X]$ Answer % file npr07_01.m % Data for Exercise 1 from "Problems on Distribution and Density Functions" T = [1 3 2 3 4 2 1 3 5 2]; pc = 0.01*[8 13 6 9 14 11 12 7 11 9]; disp('Data are in T and pc') npr07_01 Data are in T and pc EX = T*pc' EX = 2.7000 [X,PX] csort(T,pc): % Alternate using X, PX ex = X*PX' ex = 2.7000 Exercise $2$ (See Exercise 2 from "Problems on Distribution and Density Functions", m-file npr07_02.m ). A store has eight items for sale. The prices are $3.50,$5.00, $3.50,$7.50, $5.00,$5.00, $3.50, and$7.50, respectively. A customer comes in. She purchases one of the items with probabilities 0.10, 0.15, 0.15, 0.20, 0.10 0.05, 0.10 0.15. The random variable expressing the amount of her purchase may be written $X = 3.5I_{C_1} + 5.0 I_{C_2} + 3.5I_{C_3} + 7.5I_{C_4} + 5.0I_{C_5} + 5.0I_{C_6} + 3.5I_{C_7} + 7.5I_{C_8}$ Determine the expection $E[X]$ of the value of her purchase. Answer % file npr07_02.m % Data for Exercise 2 from "Problems on Distribution and Density Functions" T = [3.5 5.0 3.5 7.5 5.0 5.0 3.5 7.5]; pc = 0.01*[10 15 15 20 10 5 10 15]; disp('Data are in T and pc') npr07_02 Data are in T and pc EX = T*pc' EX = 5.3500 [X,PX] csort(T,pc) ex = X*PX' ex = 5.3500 Exercise $3$ See Exercise 12 from "Problems on Random Variables and Probabilities", and Exercise 3 from "Problems on Distribution and Density Functions," m-file npr06_12.m). The class $\{A, B, C, D\}$ has minterm probabilities $pm =$ 0.001 * [5 7 6 8 9 14 22 33 21 32 50 75 86 129 201 302] Determine the mathematical expection for the random variable $X = I_A + I_B + I_C + I_D$, which counts the number of the events which occur on a trial. Answer % file npr06_12.m % Data for Exercise 12 from "Problems on Random Variables and Probabilities" pm = 0.001*[5 7 6 8 9 14 22 33 21 32 50 75 86 129 201 302]; c = [1 1 1 1 0]; disp('Minterm probabilities in pm, coefficients in c') npr06_12 Minterm probabilities in pm, coefficients in c canonic Enter row vector of coefficients c Enter row vector of minterm probabilities pm Use row matrices X and PX for calculations call for XDBN to view the distribution EX = X*PX' EX = 2.9890 T = sum(mintable(4)); [x,px] = csort(T,pm); ex = x*px ex = 2.9890 Exercise $4$ (See Exercise 5 from "Problems on Distribution and Density Functions"). In a thunderstorm in a national park there are 127 lightning strikes. Experience shows that the probability of of a lightning strike starting a fire is about 0.0083. Determine the expected number of fires. Answer $X$ ~ binomial (127, 0.0083), $E[X] = 127 \cdot 0.0083 = 1.0541$ Exercise $5$ (See Exercise 8 from "Problems on Distribution and Density Functions"). Two coins are flipped twenty times. Let $X$ be the number of matches (both heads or both tails). Determine $E[X]$ Answer $X$ ~ binomial (20, 1/2). $E[X] = 20 \cdot 0.5 = 10$ Exercise $6$ (See Exercise 12 from "Problems on Distribution and Density Functions"). A residential College plans to raise money by selling “chances” on a board. Fifty chances are sold. A player pays $10 to play; he or she wins$30 with probability $p = 0.2$. The profit to the College is $X = 50 \cdot 10 - 30N$, where $N$ is the numbe of winners Determine the expected profit $E[X]$. 
Answer $N$ ~ binomial (50, 0.2). $E[N] = 50 \cdot 0.2 = 10$. $E[X] = 500 - 30E[N] = 200$. Exercise $7$ (See Exercise 19 from "Problems on Distribution and Density Functions"). The number of noise pulses arriving on a power circuit in an hour is a random quantity having Poisson (7) distribution. What is the expected number of pulses in an hour? Answer $X$ ~ Poisson (7). $E[X] = 7$. Exercise $8$ (See Exercise 24 and Exercise 25 from "Problems on Distribution and Density Functions"). The total operating time for the units in Exercise 24 is a random variable $T$ ~ gamma (20, 0.0002). What is the expected operating time? Answer $X$ ~ gamma (20, 0.0002). $E[X] = 20/0.0002 = 100,000$. Exercise $9$ (See Exercise 41 from "Problems on Distribution and Density Functions"). Random variable $X$ has density function $f_X (t) = \begin{cases} (6/5) t^2 & \text{for } 0 \le t \le 1 \ (6/5)(2 - t) & \text{for } 1 \le t \le 2 \end{cases} = I_{[0, 1]}(t) \dfrac{6}{5} t^2 + I_{(1, 2]} (t) \dfrac{6}{5} (2 - t)$. What is the expected value $E[X]$? Answer $E[X] = \int t f_X(t)\ dt = \dfrac{6}{5} \int_{0}^{1} t^3 \ dt + \dfrac{6}{5} \int_{1}^{2} (2t - t^2)\ dt = \dfrac{11}{10}$ Exercise $10$ Truncated exponential. Suppose $X$ ~ exponential ($\lambda$) and $Y = I_{[0, a]} (X) X + I_{a, \infty} (X) a$. a. Use the fact that $\int_{0}^{\infty} te^{-\lambda t} \ dt = \dfrac{1}{\lambda ^2}$ and $\int_{a}^{\infty} te^{-\lambda t}\ dt = \dfrac{1}{\lambda ^2} e^{-\lambda t} (1 + \lambda a)$ to determine an expression for $E[Y]$. b. Use the approximation method, with $\lambda = 1/50$, $a = 30$. Approximate the exponential at 10,000 points for $0 \le t \le 1000$. Compare the approximate result with the theoretical result of part (a). Answer $E[Y] = \int g(t) f_X (t)\ dt = \int_{0}^{a} t \lambda e^{-\lambda t} \ dt + aP(X > a) =$ $\dfrac{\lambda}{\lambda ^2} [1 - e^{-\lambda a} (1 + \lambda a)] + a e^{-\lambda a} = \dfrac{1}{\lambda} (1 - e^{-\lambda a})$ tappr Enter matrix [a b] of x-range endpoints [0 1000] Enter number of x approximation points 10000 Enter density as a function of t (1/50)*exp(-t/50) Use row matrices X and PX as in the simple case G = X.*(X<=30) + 30*(X>30); EZ = G8PX' EZ = 22.5594 ez = 50*(1-exp(-30/50)) %Theoretical value ez = 22.5594 Exercise $11$ (See Exercise 1 from "Problems On Random Vectors and Joint Distributions", m-file npr08_01.m). Two cards are selected at random, without replacement, from a standard deck. Let $X$ be the number of aces and $Y$ be the number of spades. Under the usual assumptions, determine the joint distribution. Determine $E[X]$, $E[Y]$, $E[X^2]$, $E[Y^2]$, and $E[XY]$. Answer npr08_01 Data in Pn, P, X, Y jcalc Enter JOINT PROBABILITIES (as on the plane) P Enter row marix of VALUES of X X Enter row marix of VALUES of Y Y Use array operations on matrices X, Y, PX, PY, t, u, and P EX = X*PX' EX = 0.1538 ex = total(t.*P) % Alternate ex = 0.1538 EY = Y*PY' EY = 0.5000 EX2 = (X.^2)*PX' EX2 = 0.1629 EY2 = (Y.^2)*PY' EY2 = 0.6176 EXY = total(t.*u.*P) EXY = 0.0769 Exercise $12$ (See Exercise 2 from "Problems On Random Vectors and Joint Distributions", m-file npr08_02.m ). Two positions for campus jobs are open. Two sophomores, three juniors, and three seniors apply. It is decided to select two at random (each possible pair equally likely). Let $X$ be the number of sophomores and $Y$ be the number of juniors who are selected. Determine the joint distribution for $\{X, Y\}$ and $E[X]$, $E[Y]$, $E[X^2]$, $E[Y^2]$, and $E[XY]$. 
Answer npr08_02 Data are in X, Y, Pn, P jcalc ----------------------- EX = X*PX' EX = 0.5000 EY = Y*PY' EY = 0.7500 EX2 = (X.^2)*PX' EX2 = 0.5714 EY2 = (Y.^2)*PY' EY2 = 0.9643 EXY = total(t.*u.*P) EXY = 0.2143 Exercise $13$ (See Exercise 3 from "Problems On Random Vectors and Joint Distributions", m-file npr08_03.m ). A die is rolled. Let X be the number of spots that turn up. A coin is flipped $X$ times. Let $Y$ be the number of heads that turn up. Determine the joint distribution for the pair $\{X, Y\}$. Assume $P(X = k) = 1/6$ for $1 \le k \le 6$ and for each $k$, $P(Y = j|X = k)$ has the binomial $(k, 1/2)$ distribution. Arrange the joint matrix as on the plane, with values of $Y$ increasing upward. Determine the expected value $E[Y]$ Answer npr08_03 Data are in X, Y, P, PY jcalc ----------------------- EX = X*PX' EX = 3.5000 EY = Y*PY' EY = 1.7500 EX2 = (X.^2)*PX' EX2 = 15.1667 EY2 = (Y.^2)*PY' EY2 = 4.6667 EXY = total(t.*u.*P) EXY = 7.5833 Exercise $14$ (See Exercise 4 from "Problems On Random Vectors and Joint Distributions", m-file npr08_04.m ). As a variation of Exercise, suppose a pair of dice is rolled instead of a single die. Determine the joint distribution for $\{X, Y\}$ and determine $E[Y]$. Answer npr08_04 Data are in X, Y, P jcalc ----------------------- EX = X*PX' EX = 7 EY = Y*PY' EY = 3.5000 EX2 = (X.^2)*PX' EX2 = 54.8333 EY2 = (Y.^2)*PY' EY2 = 15.4583 Exercise $15$ (See Exercise 5 from "Problems On Random Vectors and Joint Distributions", m-file npr08_05.m). Suppose a pair of dice is rolled. Let $X$ be the total number of spots which turn up. Roll the pair an additional $X$ times. Let $Y$ be the number of sevens that are thrown on the $X$ rolls. Determine the joint distribution for $\{X,Y\}$ and determine $E[Y]$ Answer npr08_05 Data are in X, Y, P, PY jcalc ----------------------- EX = X*PX' EX = 7.0000 EY = Y*PY' EY = 1.1667 Exercise $16$ (See Exercise 6 from "Problems On Random Vectors and Joint Distributions", m-file npr08_06.m). The pair $\{X,Y\}$ has the joint distribution: $X =$ [-2.3 -0.7 1.1 3.9 5.1] $Y =$ [1.3 2.5 4.1 5.3] $P = \begin{bmatrix} 0.0483 & 0.0357 & 0.0420 & 0.0399 & 0.0441 \ 0.0437 & 0.0323 & 0.0380 & 0.0361 & 0.0399 \ 0.0713 & 0.0527 & 0.0620 & 0.0609 & 0.0551 \ 0.0667 & 0.0493 & 0.0580 & 0.0651 & 0.0589 \end{bmatrix}$ Determine $E[X]$, $E[Y]$, $E[X^2]$, $E[Y^2]$ and $E[XY]$. Answer npr08_06 Data are in X, Y, P jcalc --------------------- EX = X*PX' EX = 1.3696 EY = Y*PY' EY = 3.0344 EX2 = (X.^2)*PX' EX2 = 9.7644 EY2 = (Y.^2)*PY' EY2 = 11.4839 EXY = total(t.*u.*P) EXY = 4.1423 Exercise $17$ (See Exercise 7 from "Problems On Random Vectors and Joint Distributions", m-file npr08_07.m). The pair $\{X, Y\}$ has the joint distribution: $P(X = t, Y = u)$ t = -3.1 -0.5 1.2 2.4 3.7 4.9 u = 7.5 0.0090 0.0396 0.0594 0.0216 0.0440 0.0203 4.1 0.0495 0 0.1089 0.0528 0.0363 0.0231 -2.0 0.0405 0.1320 0.0891 0.0324 0.0297 0.0189 -3.8 0.0510 0.0484 0.0726 0.0132 0 0.0077 Determine $E[X]$, $E[Y]$, $E[X^2]$, $E[Y^2]$ and $E[XY]$. Answer npr08_07 Data are in X, Y, P jcalc --------------------- EX = X*PX' EX = 0.8590 EY = Y*PY' EY = 1.1455 EX2 = (X.^2)*PX' EX2 = 5.8495 EY2 = (Y.^2)*PY' EY2 = 19.6115 EXY = total(t.*u.*P) EXY = 3.6803 Exercise $18$ (See Exercise 8 from "Problems On Random Vectors and Joint Distributions", m-file npr08_08.m). 
The pair $\{X, Y\}$ has the joint distribution: $P(X = t, Y = u)$ t= 1 3 5 7 9 11 13 15 17 19 u = 12 0.0156 0.0191 0.0081 0.0035 0.0091 0.0070 0.0098 0.0056 0.0091 0.0049 10 0.0064 0.0204 0.0108 0.0040 0.0054 0.0080 0.0112 0.0064 0.0104 0.0056 9 0.0196 0.0256 0.0126 0.0060 0.0156 0.0120 0.0168 0.0096 0.0056 0.0084 5 0.0112 0.0182 0.0108 0.0070 0.0182 0.0140 0.0196 0.0012 0.0182 0.0038 3 0.0060 0.0260 0.0162 0.0050 0.0160 0.0200 0.0280 0.0060 0.0160 0.0040 -1 0.0096 0.0056 0.0072 0.0060 0.0256 0.0120 0.0268 0.0096 0.0256 0.0084 -3 0.0044 0.0134 0.0180 0.0140 0.0234 0.0180 0.0252 0.0244 0.0234 0.0126 -5 0.0072 0.0017 0.0063 0.0045 0.0167 0.0090 0.0026 0.0172 0.0217 0.0223 Determine $E[X]$, $E[Y]$, $E[X^2]$, $E[Y^2]$ and $E[XY]$. Answer npr08_08 Data are in X, Y, P jcalc --------------------- EX = X*PX' EX = 10.1000 EY = Y*PY' EY = 3.0016 EX2 = (X.^2)*PX' EX2 = 133.0800 EY2 = (Y.^2)*PY' EY2 = 41.5564 EXY = total(t.*u.*P) EXY = 22.2890 Exercise $19$ (See Exercise 9 from "Problems On Random Vectors and Joint Distributions", m-file npr08_09.m). Data were kept on the effect of training time on the time to perform a job on a production line. $X$ is the amount of training, in hours, and $Y$ is the time to perform the task, in minutes. The data are as follows: $P(X = t, Y = u)$ t = 1 1.5 2 2.5 3 u = 5 0.039 0.011 0.005 0.001 0.001 4 0.065 0.070 0.050 0.015 0.010 3 0.031 0.061 0.137 0.051 0.033 2 0.012 0.049 0.163 0.058 0.039 1 0.003 0.009 0.045 0.025 0.017 Determine $E[X]$, $E[Y]$, $E[X^2]$, $E[Y^2]$ and $E[XY]$. Answer npr08_09 Data are in X, Y, P jcalc --------------------- EX = X*PX' EX = 1.9250 EY = Y*PY' EY = 2.8050 EX2 = (X.^2)*PX' EX2 = 4.0375 EY2 = (Y.^2)*PY' EXY = total(t.*u.*P) EY2 = 8.9850 EXY = 5.1410 For the joint densities in Exercise 20-32 below a. Determine analytically $E[X]$, $E[Y]$, $E[X^2]$, $E[Y^2]$ and $E[XY]$. b. Use a discrete approximation for $E[X]$, $E[Y]$, $E[X^2]$, $E[Y^2]$ and $E[XY]$. Exercise $20$ (See Exercise 10 from "Problems On Random Vectors and Joint Distributions"). $f_{XY}(t, u) = 1$ for $0 \le t \le 1$. $0 \le u \le 2(1-t)$. $f_X(t) = 2(1 -t)$, $0 \le t \le 1$, $f_Y(u) = 1 - u/2$, $0 \le u \le 2$ Answer $E[X] = \int_{0}^{1} 2t(1 - t)\ dt = 1/3$, $E[Y] = 2/3$, $E[X^2] = 1/6$, $E[Y^2] = 2/3$ $E[XY] = \int_{0}^{1} \int_{0}^{2(1-t)} tu\ dudt = 1/6$ tuappr: [0 1] [0 2] 200 400 u<=2*(1-t) EX = 0.3333 EY = 0.6667 EX2 = 0.1667 EY2 = 0.6667 EXY = 0.1667 (use t, u, P) Exercise $21$ (See Exercise 11 from "Problems On Random Vectors and Joint Distribution"). $f_{XY} (t, u) = 1/2$ on the square with vertices at (1, 0), (2, 1) (1, 2), (0, 1). $f_{X} (t) = f_{Y} (t) = I_{[0, 1]} (t) t + I_{(1, 2]} (t) (2 - t)$ Answer $E[X] = E[Y] = \int_{0}^{1} t^2 \ dt + \int_{1}^{t} (2t - t^2) \ dt = 1$, $E[X^2] = E[Y^2] = 7/6$ $E[XY] = (1/2) \int_{0}^{1} \int_{1 - t}^{1 + t} dt dt + (1/2) \int_{1}^{2} \int_{t - 1}^{3 - t} du dt = 1$ tuappr: [0 2] [0 2] 200 200 0.5*(u<=min(t+1,3-t))&(u>=max(1-t,t-1)) EX = 1.0000 EY = 1.0002 EX2 = 1.1684 EY2 = 1.1687 EXY = 1.0002 Exercise $22$ (See Exercise 12 from "Problems On Random Vectors and Joint Distribution"). $f_{XY} (t, u) = 4t (1 - u)$ for $0 \le t \le 1$. $0 \le u \le 1$ $f_X (t) = 2t$, $0 \le t \le 1$, $f_Y(u) = 2(1 - u)$, $0 \le u \le 1$ Answer $E[X] = 2/3$, $E[Y] = 1/3$, $E[X^2] = 1/2$, $E[Y^2] = 1/6$, $E[XY] = 2/9$ tuappr: [0 1] [0 1] 200 200 4*t.*(1-u) EX = 0.6667 EY = 0.3333 EX2 = 0.5000 EY2 = 0.1667 EXY = 0.2222 Exercise $23$ (See Exercise 13 from "Problems On Random Vectors and Joint Distribution"). 
$f_{XY} (t, u) = \dfrac{1}{8} (t + u)$ for $0 \le t \le 2$, $0 \le u \le 2$ $f_{X} (t) = f_{Y} (t) = \dfrac{1}{4} (t + 1)$, $0 \le t \le 2$ Answer $E[X] = E[Y] = \dfrac{1}[4} \int_{0}^{2} (t^2 + t) \ dt = \dfrac{7}{6}$, $E[X^2] = E[Y^2] = 5/3$ $E[XY] = \dfrac{1}{8} \int_{0}^{2} \int_{0}^{2} (t^2u + tu^2) \ dudt = \dfrac{4}{3}$ tuappr: [0 1] [0 1] 200 200 4*t.*(1-u) EX = 1.1667 EY = 1.1667 EX2 = 1.6667 EY2 = 1.6667 EXY = 1.3333 Exercise $24$ (See Exercise 14 from "Problems On Random Vectors and Joint Distribution"). $f_{XY} (t, u) = 4ue^{-2t}$ for $0 \le t, 0 \le u \le 1$ $f_X (t) = 2e^{-2t}$, $0 \le t$, $f_Y(u) = 2u$, $0 \le u \le 1$ Answer $E[X] = \int_{0}^{\infty} 2te^{-2t} \ dt = \dfrac{1}{2}$, $E[Y] = \dfrac{2}{3}$, $E[X^2] = \dfrac{1}{2}$, $E[Y^2] = \dfrac{1}{2}$, $E[XY] = \dfrac{1}{3}$ tuappr: [0 6] [0 1] 600 200 4*u.*exp(-2*t) EX = 0.5000 EY = 0.6667 EX2 = 0.4998 EY2 = 0.5000 EXY = 0.3333 Exercise $25$ (See Exercise 15 from "Problems On Random Vectors and Joint Distribution"). $f_{XY} (t, u) = \dfrac{3}{88} (2t + 3u^2)$ for $0 \le t \le 2$, $0 \le u \le 1 + t$. $f_X(t) = \dfrac{3}{88} (1 + t) (1 + 4t + t^2) = \dfrac{3}{88} (1 + 5t + 5t^2 + t^3)$, $0 \le t \le 2$ $f_Y(t) = I_{[0, 1]} (u) \dfrac{3}{88} (6u^2 + 4) + I_{(1, 3]} (u) \dfrac{3}{88} (3 + 2u + 8u^2 - 3u^3)$ Answer $E[X] = \dfrac{313}{220}$, $E[Y] = \dfrac{1429}{880}$, $E[X^2] = \dfrac{49}{22}$, $E[Y^2] = \dfrac{172}{55}$, $E[XY] = \dfrac{2153}{880}$ tuappr: [0 2] [0 3] 200 300 (3/88)*(2*t + 3*u.^2).*(u<1+t) EX = 1.4229 EY = 1.6202 EX2 = 2.2277 EY2 = 3.1141 EXY = 2.4415 Exercise $26$ (See Exercise 16 from "Problems On Random Vectors and Joint Distribution"). $f_{XY} (t, u) = 12t^2 u$ on the parallelogram with vertices (-1, 0), (0, 0), (1, 1), (0, 1) $f_X(t) = I_{[-1, 0]} (t) 6t^2 (t + 1)^2 + I_{(0, 1]} (t) 6t^2 (1 - t^2)$, $f_Y(u) 12u^3 - 12u^2 + 4u$, $0 \le u \le 1$ Answer $E[X] = \dfrac{2}{5}$, $E[Y] = \dfrac{11}{15}$, $E[X^2] = \dfrac{2}{5}$, $E[Y^2] = \dfrac{3}{5}$, $E[XY] = \dfrac{2}{5}$ tuappr: [-1 1] [0 1] 400 300 12*t.^2.*u.*(u>=max(0,t)).*(u<=min(1+t,1)) EX = 0.4035 EY = 0.7342 EX2 = 0.4016 EY2 = 0.6009 EXY = 0.4021 Exercise $27$ (See Exercise 17 from "Problems On Random Vectors and Joint Distribution"). $f_{XY} (t, u) = \dfrac{24}{11} tu$ for $0 \le t \le 2$, $0 \le u \le \text{min } \{1, 2-t\}$. $f_X (t) = I_{[0, 1]} (t) \dfrac{12}{11}t + I_{(1, 2]} (t) \dfrac{12}{11} t (2 - t)^2$, $f_Y(u) = \dfrac{12}{11} u(u - 2)^2$, $0 \le u \le 1$ Answer $E[X] = \dfrac{52}{55}$, $E[Y] = \dfrac{32}{55}$, $E[X^2] = \dfrac{57}{55}$, $E[Y^2] = \dfrac{2}{5}$, $E[XY] = \dfrac{28}{55}$ tuappr: [0 2] [0 1] 400 200 (24/11)*t.*u.*(u<=min(1,2-t)) EX = 0.9458 EY = 0.5822 EX2 = 1.0368 EY2 = 0.4004 EXY = 0.5098 Exercise $28$ (See Exercise 18 from "Problems On Random Vectors and Joint Distribution"). $f_{XY} (t, u) = \dfrac{3}{23} (t + 2u)$ for $0 \le t \le 2$, $0 \le u \le \text{max } \{2 - t, t\}$. $f_X (t) = I_{[0, 1]} (t) \dfrac{6}{23} (2 - t) + I_{(1, 2]} (t) \dfrac{6}{23} t^2$, $f_Y(u) = I_{[0, 1]} (u) \dfrac{6}{23} (2u + 1) + I_{(1, 2]} (u) \dfrac{3}{23} (4 + 6u - 4u^2)$ Answer $E[X] = \dfrac{53}{46}$, $E[Y] = \dfrac{22}{23}$, $E[X^2] = \dfrac{397}{230}$, $E[Y^2] = \dfrac{261}{230}$, $E[XY] = \dfrac{251}{230}$ tuappr: [0 2] [0 2] 200 200 (3/23)*(t + 2*u).*(u<=max(2-t,t)) EX = 1.1518 EY = 0.9596 EX2 = 1.7251 EY2 = 1.1417 EXY = 1.0944 Exercise $29$ (See Exercise 19 from "Problems On Random Vectors and Joint Distribution"). $f_{XY} (t, u) = \dfrac{12}{179} (3t^2 + u)$, for $0 \le t \le 2$, $0 \le u \le \text{min } \{2, 3 - t\}$. 
$f_X (t) = I_{[0, 1]} (t) \dfrac{24}{179} (3t^2 + 1) + I_{(1, 2]} (t) \dfrac{6}{179} (9 - 6t + 19t^2 - 6t^3)$ $f_Y (u) = I_{[0, 1]} (t) \dfrac{24}{179} (4 + u) + I_{(1, 2]} (t) \dfrac{12}{179} (27 - 24u + 8u^2 - u^3)$ Answer $E[X] = \dfrac{2313}{1790}$, $E[Y] = \dfrac{778}{895}$, $E[X^2] = \dfrac{1711}{895}$, $E[Y^2] = \dfrac{916}{895}$, $E[XY] = \dfrac{1811}{1790}$ tuappr: [0 2] [0 2] 400 400 (12/179)*(3*t.^2 + u).*(u<=min(2,3-t)) EX = 1.2923 EY = 0.8695 EX2 = 1.9119 EY2 = 1.0239 EXY = 1.0122 Exercise $30$ (See Exercise 20 from "Problems On Random Vectors and Joint Distribution"). $f_{XY} (t, u) = \dfrac{12}{227} (3t + 2tu)$, for $0 \le t \le 2$, $0 \le u \le \text{min } \{1 + t, 2\}$. $f_X (t) = I_{[0, 1]} (t) \dfrac{12}{227} (t^3 + 5t^2 + 4t) + I_{(1, 2]} (t) \dfrac{120}{227} t$ $f_Y (u) = I_{[0, 1]} (t) \dfrac{24}{227} (2u + 3) + I_{(1, 2]} (u) \dfrac{6}{227} (2u + 3) (3 + 2u - u^2)$ $= I_{[0, 1]} (u) \dfrac{24}{227} (2u + 3) + I_{(1, 2]} (u) \dfrac{6}{227} (9 + 12u + u^2 - 2u^3)$ Answer $E[X] = \dfrac{1567}{1135}$, $E[Y] = \dfrac{2491}{2270}$, $E[X^2] = \dfrac{476}{227}$, $E[Y^2] = \dfrac{1716}{1135}$, $E[XY] = \dfrac{5261}{3405}$ tuappr: [0 2] [0 2] 400 400 (12/227)*(3*t + 2*t.*u).*(u<=min(1+t,2)) EX = 1.3805 EY = 1.0974 EX2 = 2.0967 EY2 = 1.5120 EXY = 1.5450 Exercise $31$ (See Exercise 21 from "Problems On Random Vectors and Joint Distribution"). $f_{XY} (t, u) = \dfrac{2}{13} (t + 2u)$, for $0 \le t \le 2$, $0 \le u \le \text{min } \{2t, 3-t\}$. $f_X (t) = I_{[0, 1]} (t) \dfrac{12}{13} t^2 + I_{(1, 2]} (t) \dfrac{6}{13} (3 - t)$ $f_Y(u) = I_{[0, 1]} (u) (\dfrac{4}{13} + \dfrac{8}{13} u - \dfrac{9}{52} u^2) + I_{(1, 2]} (u) (\dfrac{9}{13} + \dfrac{6}{13} u - \dfrac{51}{52} u^2)$ Answer $E[X] = \dfrac{16}{13}$, $E[Y] = \dfrac{11}{12}$, $E[X^2] = \dfrac{219}{130}$, $E[Y^2] = \dfrac{83}{78}$, $E[XY] = \dfrac{431}{390}$ tuappr: [0 2] [0 2] 400 400 (2/13)*(t + 2*u).*(u<=min(2*t,3-t)) EX = 1.2309 EY = 0.9169 EX2 = 1.6849 EY2 = 1.0647 EXY = 1.1056 Exercise $32$ (See Exercise 22 from "Problems On Random Vectors and Joint Distribution"). $f_{XY} (t, u) = I_{[0, 1]} (t) \dfrac{3}{8} (t^2 + 2u) + I_{(1, 2]} (t) \dfrac{9}{14} t^2 u^2$, for $0 \le u \le 1$. $f_X(t) = I_{[0, 1]} (t) \dfrac{3}{8} (t^2 + 1) + I_{(1, 2]} (t) \dfrac{3}{14} t^2$, $f_Y(u) = \dfrac{1}{8} + \dfrac{3}{4} u + \dfrac{3}{2} u^2$ (0 \le u \le 1\) Answer $E[X] = \dfrac{243}{224}$, $E[Y] = \dfrac{11}{16}$, $E[X^2] = \dfrac{107}{70}$, $E[Y^2] = \dfrac{127}{240}$, $E[XY] = \dfrac{347}{448}$ tuappr: [0 2] [0 1] 400 200 (3/8)*(t.^2 + 2*u).*(t<=1) + (9/14)*(t.^2.*u.^2).*(t > 1) EX = 1.0848 EY = 0.6875 EX2 = 1.5286 EY2 = 0.5292 EXY = 0.7745 Exercise $33$ The class $\{X, Y, Z\}$ of random variables is iid(independent, identically distributed) with common distribution $X =$ [-5 -1 3 4 7] $PX =$ 0.01 * [15 20 30 25 10] Let $W = 3X - 4Y + 2Z$. Determine $E[W]$. Do this using icalc, then repeat with icalc3 and compare results. Answer Use $x$ and $px$ to prevent renaming. 
x = [-5 -1 3 4 7]; px = 0.01*[15 20 30 25 10]; icalc Enter row matrix of X-values x Enter row matrix of Y-values x Enter X probabilities px Enter Y probabilities px Use array operations on matrices X, Y, PX, PY, t, u, and P G = 3*t - 4*u [R,PR] = csort(G,P); icalc Enter row matrix of X-values R Enter row matrix of Y-values x Enter X probabilities PR Enter Y probabilities px Use array operations on matrices X, Y, PX, PY, t, u, and P H = t + 2*u; EH = total(H.*P) EH = 1.6500 [W,PW] = csort(H,P); % Alternate EW = W*PW' EW = 1.6500 icalc3 % Solution with icalc3 Enter row matrix of X-values x Enter row matrix of Y-values x Enter row matrix of Z-values x Enter X probabilities px Enter Y probabilities px Enter Z probabilities px Use array operations on matrices X, Y, Z, PX, PY, PZ, t, u, v, and P K = 3*t - 4*u + 2*v; EK = total(K.*P) EK = 1.6500 Exercise $34$ (See Exercise 5 from "Problems on Functions of Random Variables") The cultural committee of a student organization has arranged a special deal for tickets to a concert. The agreement is that the organization will purchase ten tickets at $20 each (regardless of the number of individual buyers). Additional tickets are available according to the following schedule: 11-20,$18 each; 21-30 $16 each; 31-50,$15 each; 51-100, \$13 each If the number of purchasers is a random variable $X$, the total cost (in dollars) is a random quantity $Z = g(X)$ described by $g(X) = 200 + 18I_{M1} (X) (X - 10) + (16 - 18) I_{M2} (X) (X - 20) +$ $(15 - 16) I_{M3} (X) (X - 30) + (13 - 15) I_{M4} (X) (X - 50)$ where $M1 = [10, \infty)$, $M2 = [20, \infty)$, $M3 = [30, \infty)$, $M4 = [50, \infty)$ Suppose $X$ ~ Poisson (75). Approximate the Poisson distribution by truncating at 150. Determine $E[Z]$ and $E[Z^2]$. Answer X = 0:150; PX = ipoisson(75, X); G = 200 + 18*(X - 10).*(X>=10) + (16 - 18)*(X - 20).*(X>=20) + ... (15 - 16)*(X - 30).*(X>=30) + (13 - 15)*(X>=50); [Z,PZ] = csort(G,PX); EZ = Z*PZ' EZ = 1.1650e+03 EZ2 = (Z.^2)*PZ' EZ2 = 1/3699e+06 Exercise $35$ The pair $\{X, Y\}$ has the joint distribution (in m-file npr08_07.m): $P(X = t, Y = u)$ t = -3.1 -0.5 1.2 2.4 3.7 4.9 u = 7.5 0.0090 0.0396 0.0594 0.0216 0.0440 0.0203 4.1 0.0495 0 0.1089 0.0528 0.0363 0.0231 -2.0 0.0405 0.1320 0.0891 0.0324 0.0297 0.0189 -3.8 0.0510 0.0484 0.0726 0.0132 0 0.0077 Let $Z = g(X, Y) = 3X^2 + 2XY - Y^2)$. Determine $E[Z]$ and $E[Z^2]$. Answer npr08_07 Data are in X, Y, P jcalc ------------------ G = 3*t.^2 + 2*t.*u - u.^2; EG = total(G.*P) EG = 5.2975 ez2 = total(G.^2.*P) EG2 = 1.0868e+03 [Z,PZ] = csort(G,P); % Alternate EZ = Z*PZ' EZ = 5.2975 EZ2 = (Z.^2)*PZ' EZ2 = 1.0868e+03 Exercise $36$ For the pair $\{X, Y\}$ in Exercise 11.3.35, let $W = g(X, Y) = \begin{cases} X & \text{for } X + Y \le 4 \ 2Y & \text{for } X+Y > 4 \end{cases} = I_M (X, Y) X + I_{M^c} (X, Y)2Y$ Determine $E[W]$ and $E[W^2]$. Answer H = t.*(t+u<=4) + 2*u.*(t+u>4); EH = total(H.*P) EH = 4.7379 EH2 = total(H.^2.*P) EH2 = 61.4351 [W,PW] = csort(H,P); %Alternate EW = W*PW' EW = 4.7379 EW2 = (W.^2)*PW' EW2 = 61.4351 For the distribution in Exercises 37-41 below a. Determine analytically $E[Z]$ and $E[Z^2]$ b. Use a discrete approximation to calculate the same quantities. Exercise $37$ $f_{XY} (t, u) = \dfrac{3}{88} (2t + 3u^2)$ for $0 \le t \le 2$, $0 \le u \le 1+t$ (see Exercise 25). 
$Z = I_{[0, 1]} (X)4X + I_{(1,2]} (X)(X+Y)$ Answer $E[Z] = \dfrac{3}{88} \int_{0}^{1} \int_{0}^{1 + t} 4t (2t + 3u^2)\ dudt + \dfrac{3}{88} \int_{1}^{2} \int_{0}^{1 + t} (t + u) (2t + 3u^2)\ dudt = \dfrac{5649}{1760}$ $E[Z^2] = \dfrac{3}{88} \int_{0}^{1} \int_{0}^{1 + t} (4t)^2 (2t + 3u^2)\ dudt + \dfrac{3}{88} \int_{1}^{2} \int_{0}^{1 + t} (t + u)^2 (2t + 3u^2)\ dudt = \dfrac{4881}{440}$ tuappr: [0 2] [0 3] 200 300 (3/88)*(2*t+3*u.^2).*(u<=1+t) G = 4*t.*(t<=1) + (t + u).*(t>1); EG = total(G.*P) EG = 3.2086 EG2 = total(G.^2.*P) EG2 = 11.0872 Exercise $38$ $f_{XY} (t, u) = \dfrac{24}{11} tu$ for $0 \le t \le 2$, $0 \le u \le \text{min } \{1, 2 - t\}$ (see Exercise 27) $Z = I_M(X, Y) \dfrac{1}{2}X + I_{M^c} (X, Y) Y^2$, $M = \{(t, u) : u > t\}$ Answer $E[Z] = \dfrac{12}{11} \int_{0}^{1} \int_{t}^{1} t^2u\ dudt + \dfrac{24}{11} \int_{0}^{1} \int_{0}^{t} tu^3\ dudt + \dfrac{24}{11} \int_{1}^{2} \int_{0}^{2 - t} tu^3\ dudt = \dfrac{16}{55}$ $E[Z^2] = \dfrac{6}{11} \int_{0}^{1} \int_{t}^{1} t^3u\ dudt + \dfrac{24}{11} \int_{0}^{1} \int_{0}^{t} tu^5\ dudt + \dfrac{24}{11} \int_{1}^{2} \int_{0}^{2 - t} tu^5\ dudt = \dfrac{39}{308}$ tuappr: [0 2] [0 1] 400 200 (24/11)*t.*u.*(u<=min(1,2-t)) G = (1/2)*t.*(u>t) + u.^2.*(u<=t); EZ = 0.2920 EZ2 = 0.1278 Exercise $39$ $f_{XY} (t, u) = \dfrac{3}{23} (t + 2u)$ for $0 \le t \le 2$, $0 \le u \le \text{max } \{2 - t, t\}$ (see Exercise 28) $Z = I_M (X, Y) (X + Y) + I_{M^c} (X, Y)2Y$, $M = \{(t, u): \text{max } (t, u) \le 1\}$ Answer $E[Z] = \dfrac{3}{23} \int_{0}^{1} \int_{0}^{1} (t + u) (t + 2u) \ dudt + \dfrac{3}{23} \int_{0}^{1} \int_{1}^{2 - t} 2u (1 + 2u)\ dudt + \dfrac{3}{23} \int_{1}^{2} \int_{1}^{t} 2u (t + 2u)\ dudt = \dfrac{175}{92}$ $E[Z^2] = \dfrac{3}{23} \int_{0}^{1} \int_{0}^{1} (t + u)^2 (t + 2u) \ dudt + \dfrac{3}{23} \int_{0}^{1} \int_{1}^{2 - t} 4u^2 (1 + 2u)\ dudt + \dfrac{3}{23} \int_{1}^{2} \int_{1}^{t} 4u^2 (t + 2u)\ dudt =$ tuappr: [0 2] [0 2] 400 400 (3/23)*(t+2*u).*(u<=max(2-t,t)) M = max(t,u)<=1; G = (t+u).*M + 2*u.*(1-M); EZ = total(G.*P) EZ = 1.9048 EZ2 = total(G.^2.*P) EZ2 = 4.4963 Exercise $40$ $f_{XY} (t, u) = \dfrac{12}{179} (3t^2 + u)$, for $0 \le t \le 2$, $0 \le u \le \text{min } \{2, 3-t\}$ (see Exercise 19) $Z = I_M (X,Y) (X + Y) + I_{M^c} (X, Y) 2Y^2$, $M = \{(t, u): t \le 1, u \ge 1\}$ Answer $E[Z] = \dfrac{12}{179} \int_{0}^{1} \int_{1}^{2} (t + u) (3t^2 + u)\ dudt + \dfrac{12}{179} \int_{0}^{1} \int_{0}^{1} 2u^2 (3t^2 + u)\ dudt + \dfrac{12}{179} \int_{1}^{2} \int_{0}^{3 - t} 2u^2 (3t^2 + u)\ dudt = \dfrac{1422}{895}$ $E[Z^2] = \dfrac{12}{179} \int_{0}^{1} \int_{1}^{2} (t + u)^2 (3t^2 + u)\ dudt + \dfrac{12}{179} \int_{0}^{1} \int_{0}^{1} 4u^4 (3t^2 + u)\ dudt + \dfrac{12}{179} \int_{1}^{2} \int_{0}^{3 - t} 4u^4 (3t^2 + u)\ dudt = \dfrac{28296}{6265}$ tuappr: [0 2] [0 2] 400 400 (12/179)*(3*t.^2 + u).*(u <= min(2,3-t)) M = (t<=1)&(u>=1); G = (t + u).*M + 2*u.^2.*(1 - M); EZ = total(G.*P) EZ = 1.5898 EZ2 = total(G.^2.*P) EZ2 = 4.5224 Exercise $41$ $f_{XY} (t, u) = \dfrac{12}{227} (2t + 2tu)$, for $0 \le t \le 2$, $0 \le u \le \text{min } \{1 + t, 2\}$ (see Exercise 30). 
$Z = I_M (X, Y) X + I_{M^c} (X, Y) XY$, $M = \{(t, u): u \le \text{min } (1, 2 - t)\}$ Answer $E[Z] = \dfrac{12}{227} \int_{0}^{1} \int_{0}^{1} t (3t + 2tu) \ dudt + \dfrac{12}{227} \int_{1}^{2} \int_{0}^{2 - t} t(3t + 2tu)\ dudt +$ $\dfrac{12}{227} \int_{0}^{1} \int_{1}^{1 + t} tu(3t + 2tu)\ dudt + \dfrac{12}{227} \int_{1}^{2} \int_{2 - t}^{2} tu (3t + 2tu)\ dudt = \dfrac{5774}{3405}$ $E[Z^2] = \dfrac{56673}{15890}$ tuappr: [0 2] [0 2] 400 400 (12/227)*(3*t + 2*t.*u).*(u <= min(1+t,2)) M = u <= min(1,2-t); G = t.*M + t.*u.*(1 - M); EZ = total(G.*P) EZ = 1.6955 EZ2 = total(G.^2.*P) EZ2 = 3.5659 Exercise $42$ The class $\{X, Y, Z\}$ is independent. (See Exercise 16 from "Problems on Functions of Random Variables", m-file npr10_16.m) $X = -2I_A + I_B + 3I_C$. Minterm probabilities are (in the usual order) 0.255 0.025 0.375 0.045 0.108 0.012 0.162 0.018 $Y = I_D + 3I_E + I_F - 3$. The class $\{D, E, F\}$ is independent with $P(D) = 0.32$ $P(E) = 0.56$ $P(F) = 0.40$ $Z$ has distribution Value -1.3 1.2 2.7 3.4 5.8 Probability 0.12 0.24 0.43 0.13 0.08 $W = X^2 + 3XY^2 - 3Z$. Determine $E[W]$ and $E[W^2]$. Answer npr10_16 Data are in cx, pmx, cy, pmy, Z, PZ [X,PX] = canonicf(cx,pmx); [Y,PY] - canonicf(cy,pmy); icalc3 input: X, Y, Z, PX, PY, PZ ------------- Use array operations on matrices X, Y, Z. PX, PY, PZ, t, u, v, and P G = t.^2 + 3*t.*u.^2 - 3*v; [W,PW] = csort(G,P); EW = W*PW' EW = -1.8673 EW2 = (W.^2)*PW' EW2 = 426.8529
In the treatment of the mathematical expection of a real random variable $X$, we note that the mean value locates the center of the probability mass distribution induced by $X$ on the real line. In this unit, we examine how expectation may be used for further characterization of the distribution for $X$. In particular, we deal with the concept of variance and its square root the standard deviation. In subsequent units, we show how it may be used to characterize the distribution for a pair $\{X, Y\}$ considered jointly with the concepts covariance, and linear regression Variance Location of the center of mass for a distribution is important, but provides limited information. Two markedly different random variables may have the same mean value. It would be helpful to have a measure of the spread of the probability mass about the mean. Among the possibilities, the variance and its square root, the standard deviation, have been found particularly useful. Definition: Variance & Standard Deviation The variance of a random variable $X$ is the mean square of its variation about the mean value: $\text{Var } [X] = \sigma_X^2 = E[(X - \mu_X)^2]$ where $\mu_X = E[X]$ The standard deviation for X is the positive square root $\sigma_X$ of the variance. Remarks • If $X(\omega)$ is the observed value of $X$, its variation from the mean is $X(\omega) - \mu_X$. The variance is the probability weighted average of the square of these variances. • The square of the error treats positive and negative variations alike, and it weights large variations more heavily than smaller ones. • As in the case of mean value, the variance is a property of the distribution, rather than of the random variable. • We show below that the standard deviation is a “natural” measure of the variation from the mean. • In the treatment of mathematical expectation, we show that $E[(X - c)^2]$ is a minimum off $c = E[X]$, in which case $E[(X - E[X])^2] = E[X^2] - E^2[X]$ This shows that the mean value is the constant which best approximates the random variable, in the mean square sense. Basic patterns for variance Since variance is the expectation of a function of the random variable X, we utilize properties of expectation in computations. In addition, we find it expedient to identify several patterns for variance which are frequently useful in performing calculations. For one thing, while the variance is defined as $E[(X - \mu_X)^2]$, this is usually not the most convenient form for computation. The result quoted above gives an alternate expression. (V1): Calculating formula. $\text{Var } [X] = E[X^2] - E^2[X]$ (V2): Shift property. $\text{Var } [X + b] = \text{Var } [X]$. Adding a constant $b$ to $X$ shifts the distribution (hence its center of mass) by that amount. The variation of the shifted distribution about the shifted center of mass is the same as the variation of the original, unshifted distribution about the original center of mass. (V3): Change of scale. $\text{Var } [aX] = a^2\text{Var }[X]$. Multiplication of $X$ by constant a changes the scale by a factor $[a]$. The squares of the variations are multiplied by $a^2$. So also is the mean of the squares of the variations. (V4): Linear combinations. a. $\text{Var }[aX \pm bY] = a^2\text{Var }[X] + b^2 \text{Var } [Y] \pm 2ab(E[XY] - E[X]E[Y])$ b. 
More generally, $\text{Var } [\sum_{k = 1}^{n} a_k X_k] = \sum_{k = 1}^{n} a_k^2 \text{Var }[X_k] + 2\sum_{i < j} a_i a_j (E[X_i X_j] - E[X_i] E[X_j])$ The term$c_{ij} = E[X_i X_j] - E[X_i] E[X_j]$ is the covariance of the pair $\{X_i, X_j\}$, whose role we study in the unit on that topic. If the $c_{ij}$ are all zero, we say the class is uncorrelated. Remarks • If the pair $\{X, Y\}$ is independent, it is uncorrelated. The converse is not true, as examples in the next section show. • If the $a_i = \pm 1$ and all pairs are uncorrelated, then $\text{Var }[\sum_{k = 1}^{n} a_i X_i] = \sum_{k = 1}^{n} \text{Var } [X_i]$ The variance add even if the coefficients are negative. We calculate variances for some common distributions. Some details are omitted—usually details of algebraic manipulation or the straightforward evaluation of integrals. In some cases we use well known sums of infinite series or values of definite integrals. A number of pertinent facts are summarized in Appendix B. Some Mathematical Aids. The results below are included in the table in Appendix C. Variances of some discrete distributions Indicator function $X = I_E P(E) = p, q = 1 - p$ $E[X] = p$ $E[X^2] - E^2[X] = E[I_E^2] - p^2 = E[I_E] - p^2 = p - p^2 = p(1 - p) - pq$ Simple random variable $X = \sum_{i = 1}^{n} t_i I_{A_i}$ (primitive form) $P(A_i) = p_i$. $\text{Var }[X] = \sum_{i = 1}^{n} t_i^2 p_i q_i - 2 \sum_{i < j} t_i t_j p_i p_j$, since $E[I_{A_i} I_{A_j}] = 0$ $i \ne j$ Binomial($n, p$). $X = \sum_{i = 1}^{n} I_{E_i}$ with $\{I_{E_i}: 1 \le i \le n\}$ iid $P(E_i) = p$ $\text{Var }[X] = \sum_{i = 1}^{n} \text{Var }[I_{E_i}] = \sum_{i = 1}^{n} pq = npq$ Geometric($p$). $P(X = k) = pq^k$ $\forall k \ge 0$ $E[X] = q/p$ We use a trick: $E[X^2] = E[X(X - 1)] + E[X]$ $E[X^2] = p\sum_{k = 0}^{\infty} k(k - 1)q^k + q/p = pq^2 \sum_{k = 2}^{\infty} k(k - 1)q^{k - 2} + q/p = pq^2 \dfrac{2}{(1 - q)^3} + q/p = 2\dfrac{q^2}{p^2} + q/p$ $\text{Var }[X] = 2\dfrac{q^2}{p^2} + q/p - (q/p)^2 = q/p^2$ Poisson(\mu) $P(X = k) = e^{-\mu} \dfrac{\mu^k}{k!}$ $\forall k \ge 0$ Using $E[X^2] = E[X(X - 1)] + E[X]$, we have $E[X^2] = e^{-\mu} \sum_{k = 2}^{\infty} k(k - 1) \dfrac{\mu^k}{k!} + \mu = e^{-\mu} \mu^2 \sum_{k = 2}^{\infty} \dfrac{\mu^{k - 2}}{(k - 2)!} + \mu = \mu^2 + \mu$ Thus, $\text{Var }[X] = \mu^2 + \mu - \mu^2 = \mu$. Note that both the mean and the variance have common value $\mu$ Some absolutely continuous distributions Uniform on $(a, b)f_X(t) = \dfrac{1}{b - a}$ $a < t < b$ $E[X] = \dfrac{a + b}{2}$ $E[X^2] = \dfrac{1}{b - a} \int_a^b t^2\ dt = \dfrac{b^3 - a^3}{3(b - a)}$ so $\text{Var }[X] = \dfrac{b^3 - a^3}{3(b - a)} - \dfrac{(a + b)^2}{4} = \dfrac{(b - a)^2}{12}$ Symmetric triangular $(a, b)$ Because of the shift property (V2), we may center the distribution at the origin. Then the distribution is symmetric triangular $(-c, c)$, where $c = (b- a)/2$. 
Because of the symmetry
$\text{Var }[X] = E[X^2] = \int_{-c}^{c} t^2f_X(t)\ dt = 2\int_{0}^{c} t^2 f_X (t)\ dt$
Now, in this case,
$f_X (t) = \dfrac{c - t}{c^2}$, $0 \le t \le c$, so that $E[X^2] = \dfrac{2}{c^2} \int_{0}^{c} (ct^2 - t^3)\ dt = \dfrac{c^2}{6} = \dfrac{(b - a)^2}{24}$
Exponential ($\lambda$) $f_X (t) = \lambda e^{-\lambda t}$, $t \ge 0$ $E[X] = 1/\lambda$
$E[X^2] = \int_{0}^{\infty} \lambda t^2 e^{-\lambda t} \ dt = \dfrac{2}{\lambda^2}$ so that $\text{Var }[X] = 1/\lambda^2$
Gamma ($\alpha, \lambda$) $f_{X} (t) = \dfrac{1}{\Gamma(\alpha)} \lambda^{\alpha} t^{\alpha - 1} e^{-\lambda t}$, $t \ge 0$ $E[X] = \dfrac{\alpha}{\lambda}$
$E[X^2] = \dfrac{1}{\Gamma (\alpha)} \int_{0}^{\infty} \lambda^{\alpha} t^{\alpha + 1} e^{-\lambda t}\ dt = \dfrac{\Gamma (\alpha + 2)}{\lambda^2 \Gamma(\alpha)} = \dfrac{\alpha (\alpha + 1)}{\lambda^2}$
Hence $\text{Var } [X] = \alpha/\lambda^2$.
Normal ($\mu, \sigma^2$) $E[X] = \mu$
Consider $Y$ ~ $N(0, 1)$, $E[Y] = 0$, $\text{Var }[Y] = \dfrac{2}{\sqrt{2\pi}} \int_{0}^{\infty} t^2 e^{-t^2/2} \ dt = 1$. $X = \sigma Y + \mu$ implies $\text{Var }[X] = \sigma^2$
Extensions of some previous examples
In the unit on expectations, we calculate the mean for a variety of cases. We revisit some of those examples and calculate the variances.
Example $1$ Expected winnings (Example 8 from "Mathematical Expectation: Simple Random Variables")
A bettor places three bets at $2.00 each. The first pays $10.00 with probability 0.15, the second $8.00 with probability 0.20, and the third $20.00 with probability 0.10.
Solution
The net gain may be expressed
$X = 10 I_A + 8I_B + 20I_C - 6$, with $P(A) = 0.15, P(B) = 0.20, P(C) = 0.10$
We may reasonably suppose the class $\{A, B, C\}$ is independent (this assumption is not necessary in computing the mean). Then
$\text{Var }[X] = 10^2 P(A) [1 - P(A)] + 8^2 P(B)[1 - P(B)] + 20^2 P(C) [1 - P(C)]$
Calculation is straightforward. We may use MATLAB to perform the arithmetic.
c = [10 8 20];
p = 0.01*[15 20 10];
q = 1 - p;
VX = sum(c.^2.*p.*q)
VX = 58.9900
Example $2$ A function of $X$ (Example 9 from "Mathematical Expectation: Simple Random Variables")
Suppose $X$ in a primitive form is
$X = -3I_{C_1} - I_{C_2} + 2I_{C_3} - 3I_{C_4} + 4I_{C_5} - I_{C_6} + I_{C_7} + 2I_{C_8} + 3I_{C_9} + 2I_{C_{10}}$
with probabilities $P(C_i) = 0.08, 0.11, 0.06, 0.13, 0.05, 0.08, 0.12, 0.07, 0.14, 0.16$.
Let $g(t) = t^2 + 2t$. Determine $E[g(X)]$ and $\text{Var}[g(X)]$.
c = [-3 -1 2 -3 4 -1 1 2 3 2]; % Original coefficients
pc = 0.01*[8 11 6 13 5 8 12 7 14 16]; % Probabilities for c_j
G = c.^2 + 2*c % g(c_j)
EG = G*pc' % Direct calculation E[g(X)]
EG = 6.4200
VG = (G.^2)*pc' - EG^2; % Direct calculation Var[g(X)]
VG = 40.8036
[Z,PZ] = csort(G,pc); % Distribution for Z = g(X)
EZ = Z*PZ' % E[Z]
EZ = 6.4200
VZ = (Z.^2)*PZ' - EZ^2 % Var[Z]
VZ = 40.8036
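Remark. The shift and scale properties (V2) and (V3) are easy to exhibit numerically with the distribution of $X$ from Example 2 (an added sketch; the constants $a = 3$ and $b = 5$ are arbitrary illustrative choices).
c  = [-3 -1 2 -3 4 -1 1 2 3 2];
pc = 0.01*[8 11 6 13 5 8 12 7 14 16];
[X,PX] = csort(c,pc);                % distribution for X
VX = (X.^2)*PX' - (X*PX')^2          % Var[X] = 5.17
a = 3; b = 5;
Y = a*X + b;                         % Y = aX + b
VY = (Y.^2)*PX' - (Y*PX')^2          % a^2*Var[X] = 46.53, by (V2) and (V3)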
jdemo1  % Call for data
jcalc   % Set up
Enter JOINT PROBABILITIES (as on the plane) P
Enter row matrix of VALUES of X X
Enter row matrix of VALUES of Y Y
Use array operations on matrices X, Y, PX, PY, t, u, and P
G = t.^2 + 2*t.*u - 3*u;  % Calculation of matrix of [g(t_i, u_j)]
EG = total(G.*P)  % Direct calculation of E[g(X,Y)]
EG = 3.2529
VG = total(G.^2.*P) - EG^2  % Direct calculation of Var[g(X,Y)]
VG = 80.2133
[Z,PZ] = csort(G,P);  % Determination of distribution for Z
EZ = Z*PZ'  % E[Z] from distribution
EZ = 3.2529
VZ = (Z.^2)*PZ' - EZ^2  % Var[Z] from distribution
VZ = 80.2133
Example $4$ A function with compound definition (Example 12 from "Mathematical Expectation: Simple Random Variables")
Suppose $X$ ~ exponential (0.3). Let
$Z = \begin{cases} X^2 & \text{for } X \le 4 \ 16 & \text{for } X > 4 \end{cases} = I_{[0,4]} (X) X^2 + I_{(4, \infty]} (X) 16$
Determine $E[Z]$ and $\text{Var}[Z]$.
Analytic Solution
$E[g(X)] = \int g(t) f_X(t)\ dt = \int_{0}^{\infty} I_{[0, 4]} (t) t^2 0.3 e^{-0.3t}\ dt + 16 E[I_{(4, \infty]} (X)]$
$= \int_{0}^{4} t^2 0.3 e^{-0.3t} \ dt + 16 P(X > 4) \approx 7.4972$ (by Maple)
$Z^2 = I_{[0, 4]} (X) X^4 + I_{(4, \infty]} (X) 256$
$E[Z^2] = \int_{0}^{\infty} I_{[0,4]} (t) t^4 0.3 e^{-0.3t}\ dt + 256 E[I_{(4, \infty]} (X)] = \int_{0}^{4} t^4 0.3 e^{-0.3t}\ dt + 256 e^{-1.2} \approx 100.0562$
$\text{Var } [Z] = E[Z^2] - E^2[Z] \approx 43.8486$ (by Maple)
APPROXIMATION
To obtain a simple approximation, we must approximate by a bounded random variable. Since $P(X > 50) = e^{-15} \approx 3 \cdot 10^{-7}$ we may safely truncate $X$ at 50.
tuappr
Enter matrix [a b] of x-range endpoints [0 50]
Enter number of x approximation points 1000
Enter density as a function of t 0.3*exp(-0.3*t)
Use row matrices X and PX as in the simple case
M = X <= 4;
G = M.*X.^2 + 16*(1 - M);  % g(X)
EG = G*PX'  % E[g(X)]
EG = 7.4972
VG = (G.^2)*PX' - EG^2  % Var[g(X)]
VG = 43.8472  % Theoretical = 43.8486
[Z,PZ] = csort(G,PX);  % Distribution for Z = g(X)
EZ = Z*PZ'  % E[Z] from distribution
EZ = 7.4972
VZ = (Z.^2)*PZ' - EZ^2  % Var[Z]
VZ = 43.8472
Example $5$ Stocking for random demand (Example 13 from "Mathematical Expectation: Simple Random Variables")
The manager of a department store is planning for the holiday season. A certain item costs $c$ dollars per unit and sells for $p$ dollars per unit. If the demand exceeds the amount $m$ ordered, additional units can be special ordered for $s$ dollars per unit ($s > c$). If demand is less than the amount ordered, the remaining stock can be returned (or otherwise disposed of) at $r$ dollars per unit ($r < c$). Demand $D$ for the season is assumed to be a random variable with Poisson ($\mu$) distribution. Suppose $\mu = 50$, $c = 30$, $p = 50$, $s = 40$, $r = 20$. What amount $m$ should the manager order to maximize the expected profit?
Problem Formulation
Suppose $D$ is the demand and $X$ is the profit. Then
For $D \le m$, $X = D(p - c) - (m - D)(c - r) = D(p - r) + m(r - c)$
For $D > m$, $X = m(p - c) + (D - m)(p - s) = D(p - s) + m(s - c)$
It is convenient to write the expression for $X$ in terms of $I_M$, where $M = (-\infty, m]$. Thus
$X = I_M (D) [D(p - r) + m(r - c)] + [1 - I_M(D)][D(p - s) + m(s - c)]$
$= D(p - s) + m(s - c) + I_M (D) [D(p - r) + m(r - c) - D(p - s) - m(s - c)]$
$= D(p - s) + m(s - c) + I_M(D) (s - r)[D - m]$
Then $E[X] = (p - s) E[D] + m(s - c) + (s - r) E[I_M(D) D] - (s - r) mE[I_M(D)]$
We use the discrete approximation.
APPROXIMATION >> mu = 50; >> n = 100; >> t = 0:n; >> pD = ipoisson(mu,t); % Approximate distribution for D >> c = 30; >> p = 50; >> s = 40; >> r = 20; >> m = 45:55; >> for i = 1:length(m) % Step by step calculation for various m M = t<=m(i); G(i,:) = (p-s)*t + m(i)*(s-c) + (s-r)*M.*(t - m(i)); end >> EG = G*pD'; >> VG = (G.^2)*pD' - EG.^2; >> SG = sqrt(VG); >> disp([EG';VG';SG']') 1.0e+04 * 0.0931 1.1561 0.0108 0.0936 1.3117 0.0115 0.0939 1.4869 0.0122 0.0942 1.6799 0.0130 0.0943 1.8880 0.0137 0.0944 2.1075 0.0145 0.0943 2.3343 0.0153 0.0941 2.5637 0.0160 0.0938 2.7908 0.0167 0.0934 3.0112 0.0174 0.0929 3.2206 0.0179 Example $6$ A jointly distributed pair (Example 14 from "Mathematical Expectation: Simple Random Variables") Suppose the pair $\{X, Y\}$ has joint density $f_{XY} (t, u) = 3u$ on the triangular region bounded by $u = 0$, $u = 1 + t$, $u = 1 - t$. Let $Z = g(X, Y) = X^2 + 2XY$. Determine $E[Z]$ and $\text{Var }[Z]$. Analytic Solution $E[Z] = \int \int (t^2 + 2tu) f_{XY} (t, u) \ dudt = 3\int_{-1}^{0} \int_{0}^{1 + t} u(t^2 + 2tu)\ dudt + 3 \int_{0}^{1} \int_{0}^{1 - t} u(t^2 + 2tu)\ dudt = 1/10$ $E[Z^2] = 3\int_{-1}^{0} \int_{0}^{1 + t} u(t^2 + 2tu)^2 \ dudt + 3\int_{0}^{1} \int_{0}^{1 - t} u(t^2 + 2tu)^2 \ dudt = 3/35$ $\text{Var } [Z] = E[Z^2] -E^2[Z] = 53/700 \approx 0.0757$ APPROXIMATION tuappr Enter matrix [a b] of x-range endpoints [-1 1] Enter matrix [c d] of Y-range endpoints [0 1] Enter number of X approximation points 400 Enter number of Y approximation points 200 Enter expression for joint density 3*u.*(u<=min(1+t,1-t)) Use array operations on X, Y, PX, PY, t, u, and P G = t.^2 + 2*t.*u; % g(X,Y) = X^2 + 2XY EG = total(G.*P) % E[g(X,Y)] EG = 0.1006 % Theoretical value = 1/10 VG = total(G.^2.*P) - EG^2 VG = 0.0765 % Theoretical value 53/700 = 0.757 [Z,PZ] = csort(G,P); % Distribution for Z EZ = Z*PZ' % E[Z] from distribution EZ = 0.1006 VZ = (Z.^2)*PZ' - EZ^2 VZ = 0.0765 Example $7$ A function with compound definition (Example 15 from "Mathematical Expectation: Simple Random Variables") The pair $\{X, Y\}$ has joint density $f_{XY} (t, u) = 1/2$ on the square region bounded by $u = 1 + t$, $u = 1 - t$, $u = 3 - t$, and $u = t - 1$. $W = \begin{cases} X & \text{for max }\{X, Y\} \le 1 \ 2Y & \text{for max } \{X, Y\} > 1 \end{cases} = I_Q(X, Y) X + I_{Q^c} (X, Y) 2Y$ where $Q = \{(t, u): \text{max } \{t, u\} \le 1 \} = \{(t, u): t \le 1, u \le 1\}$. Determine $E[W]$ and $\text{Var } [W]$. Solution The intersection of the region $Q$ and the square is the set for which $0 \le t \le 1$ and $1 - t \le u \le 1$. Reference to Figure 11.3.2 shows three regions of integration. $E[W] = \dfrac{1}{2} \int_{0}^{1} \int_{1 - t}^{1} t \ dudt + \dfrac{1}{2} \int_{0}^{1} \int_{1}^{1 + t} 2u \ dudt + \dfrac{1}{2} \int_{1}^{2} \int_{t - 1}^{3 - t} 2u\ dudt = 11/6 \approx 1.8333$ $E[W^2] = \dfrac{1}{2} \int_{0}^{1} \int_{1 - t}^{1} t^2\ dudt + \dfrac{1}{2} \int_{0}^{1} \int_{1}^{1 + t} 4u^2 \ dudt + \dfrac{1}{2} \int_{1}^{2} \int_{t - 1}^{3 - t} 4u^2 \ dudt = 103/24$ $\text{Var } [W] = 103/24 - (11/6)^2 = 67/72 \approx 0.9306$ tuappr Enter matrix [a b] of x-range endpoints [0 2] Enter matrix [c d] of Y-range endpoints [0 2] Enter number of X approximation points 200 Enter number of Y approximation points 200 Enter expression for joint density ((u<=min(t+1,3-t))& ... 
(u>=max(1-t,t-1)))/2
Use array operations on X, Y, PX, PY, t, u, and P
M = max(t,u)<=1;
G = t.*M + 2*u.*(1 - M);  % Z = g(X,Y)
EG = total(G.*P)  % E[g(X,Y)]
EG = 1.8340  % Theoretical 11/6 = 1.8333
VG = total(G.^2.*P) - EG^2
VG = 0.9368  % Theoretical 67/72 = 0.9306
[Z,PZ] = csort(G,P);  % Distribution for Z
EZ = Z*PZ'  % E[Z] from distribution
EZ = 1.8340
VZ = (Z.^2)*PZ' - EZ^2
VZ = 0.9368
Example $8$ A function with compound definition
$f_{XY} (t, u) = 3$ on $0 \le u \le t^2 \le 1$
$Z = I_Q (X, Y)X + I_{Q^c} (X, Y)$ for $Q = \{(t, u): u + t \le 1\}$
The value $t_0$ where the line $u = 1 - t$ and the curve $u = t^2$ meet satisfies $t_0^2 = 1 - t_0$.
$E[Z] = 3 \int_{0}^{t_0} t \int_{0}^{t^2} \ dudt + 3 \int_{t_0}^{1} t \int_{0}^{1 - t} \ dudt + 3 \int_{t_0}^{1} \int_{1 - t}^{t^2} \ dudt = \dfrac{3}{4} (5t_0 - 2)$
For $E[Z^2]$ replace $t$ by $t^2$ in the integrands to get $E[Z^2] = (25t_0 - 1)/20$. Using $t_0 = (\sqrt{5} - 1)/2 \approx 0.6180$, we get $\text{Var }[Z] = (2125t_0 - 1309)/80 \approx 0.0540$.
APPROXIMATION
% Theoretical values
t0 = (sqrt(5) - 1)/2
t0 = 0.6180
EZ = (3/4)*(5*t0 - 2)
EZ = 0.8176
EZ2 = (25*t0 - 1)/20
EZ2 = 0.7225
VZ = (2125*t0 - 1309)/80
VZ = 0.0540
tuappr
Enter matrix [a b] of x-range endpoints [0 1]
Enter matrix [c d] of Y-range endpoints [0 1]
Enter number of X approximation points 200
Enter number of Y approximation points 200
Enter expression for joint density 3*(u <= t.^2)
Use array operations on X, Y, PX, PY, t, u, and P
G = (t+u <= 1).*t + (t+u > 1);
EG = total(G.*P)
EG = 0.8169  % Theoretical = 0.8176
VG = total(G.^2.*P) - EG^2
VG = 0.0540  % Theoretical = 0.0540
[Z,PZ] = csort(G,P);
EZ = Z*PZ'
EZ = 0.8169
VZ = (Z.^2)*PZ' - EZ^2
VZ = 0.0540
Standard deviation and the Chebyshev inequality
In Example 5 from "Functions of a Random Variable," we show that if $X$ ~ $N(\mu, \sigma^2)$, then $Z = \dfrac{X - \mu}{\sigma}$ ~ $N(0, 1)$. Also, $E[X] = \mu$ and $\text{Var } [X] = \sigma^2$. Thus
$P(\dfrac{|X - \mu|}{\sigma} \le t) = P(|X - \mu| \le t \sigma) = 2 \phi (t) - 1$
For the normal distribution, the standard deviation $\sigma$ seems to be a natural measure of the variation away from the mean. For a general distribution with mean $\mu$ and variance $\sigma^2$, we have the Chebyshev inequality
$P(\dfrac{|X - \mu|}{\sigma} \ge a) \le \dfrac{1}{a^2}$ or $P(|X - \mu| \ge a \sigma) \le \dfrac{1}{a^2}$
In this general case, the standard deviation appears as a measure of the variation from the mean value. This inequality is useful in many theoretical applications as well as some practical ones. However, since it must hold for any distribution which has a variance, the bound is not particularly tight. It may be instructive to compare the bound on the probability given by the Chebyshev inequality with the actual probability for the normal distribution.
t = 1:0.5:3;
p = 2*(1 - gaussian(0,1,t));
c = ones(1,length(t))./(t.^2);
r = c./p;
h = ['   t      Chebyshev   Prob      Ratio'];
m = [t;c;p;r]';
disp(h)
   t      Chebyshev   Prob      Ratio
disp(m)
    1.0000    1.0000    0.3173    3.1515
    1.5000    0.4444    0.1336    3.3263
    2.0000    0.2500    0.0455    5.4945
    2.5000    0.1600    0.0124   12.8831
    3.0000    0.1111    0.0027   41.1554
— □
DERIVATION OF THE CHEBYSHEV INEQUALITY
Let $A = \{|X - \mu| \ge a \sigma\} = \{(X - \mu)^2 \ge a^2 \sigma^2\}$. Then $a^2 \sigma^2 I_A \le (X - \mu)^2$. Upon taking expectations of both sides and using monotonicity, we have
$a^2 \sigma^2 P(A) \le E[(X - \mu)^2] = \sigma^2$
from which the Chebyshev inequality follows immediately.
— □
We consider three concepts which are useful in many situations.
Definition
A random variable $X$ is centered iff $E[X] = 0$. $X' = X - \mu$ is always centered.
Definition
A random variable $X$ is standardized iff $E[X] = 0$ and $\text{Var} [X] = 1$. $X^* = \dfrac{X - \mu}{\sigma} = \dfrac{X'}{\sigma}$ is standardized.
Definition
A pair $\{X, Y\}$ of random variables is uncorrelated iff $E[XY] - E[X]E[Y] = 0$
It is always possible to derive an uncorrelated pair as a function of a pair $\{X, Y\}$, both of which have finite variances. Consider
$U = (X^* + Y^*)$ $V = (X^* - Y^*)$, where $X^* = \dfrac{X - \mu_X}{\sigma_X}$, $Y^* = \dfrac{Y - \mu_Y}{\sigma_Y}$
Now $E[U] = E[V] = 0$ and
$E[UV] = E[(X^* + Y^*) (X^* - Y^*)] = E[(X^*)^2] - E[(Y^*)^2] = 1 - 1 = 0$
so the pair is uncorrelated.
Example $9$ Determining an uncorrelated pair
We use the distribution for Example 10 from "Mathematical Expectation: Simple Random Variables" (see Example 3 above), for which $E[XY] - E[X]E[Y] \ne 0$
jdemo1
jcalc
Enter JOINT PROBABILITIES (as on the plane) P
Enter row matrix of VALUES of X X
Enter row matrix of VALUES of Y Y
Use array operations on matrices X, Y, PX, PY, t, u, and P
EX = total(t.*P)
EX = 0.6420
EY = total(u.*P)
EY = 0.0783
EXY = total(t.*u.*P)
EXY = -0.1130
c = EXY - EX*EY
c = -0.1633  % {X, Y} not uncorrelated
VX = total(t.^2.*P) - EX^2
VX = 3.3016
VY = total(u.^2.*P) - EY^2
VY = 3.6566
SX = sqrt(VX)
SX = 1.8170
SY = sqrt(VY)
SY = 1.9122
x = (t - EX)/SX;  % Standardized random variables
y = (u - EY)/SY;
uu = x + y;  % Uncorrelated random variables
vv = x - y;
EUV = total(uu.*vv.*P)  % Check for uncorrelated condition
EUV = 9.9755e-06  % Differs from zero because of roundoff
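The standardization used above is easily checked numerically. The following is a minimal sketch for a simple random variable, using only standard MATLAB operations; the values and probabilities are arbitrary illustrations, not taken from the examples above.
X = [1 3 5 7];  PX = [0.2 0.3 0.4 0.1];   % arbitrary simple distribution
EX = dot(X,PX);
VX = dot(X.^2,PX) - EX^2;
Xs = (X - EX)/sqrt(VX);                   % standardized values (probabilities unchanged)
EXs = dot(Xs,PX)                          % mean of X* is 0, up to roundoff
VXs = dot(Xs.^2,PX) - EXs^2               % variance of X* is 1, up to roundoff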
The mean value $\mu_X = E[X]$ and the variance $\sigma_X^2 = E[(X - \mu_X)^2]$ give important information about the distribution for real random variable $X$. Can the expectation of an appropriate function of $(X, Y)$ give useful information about the joint distribution? A clue to one possibility is given in the expression $\text{Var}[X \pm Y] = \text{Var} [X] + \text{Var} [Y] \pm 2(E[XY] - E[X]E[Y])$ The expression $E[XY] - E[X]E[Y]$ vanishes if the pair is independent (and in some other cases). We note also that for $\mu_X = E[X]$ and $\mu_Y = E[Y]$ $E[(X - \mu_X) (Y - \mu_Y)] = E[XY] - \mu_X \mu_Y$ To see this, expand the expression $(X - \mu_X)(Y - \mu_Y)$ and use linearity to get $E[(X - \mu_X) (Y - \mu_Y)] = E[XY - \mu_Y X - \mu_X Y + \mu_X \mu_Y] = E[XY] - \mu_Y E[X] - \mu_X E[Y] + \mu_X \mu_Y$ which reduces directly to the desired expression. Now for given $\omega$, $X(\omega) - \mu_X$ is the variation of $X$ from its mean and $Y(\omega) - \mu_Y$ is the variation of $Y$ from its mean. For this reason, the following terminology is used. Definition: Covariance The quantity $\text{Cov} [X, Y] = E[(X - \mu_X)(Y - \mu_Y)]$ is called the covariance of $X$ and $Y$. If we let $X' = X - \mu_X$ and $Y' = Y - \mu_Y$ be the ventered random variables, then $\text{Cov} [X, Y] = E[X'Y']$ Note that the variance of $X$ is the covariance of $X$ with itself. If we standardize, with $X^* = (X - \mu_X)/\sigma_X$ and $Y^* = (Y - \mu_Y)/\sigma_Y$, we have Definition: Correlation Coefficient The correlation coefficient $\rho = \rho [X, Y]$ is the quantity $\rho [X,Y] = E[X^* Y^*] = \dfrac{E[(X - \mu_X)(Y - \mu_Y)]}{\sigma_X \sigma_Y}$ Thus $\rho = \text{Cov}[X, Y] / \sigma_X \sigma_Y$. We examine these concepts for information on the joint distribution. By Schwarz' inequality (E15), we have $\rho^2 = E^2 [X^* Y^*] \le E[(X^*)^2] E[(Y^*)^2] = 1$ with equality iff $Y^* = cX^*$ Now equality holds iff $1 = c^2 E^2[(X^*)^2] = c^2$ which implies $c = \pm 1$ and $\rho = \pm 1$ We conclude $-1 \le \rho \le 1$, with $\rho = \pm 1$ iff $Y^* = \pm X^*$ Relationship between $\rho$ and the joint distribution • We consider first the distribution for the standardized pair $(X^*, Y^*)$ • Since $P(X^* \le r, Y^* \le s) = P(\dfrac{X - \mu_X}{\sigma_X} \le r, \dfrac{Y - \mu_Y}{\sigma_Y} \le s)$ $= P(X \le t = \sigma_X r + \mu_X, Y \le u = \sigma_Y s + \mu_Y)$ we obtain the results for the distribution for $(X, Y)$ by the mapping $t = \sigma_X r + \mu_X$ $u = \sigma_Y s + \mu_Y$ Joint distribution for the standardized variables $(X^*, Y^*)$, $(r, s) = (X^*, Y^*)(\omega)$ $\rho = 1$ iff $X^* = Y^*$ iff all probability mass is on the line $s = r$. $\rho = -1$ iff $X^* = -Y^*$ iff all probability mass is on the line $s = -r$. If $-1 < \rho < 1$, then at least some of the mass must fail to be on these lines. Figure 12.2.1. Distance from point $(r,s)$ to the line $s = r$. The $\rho = \pm 1$ lines for the $(X, Y)$ distribution are: $\dfrac{u - \mu_Y}{\sigma_Y} = \pm \dfrac{t - \mu_X}{\sigma_X}$ or $u = \pm \dfrac{\sigma_Y}{\sigma_X}(t - \mu_X) + \mu_Y$ Consider $Z = Y^* - X^*$. Then $E[\dfrac{1}{2} Z^2] = \dfrac{1}{2} E[(Y^* - X^*)^2]$. Reference to Figure 12.2.1 shows this is the average of the square of the distances of the points $(r, s) = (X^*, Y^*) (\omega)$ from the line $s = r$ (i.e. the variance about the line $s = r$). Similarly for $W = Y^* + X^*$. $E[W^2/2]$ is the variance about $s = -r$. 
Now $\dfrac{1}{2} E[(Y^* \pm X^*)^2] = \dfrac{1}{2}\{E[(Y^*)^2] + E[(X^*)^2] \pm 2E[X^* Y^*]\} = 1 \pm \rho$
Thus
$1 - \rho$ is the variance about $s = r$ (the $\rho = 1$ line)
$1 + \rho$ is the variance about $s = -r$ (the $\rho = -1$ line)
Now since $E[(Y^* - X^*)^2] = E[(Y^* + X^*)^2]$ iff $\rho = E[X^* Y^*] = 0$ the condition $\rho = 0$ is the condition for equality of the two variances.
Transformation to the $(X, Y)$ plane
$t = \sigma_X r + \mu_X$ $u = \sigma_Y s + \mu_Y$ $r = \dfrac{t - \mu_X}{\sigma_X}$ $s = \dfrac{u - \mu_Y}{\sigma_Y}$
The $\rho = 1$ line is: $\dfrac{u - \mu_Y}{\sigma_Y} = \dfrac{t - \mu_X}{\sigma_X}$ or $u = \dfrac{\sigma_Y}{\sigma_X} (t - \mu_X) + \mu_Y$
The $\rho = -1$ line is: $\dfrac{u - \mu_Y}{\sigma_Y} = -\dfrac{t - \mu_X}{\sigma_X}$ or $u = -\dfrac{\sigma_Y}{\sigma_X} (t - \mu_X) + \mu_Y$
$1 - \rho$ is proportional to the variance about the $\rho = 1$ line and $1 + \rho$ is proportional to the variance about the $\rho = -1$ line. $\rho = 0$ iff the variances about both are the same.
Example $1$ Uncorrelated but not independent
Suppose the joint density for $\{X, Y\}$ is constant on the unit circle about the origin. By the rectangle test, the pair cannot be independent. By symmetry, the $\rho = 1$ line is $u = t$ and the $\rho = -1$ line is $u = -t$. By symmetry, also, the variance about each of these lines is the same. Thus $\rho = 0$, which is true iff $\text{Cov}[X, Y] = 0$. This fact can be verified by calculation, if desired.
Example $2$ Uniform marginal distributions
Figure 12.2.2. Uniform marginals but different correlation coefficients.
Consider the three distributions in Figure 12.2.2. In case (a), the distribution is uniform over the square centered at the origin with vertices at (1,1), (-1,1), (-1,-1), (1,-1). In case (b), the distribution is uniform over two squares, in the first and third quadrants with vertices (0,0), (1,0), (1,1), (0,1) and (0,0), (-1,0), (-1,-1), (0,-1). In case (c) the two squares are in the second and fourth quadrants. The marginals are uniform on (-1,1) in each case, so that in each case $E[X] = E[Y] = 0$ and $\text{Var} [X] = \text{Var} [Y] = 1/3$. This means the $\rho = 1$ line is $u = t$ and the $\rho = -1$ line is $u = -t$.
a. By symmetry, $E[XY] = 0$ (in fact the pair is independent) and $\rho = 0$.
b. For every pair of possible values, the two signs must be the same, so $E[XY] > 0$ which implies $\rho > 0$. The actual value may be calculated to give $\rho = 3/4$. Since $1 - \rho < 1 + \rho$, the variance about the $\rho = 1$ line is less than that about the $\rho = -1$ line. This is evident from the figure.
c. $E[XY] < 0$ and $\rho < 0$. Since $1 + \rho < 1 - \rho$, the variance about the $\rho = -1$ line is less than that about the $\rho = 1$ line. Again, examination of the figure confirms this.
Example $3$ A pair of simple random variables
With the aid of m-functions and MATLAB we can easily calculate the covariance and the correlation coefficient. We use the joint distribution for Example 9 in "Variance." In that example calculations show $E[XY] - E[X]E[Y] = -0.1633 = \text{Cov} [X,Y]$, $\sigma_X = 1.8170$ and $\sigma_Y = 1.9122$ so that $\rho = -0.04699$.
Example $4$ An absolutely continuous pair
The pair $\{X, Y\}$ has joint density function $f_{XY} (t, u) = \dfrac{6}{5} (t + 2u)$ on the triangular region bounded by $t = 0$, $u = t$, and $u = 1$.
By the usual integration techniques, we have $f_X(t) = \dfrac{6}{5} (1 + t - 2t^2)$, $0 \le t \le 1$ and $f_Y (u) = 3u^2$, $0 \le u \le 1$ From this we obtain $E[X] = 2/5$, $\text{Var} [X] = 3/50$, $E[Y] = 3/4$, and $\text{Var} [Y] = 3/80$. To complete the picture we need $E[XY] = \dfrac{6}{5} \int_0^1 \int_t^1 (t^2 u + 2tu^2)\ dudt = 8/25$ Then $\text{Cov} [X,Y] = E[XY] - E[X]E[Y] = 2/100$ and $\rho = \dfrac{\text{Cov}[X,Y]}{\sigma_X \sigma_Y} = \dfrac{4}{30} \sqrt{10} \approx 0.4216$ APPROXIMATION tuappr Enter matrix [a b] of X-range endpoints [0 1] Enter matrix [c d] of Y-range endpoints [0 1] Enter number of X approximation points 200 Enter number of Y approximation points 200 Enter expression for joint density (6/5)*(t + 2*u).*(u>=t) Use array operations on X, Y, PX, PY, t, u, and P EX = total(t.*P) EX = 0.4012 % Theoretical = 0.4 EY = total(u.*P) EY = 0.7496 % Theoretical = 0.75 VX = total(t.^2.*P) - EX^2 VX = 0.0603 % Theoretical = 0.06 VY = total(u.^2.*P) - EY^2 VY = 0.0376 % Theoretical = 0.0375 CV = total(t.*u.*P) - EX*EY CV = 0.0201 % Theoretical = 0.02 rho = CV/sqrt(VX*VY) rho = 0.4212 % Theoretical = 0.4216 Coefficient of linear correlation The parameter $\rho$ is usually called the correlation coefficient. A more descriptive name would be coefficient of linear correlation. The following example shows that all probability mass may be on a curve, so that $Y = g(X)$ (i.e., the value of Y is completely determined by the value of $X$), yet $\rho = 0$. Example $5$ $Y = g(X)$ but $\rho = 0$ Suppose $X$ ~ uniform (-1, 1), so that $f_X (t) = 1/2$, $-1 < t < 1$ and $E[X] = 0$. Let $Y = g(X) = \cos X$. Then $\text{Cov} [X, Y] = E[XY] = \dfrac{1}{2} \int_{-1}^{1} t \cos t\ dt = 0$ Thus $\rho = 0$. Note that $g$ could be any even function defined on (-1,1). In this case the integrand $tg(t)$ is odd, so that the value of the integral is zero. Variance and covariance for linear combinations We generalize the property (V4) on linear combinations. Consider the linear combinations $X = \sum_{i = 1}^{n} a_i X_i$ and $Y = \sum_{j = 1}^{m} b_j Y_j$ We wish to determine $\text{Cov} [X, Y]$ and $\text{Var}[X]$. It is convenient to work with the centered random variables $X' = X - \mu_X$ and $Y' = Y - \mu_Y$. Since by linearity of expectation, $\mu_X = \sum_{i = 1}^{n} a_i \mu_{X_i}$ and $\mu_Y = \sum_{j = 1}^{m} b_j \mu_{Y_j}$ we have $X' = \sum_{i = 1}^{n} a_i X_i - \sum_{i = 1}^{n} a_i \mu_{X_i} = \sum_{i = 1}^{n} a_i (X_i - \mu_{X_i}) = \sum_{i = 1}^{n} a_i X_i'$ and similarly for $Y'$. By definition $\text{Cov} (X, Y) = E[X'Y'] = E[\sum_{i, j} a_i b_j X_i' Y_j'] = \sum_{i,j} a_i b_j E[X_i' E_j'] = \sum_{i,j} a_i b_j \text{Cov} (X_i, Y_j)$ In particular $\text{Var} (X) = \text{Cov} (X, X) = \sum_{i, j} a_i a_j \text{Cov} (X_i, X_j) = \sum_{i = 1}^{n} a_i^2 \text{Cov} (X_i, X_i) + \sum_{i \ne j} a_ia_j \text{Cov} (X_i, X_j)$ Using the fact that $a_ia_j \text{Cov} (X_i, X_j) = a_j a_i \text{Cov} (X_j, X_i)$, we have $\text{Var}[X] = \sum_{i = 1}^{n} a_i^2 \text{Var} [X_i] + 2\sum_{i <j} a_i a_j \text{Cov} (X_i, X_j)$ Note that $a_i^2$ does not depend upon the sign of $a_i$. If the $X_i$ form an independent class, or are otherwise uncorrelated, the expression for variance reduces to $\text{Var}[X] = \sum_{i = 1}^{n} a_i^2 \text{Var} [X_i]$
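As a small numerical illustration of this last expansion (a sketch with an arbitrarily chosen joint distribution, using standard MATLAB operations rather than the m-functions of the examples above), we may compare a direct calculation of $\text{Var} [2X - 3Y]$ with $4 \text{Var} [X] + 9 \text{Var} [Y] - 12 \text{Cov} [X, Y]$.
P = [0.1 0.2; 0.3 0.4];           % arbitrary joint probabilities; rows index Y, columns index X
X = [1 2];  Y = [0 5];            % arbitrary values
[t,u] = meshgrid(X,Y);            % t(i,j) = X(j), u(i,j) = Y(i)
EX = sum(sum(t.*P));  EY = sum(sum(u.*P));
VX = sum(sum(t.^2.*P)) - EX^2;
VY = sum(sum(u.^2.*P)) - EY^2;
CV = sum(sum(t.*u.*P)) - EX*EY;   % Cov[X,Y]
G  = 2*t - 3*u;                   % values of Z = 2X - 3Y
VZdirect  = sum(sum(G.^2.*P)) - sum(sum(G.*P))^2   % direct calculation
VZformula = 4*VX + 9*VY - 2*2*3*CV                 % both give 49.4100 for this choice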
Linear Regression
Suppose that a pair $\{X, Y\}$ of random variables has a joint distribution. A value $X(\omega)$ is observed. It is desired to estimate the corresponding value $Y(\omega)$. Obviously there is no rule for determining $Y(\omega)$ unless $Y$ is a function of $X$. The best that can be hoped for is some estimate based on an average of the errors, or on the average of some function of the errors.
Suppose $X(\omega)$ is observed, and by some rule an estimate $\widehat{Y} (\omega)$ is returned. The error of the estimate is $Y(\omega) - \widehat{Y} (\omega)$. The most common measure of error is the mean of the square of the error
$E[(Y - \widehat{Y})^2]$
The choice of the mean square has two important properties: it treats positive and negative errors alike, and it weights large errors more heavily than smaller ones. In general, we seek a rule (function) $r$ such that the estimate $\widehat{Y} (\omega)$ is $r(X(\omega))$. That is, we seek a function $r$ such that $E[(Y - r(X))^2]$ is a minimum. The problem of determining such a function is known as the regression problem. In the unit on Regression, we show that this problem is solved by the conditional expectation of $Y$, given $X$. At this point, we seek an important partial solution.
The regression line of $Y$ on $X$
We seek the best straight line function for minimizing the mean squared error. That is, we seek a function $r$ of the form $u = r(t) = at + b$. The problem is to determine the coefficients $a, b$ such that
$E[(Y - aX - b)^2]$ is a minimum
We write the error in a special form, then square and take the expectation.
$\text{Error} = Y - aX - b = (Y - \mu_Y) - a(X - \mu_X) + \mu_Y - a\mu_X - b = (Y - \mu_Y) - a(X - \mu_X) - \beta$
$\text{Error squared} = (Y - \mu_Y)^2 + a^2 (X - \mu_X)^2 + \beta^2 - 2\beta (Y - \mu_Y) + 2 a \beta (X - \mu_X) - 2a(Y - \mu_Y) (X - \mu_X)$
$E[(Y - aX - b)^2] = \sigma_Y^2 + a^2 \sigma_X^2 + \beta^2 - 2a \text{Cov} [X, Y]$
Standard procedures for determining a minimum (with respect to $a$) show that this occurs for
$a = \dfrac{\text{Cov} [X,Y]}{\text{Var}[X]}$ $b = \mu_Y - a \mu_X$
Thus the optimum line, called the regression line of $Y$ on $X$, is
$u = \dfrac{\text{Cov} [X,Y]}{\text{Var}[X]} (t - \mu_X) + \mu_Y = \rho \dfrac{\sigma_Y}{\sigma_X} (t - \mu_X) + \mu_Y = \alpha(t)$
The second form is commonly used to define the regression line. For certain theoretical purposes, this is the preferred form. But for calculation, the first form is usually the more convenient. Only the covariance (which requires both means) and the variance of $X$ are needed. There is no need to determine $\text{Var} [Y]$ or $\rho$.
Example $1$ The simple pair of Example 3 from "Variance"
jdemo1
jcalc
Enter JOINT PROBABILITIES (as on the plane) P
Enter row matrix of VALUES of X X
Enter row matrix of VALUES of Y Y
Use array operations on matrices X, Y, PX, PY, t, u, and P
EX = total(t.*P)
EX = 0.6420
EY = total(u.*P)
EY = 0.0783
VX = total(t.^2.*P) - EX^2
VX = 3.3016
CV = total(t.*u.*P) - EX*EY
CV = -0.1633
a = CV/VX
a = -0.0495
b = EY - a*EX
b = 0.1100  % The regression line is u = -0.0495t + 0.11
Example $2$ The pair in Example 6 from "Variance"
Suppose the pair $\{X, Y\}$ has joint density $f_{XY}(t, u) = 3u$ on the triangular region bounded by $u = 0$, $u = 1 + t$, $u = 1 - t$. Determine the regression line of $Y$ on $X$.
Analytic Solution
By symmetry, $E[X] = E[XY] = 0$, so $\text{Cov} [X, Y] = 0$.
The regression curve is
$u = E[Y] = 3\int_0^1 u^2 \int_{u - 1}^{1 - u} \ dt du = 6 \int_{0}^{1} u^2 (1 - u)\ du = 1/2$
Note that the pair is uncorrelated, but by the rectangle test is not independent. With zero values of $E[X]$ and $E[XY]$, the approximation procedure is not very satisfactory unless a very large number of approximation points are employed.
Example $3$ Distribution of Example 5 from "Random Vectors and MATLAB" and Example 12 from "Function of Random Vectors"
The pair $\{X, Y\}$ has joint density $f_{XY} (t, u) = \dfrac{6}{37} (t + 2u)$ on the region $0 \le t \le 2$, $0 \le u \le \text{max} \{1, t\}$ (see Figure 12.3.1). Determine the regression line of $Y$ on $X$. If the value $X(\omega) = 1.7$ is observed, what is the best mean-square linear estimate of $Y(\omega)$?
Figure 12.3.1. Regression line for Example 12.3.3
Analytic Solution
$E[X] = \dfrac{6}{37} \int_{0}^{1} \int_{0}^{1} (t^2 + 2tu)\ dudt + \dfrac{6}{37} \int_{1}^{2} \int_{0}^{t} (t^2 + 2tu)\ dudt = 50/37$
The other quantities involve integrals over the same regions with appropriate integrands, as follows:
Quantity   Integrand        Value
$E[X^2]$   $t^3 + 2t^2 u$   779/370
$E[Y]$     $tu + 2u^2$      127/148
$E[XY]$    $t^2u + 2tu^2$   232/185
Then
$\text{Var} [X] = \dfrac{779}{370} - (\dfrac{50}{37})^2 = \dfrac{3823}{13690}$ $\text{Cov}[X, Y] = \dfrac{232}{185} - \dfrac{50}{37} \cdot \dfrac{127}{148} = \dfrac{1293}{13690}$
and
$a = \text{Cov}[X, Y]/\text{Var}[X] = \dfrac{1293}{3823} \approx 0.3382$, $b = E[Y] - aE[X] = \dfrac{6133}{15292} \approx 0.4011$
The regression line is $u = at + b$. If $X(\omega) = 1.7$, the best linear estimate (in the mean square sense) is $\widehat{Y} (\omega) = 1.7a + b = 0.9760$ (see Figure 12.3.1 for an approximate plot).
APPROXIMATION
tuappr
Enter matrix [a b] of X-range endpoints [0 2]
Enter matrix [c d] of Y-range endpoints [0 2]
Enter number of X approximation points 400
Enter number of Y approximation points 400
Enter expression for joint density (6/37)*(t+2*u).*(u<=max(t,1))
Use array operations on X, Y, PX, PY, t, u, and P
EX = total(t.*P)
EX = 1.3517  % Theoretical = 1.3514
EY = total(u.*P)
EY = 0.8594  % Theoretical = 0.8581
VX = total(t.^2.*P) - EX^2
VX = 0.2790  % Theoretical = 0.2793
CV = total(t.*u.*P) - EX*EY
CV = 0.0947  % Theoretical = 0.0944
a = CV/VX
a = 0.3394  % Theoretical = 0.3382
b = EY - a*EX
b = 0.4006  % Theoretical = 0.4011
y = 1.7*a + b
y = 0.9776  % Theoretical = 0.9760
An interpretation of $\rho^2$
The analysis above shows the minimum mean squared error is given by
$E[(Y - \widehat{Y})^2] = E[(Y - \rho \dfrac{\sigma_Y}{\sigma_X} (X - \mu_X) - \mu_Y)^2] = \sigma_Y^2 E[(Y^* - \rho X^*)^2]$
$= \sigma_Y^2 E[(Y^*)^2 - 2\rho X^* Y^* + \rho^2(X^*)^2] = \sigma_Y^2 (1 - 2\rho^2 + \rho^2) = \sigma_Y^2 (1 - \rho^2)$
If $\rho = 0$, then $E[(Y - \widehat{Y})^2] = \sigma_Y^2$, the mean squared error in the case of zero linear correlation. Then, $\rho^2$ is interpreted as the fraction of uncertainty removed by the linear rule and $X$. This interpretation should not be pushed too far, but is a common interpretation, often found in the discussion of observations or experimental results.
More general linear regression
Consider a jointly distributed class $\{Y, X_1, X_2, \cdot\cdot\cdot, X_n\}$.
We wish to determine a function $U$ of the form $U = \sum_{i = 0}^{n} a_i X_i$, with $X_0 = 1$, such that $E[(Y - U)^2]$ is a minimum.
If $U$ satisfies this minimum condition, then $E[(Y - U)V] = 0$, or, equivalently
$E[YV] = E[UV]$ for all $V$ of the form $V = \sum_{i = 0}^{n} c_i X_i$
To see this, set $W = Y - U$ and let $d^2 = E[W^2]$. Now, for any $\alpha$
$d^2 \le E[(W + \alpha V)^2] = d^2 + 2\alpha E[WV] + \alpha^2 E[V^2]$
If we select the special $\alpha = -\dfrac{E[WV]}{E[V^2]}$ then
$0 \le -\dfrac{2E[WV]^2}{E[V^2]} + \dfrac{E[WV]^2}{E[V^2]^2} E[V^2]$
This implies $E[WV]^2 \le 0$, which can only be satisfied by $E[WV] = 0$, so that $E[YV] = E[UV]$
On the other hand, if $E[(Y - U)V] = 0$ for all $V$ of the form above, then $E[(Y- U)^2]$ is a minimum. Consider
$E[(Y - V)^2] = E[(Y - U + U - V)^2] = E[(Y - U)^2] + E[(U - V)^2] + 2E[(Y - U) (U - V)]$
Since $U - V$ is of the same form as $V$, the last term is zero. The first term is fixed. The second term is nonnegative, with zero value iff $U - V = 0$ a.s. Hence, $E[(Y - V)^2]$ is a minimum when $V = U$.
If we take $V$ to be 1, $X_1, X_2, \cdot\cdot\cdot, X_n$, successively, we obtain $n + 1$ linear equations in the $n + 1$ unknowns $a_0, a_1, \cdot\cdot\cdot, a_n$, as follows.
$E[Y] = a_0 + a_1 E[X_1] + \cdot\cdot\cdot + a_n E[X_n]$
$E[YX_i] = a_0 E[X_i] + a_1 E[X_1X_i] + \cdot\cdot\cdot + a_n E[X_n X_i]$ for $1 \le i \le n$
For each $i = 1, 2, \cdot\cdot\cdot, n$, we take (2) - $E[X_i] \cdot (1)$ and use the calculating expressions for variance and covariance to get
$\text{Cov} [Y, X_i] = a_1 \text{Cov} [X_1, X_i] + a_2 \text{Cov} [X_2, X_i] + \cdot\cdot\cdot + a_n \text{Cov} [X_n, X_i]$
These $n$ equations plus equation (1) may be solved algebraically for the $a_i$.
In the important special case that the $X_i$ are uncorrelated (i.e. $\text{Cov}[X_i, X_j] = 0$ for $i \ne j$), we have
$a_i = \dfrac{\text{Cov}[Y, X_i]}{\text{Var} [X_i]}$ $1 \le i \le n$
and
$a_0 = E[Y] - a_1 E[X_1] - a_2 E[X_2] - \cdot\cdot\cdot - a_n E[X_n]$
In particular, this condition holds if the class $\{X_i : 1 \le i \le n\}$ is iid as in the case of a simple random sample (see the section on "Simple Random Samples and Statistics"). Examination shows that for $n = 1$, with $X_1 = X$, $a_0 = b$, and $a_1 = a$, the result agrees with that obtained in the treatment of the regression line, above.
Example $4$ Linear regression with two variables.
Suppose $E[Y] = 3$, $E[X_1] = 2$, $E[X_2] = 3$, $\text{Var}[X_1] = 3$, $\text{Var}[X_2] = 8$, $\text{Cov}[Y, X_1] = 5$, $\text{Cov} [Y, X_2] = 7$, and $\text{Cov} [X_1, X_2] = 1$. Then the three equations are
$a_0 + 2a_1 + 3a_2 = 3$
$0 + 3a_1 + 1 a_2 = 5$
$0 + 1a_1 + 8a_2 = 7$
Solution of these simultaneous linear equations with MATLAB gives the results $a_0 = - 1.9565$, $a_1 = 1.4348$, and $a_2 = 0.6957$.
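The MATLAB solution referred to in Example 4 may be obtained with the standard left-division (backslash) operator; the coefficient matrix is read off directly from the three equations above (a minimal sketch, requiring none of the special m-functions used elsewhere).
A = [1 2 3; 0 3 1; 0 1 8];   % coefficients of a0, a1, a2 in the three equations
y = [3; 5; 7];               % right-hand sides
a = A\y                      % a = [-1.9565  1.4348  0.6957]'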
Exercise $1$ (See Exercise 1 from "Problems on Distribution and Density Functions ", and Exercise 1 from "Problems on Mathematical Expectation", m-file npr07_01.m). The class $\{C_j: 1 \le j \le 10\}$ is a partition. Random variable $X$ has values {1, 3, 2, 3, 4, 2, 1, 3, 5, 2} on $C_1$ through $C_{10}$, respectively, with probabilities 0.08, 0.13, 0.06, 0.09, 0.14, 0.11, 0.12, 0.07, 0.11, 0.09. Determine $\text{Var} [X]$. Answer npr07_01 Data are in T and pc EX = T*pc' EX = 2.7000 VX = (T.^2)*pc' - EX^2 VX = 1.5500 [X,PX] = csort(T,pc); % Alternate Ex = X*PX' Ex = 2.7000 Vx = (X.^2)*PX' - EX^2 Vx = 1.5500 Exercise $2$ (See Exercise 2 from "Problems on Distribution and Density Functions ", and Exercise 2 from "Problems on Mathematical Expectation", m-file npr07_02.m). A store has eight items for sale. The prices are $3.50,$5.00, $3.50,$7.50, $5.00,$5.00, $3.50, and$7.50, respectively. A customer comes in. She purchases one of the items with probabilities 0.10, 0.15, 0.15, 0.20, 0.10 0.05, 0.10 0.15. The random variable expressing the amount of her purchase may be written $X = 3.5 I_{C_1} + 5.0 I_{C_2} + 3.5 I_{C_3} + 7.5 I_{C_4} + 5.0 I_{C_5} + 5.0 I_{C_6} + 3.5 I_{C_7} + 7.5 I_{C_8}$ Determine $\text{Var} [X]$. Answer npr07_02 Data are in T, pc EX = T*pc'; VX = (T.^2)*pc' - EX^2 VX = 2.8525 Exercise $3$ (See Exercise 12 from "Problems on Random Variables and Probabilities", Exercise 3 from "Problems on Mathematical Expectation", m-file npr06_12.m). The class $\{A, B, C, D\}$ has minterm probabilities $pm =$ 0.001 * [5 7 6 8 9 14 22 33 21 32 50 75 86 129 201 302] Consider $X = I_A + I_B + I_C + I_D$, which counts the number of these events which occur on a trial. Determine $\text{Var} [X]$. Answer npr06_12 Minterm probabilities in pm, coefficients in c canonic Enter row vector of coefficients c Enter row vector of minterm probabilities pm Use row matrices X and PX for calculations Call for XDBN to view the distribution VX = (X.^2)*PX' - (X*PX')^2 VX = 0.7309 Exercise $4$ (See Exercise 4 from "Problems on Mathematical Expectation"). In a thunderstorm in a national park there are 127 lightning strikes. Experience shows that the probability of each lightning strike starting a fire is about 0.0083. Determine $\text{Var} [X]$. Answer $X$ ~ binomial (127, 0.0083). $\text{Var} [X] = 127 \cdot 0.0083 \cdot (1-0.0083) = 1.0454$. Exercise $5$ (See Exercise 5 from "Problems on Mathematical Expectation"). Two coins are flipped twenty times. Let $X$ be the number of matches (both heads or both tails). Determine $\text{Var} [X]$. Answer $X$ ~ binomial (20, 1/2). $\text{Var}[X] = 20 \cdot (1/2)^2 = 5$. Exercise $6$ (See Exercise 6 from "Problems on Mathematical Expectation"). A residential College plans to raise money by selling “chances” on a board. Fifty chances are sold. A player pays $10 to play; he or she wins$30 with probability $p = 0.2$. The profit to the College is $X = 50 \cdot 10 - 30 N$, where $N$ is the number of winners Determine $\text{Var} [X]$. Answer $N$ ~ binomial (50, 0.2). $\text{Var}[N] = 50 \cdot 0.2 \cdot 0.8 = 8$. $\text{Var} [X] = 30^2\ \text{Var} [N] = 7200$. Exercise $7$ (See Exercise 7 from "Problems on Mathematical Expectation"). The number of noise pulses arriving on a power circuit in an hour is a random quantity $X$ having Poisson (7) distribution. Determine $\text{Var} [X]$. Answer $X$ ~ Poisson (7). $\text{Var} [X] = \mu = 7$. Exercise $8$ (See Exercise 24 from "Problems on Distribution and Density Functions", and Exercise 8 from "Problems on Mathematical Expectation"). 
The total operating time for the units in Exercise 24 from "Problems on Distribution and Density Functions" is a random variable $T$ ~ gamma (20, 0.0002). Determine $\text{Var} [T]$. Answer $T$ ~ gamma (20, 0.0002). $\text{Var}[T] = 20/0.0002^2 = 500,000,000$. Exercise $9$ The class $\{A, B, C, D, E, F\}$ is independent, with respective probabilities 0.43, 0.53, 0.46, 0.37, 0.45, 0.39. Let $X = 6 I_A + 13 I_B - 8I_C$, $Y = -3I_D + 4 I_E + I_F - 7$ a. Use properties of expectation and variance to obtain $E[X]$, $\text{Var} [X]$, $E[Y]$, and $\text{Var}[Y]$. Note that it is not necessary to obtain the distributions for $X$ or $Y$. b. Let $Z = 3Y - 2X$. Determine $E[Z]$, and $\text{Var} [Z]$. Answer cx = [6 13 -8 0]; cy = [-3 4 1 -7]; px = 0.01*[43 53 46 100]; py = 0.01*[37 45 39 100]; EX = dot(cx,px) EX = 5.7900 EY = dot(cy,py) EY = -5.9200 VX = sum(cx.^2.*px.*(1-px)) VX = 66.8191 VY = sum(cy.^2.*py.*(1-py)) VY = 6.2958 EZ = 3*EY - 2*EX EZ = -29.3400 VZ = 9*VY + 4*VX VZ = 323.9386 Exercise $10$ Consider $X = -3.3 I_A - 1.7 I_B + 2.3 I_C + 7.6 I_D - 3.4$. The class $\{A, B, C, D\}$ has minterm probabilities (data are in m-file npr12_10.m) $\text{pmx} =$ [0.0475 0.0725 0.0120 0.0180 0.1125 0.1675 0.0280 0.0420 $\cdot\cdot\cdot$ 0.0480 0.0720 0.0130 0.0170 0.1120 0.1680 0.0270 0.0430] a. Calculate $E[X]$ and $\text{Var} [X]$. b. Let $W = 2X^2 - 3X + 2$. Calculate $E[W]$ and $\text{Var} [W]$ Answer npr12_10 Data are in cx, cy, pmx and pmy canonic Enter row vector of coefficients cx Enter row vector of minterm probabilities pmx Use row matrices X and PX for calculations Call for XDBN to view the distribution EX = dot(X,PX) EX = -1.2200 VX = dot(X.^2,PX) - EX^2 VX = 18.0253 G = 2*X.^2 - 3*X + 2; [W,PW] = csort(G,PX); EW = dot(W,PW) EW = 44.6874 VW = dot(W.^2,PW) - EW^2 VW = 2.8659e+03 Exercise $11$ Consider a second random variable $Y = 10 I_E + 17 I_F + 20 I_G - 10$ in addition to that in Exercise 12.4.10. The class $\{E, F, G\}$ has minterm probabilities (in mfile npr12_10.m) $\text{pmy} =$ [0.06 0.14 0.09 0.21 0.06 0.14 0.09 0.21] The pair $\{X, Y\}$ is independent. a. Calculate $E[Y]$ and $\text{Var} [Y]$. b. Let $Z = X^2 + 2XY - Y$. Calculate $E[Z]$ and $\text{Var} [Z]$. Answer (Continuation of Exercise 12.4.10) [Y,PY] = canonicf(cy,pmy); EY = dot(Y,PY) EY = 19.2000 VY = dot(Y.^2,PY) - EY^2 VY = 178.3600 icalc Enter row matrix of X-values X Enter row matrix of Y-values Y Enter X probabilities PX Enter Y probabilities PY Use array operations on matrices X, Y, PX, PY, t, u, and P H = t.^2 + 2*t.*u - u; [Z,PZ] = csort(H,P); EZ = dot(Z,PZ) EZ = -46.5343 VZ = dot(Z.^2,PZ) - EZ^2 VZ = 3.7165e+04 Exercise $12$ Suppose the pair $\{X, Y\}$ is independent, with $X$ ~ gamma (3, 0.1) and $Y$ ~ Poisson (13). Let $Z = 2X - 5Y$. Determine $E[Z]$ and $\text{Var} [Z]$. Answer $X$ ~ gamma (3, 0.1) implies $E[X] = 30$ and $\text{Var} [X] = 300.$ $Y$ ~ Poisson (13) implies $E[Y] = \text{Var} [Y] = 13$. Then $E[Z] = 2\cdot 30 - 5 \cdot 13 = -5$, $\text{Var}[Z] = 4 \cdot 300 + 25 \cdot 13 = 1525$. Exercise $13$ The pair $\{X, Y\}$ is jointly distributed with the following parameters: $E[X] = 3$, $E[Y] = 4$, $E[XY] = 15$, $E[X^2] = 11$, $\text{Var} [Y] = 5$ Determine $\text{Var} [3X - 2Y]$. 
Answer EX = 3; EY = 4; EXY = 15; EX2 = 11; VY = 5; VX = EX2 - EX^2 VX = 2 CV = EXY - EX*EY CV = 3 VZ = 9*VX + 4*VY - 6*2*CV VZ = 2 Exercise $14$ The class $\{A, B, C, D, E, F\}$ is independent, with respective probabilities 0.47, 0.33, 0.46, 0.27, 0.41, 0.37 Let $X = 8I_A + 11 I_B - 7I_C$, $Y = -3I_D + 5I_E + I_F - 3$, and $Z = 3Y - 2X$ a. Use properties of expectation and variance to obtain $E[X]$, $\text{Var} [X]$, $E[Y]$, and $\text{Var}[Y]$. b. Determine $E[Z]$, and $\text{Var} [Z]$. c. Use appropriate m-programs to obtain $E[X]$, $\text{Var} [X]$, $E[Y]$, $\text{Var} [Y]$, $E[Z]$, and $\text{Var} [Z]$. Compare with results of parts (a) and (b). Answer px = 0.01*[47 33 46 100]; py = 0.01*[27 41 37 100]; cx = [8 11 -7 0]; cy = [-3 5 1 -3]; ex = dot(cx,px) ex = 4.1700 ey = dot(cy,py) ey = -1.3900 vx = sum(cx.^2.*px.*(1 - px)) vx = 54.8671 vy = sum(cy.^2.*py.*(1-py)) vy = 8.0545 [X,PX] = canonicf(cx,minprob(px(1:3))); [Y,PY] = canonicf(cy,minprob(py(1:3))); icalc Enter row matrix of X-values X Enter row matrix of Y-values Y Enter X probabilities PX Enter Y probabilities PY Use array operations on matrices X, Y, PX, PY, t, u, and P EX = dot(X,PX) EX = 4.1700 EY = dot(Y,PY) EY = -1.3900 VX = dot(X.^2,PX) - EX^2 VX = 54.8671 VY = dot(Y.^2,PY) - EY^2 VY = 8.0545 EZ = 3*EY - 2*EX EZ = -12.5100 VZ = 9*VY + 4*VX VZ = 291.9589 Exercise $15$ For the Beta ($r, s$) distribution. a. Determine $E[X^n]$, where $n$ is a positive integer. b. Use the result of part (a) to determine $E[X]$ and $\text{Var} [X]$. Answer $E[X^n] = \dfrac{\Gamma (r + s)}{\Gamma (r) \Gamma (s)} \int_0^1 t^{r + n - 1} dt = \dfrac{\Gamma (r + s)}{\Gamma (r) \Gamma (s)} \cdot \dfrac{\Gamma (r + n) \Gamma (s)}{\Gamma (r + s + n)} =$ $\dfrac{\Gamma (r + n) \Gamma (r + s)}{\Gamma (r + s + n) \Gamma (r)}$ Using $\Gamma (x + 1) = x \Gamma (x)$ we have $E[X] = \dfrac{r}{r + s}$, $E[X^2] = \dfrac{r(r + 1)}{(r + s) (r + s + 1)}$ Some algebraic manipulations show that $\text{Var} [X] = E[X^2] - E^2[X] = \dfrac{rs} {(r + s)^2 (r + s + 1)}$ Exercise $16$ The pair $\{X, Y\}$ has joint distribution. Suppose $E[X] = 3$, $E[X^2] = 11$, $E[Y] = 10$, $E[Y^2] = 101$, $E[XY] = 30$ Determine $\text{Var} [15X - 2Y]$. Answer EX = 3; EX2 = 11; EY = 10; EY2 = 101; EXY = 30; VX = EX2 - EX^2 VX = 2 VY = EY2 - EY^2 VY = 1 CV = EXY - EX*EY CV = 0 VZ = 15^2*VX + 2^2*VY VZ = 454 Exercise $17$ The pair $\{X, Y\}$ has joint distribution. Suppose $E[X] = 2$, $E[X^2] = 5$, $E[Y] = 1$, $E[Y^2] = 2$, $E[XY] = 1$ Determine $\text{Var} [3X + 2Y]$. Answer EX = 2; EX2 = 5; EY = 1; EY2 = 2; EXY = 1; VX = EX2 - EX^2 VX = 1 VY = EY2 - EY^2 VY = 1 CV = EXY - EX*EY CV = -1 VZ = 9*VX + 4*VY + 2*6*CV VZ = 1 Exercise $18$ The pair $\{X, Y\}$ is independent, with $E[X] = 2$, $E[Y] = 1$, $\text{Var} [X] = 6$, $\text{Var} [Y] = 4$ Let $Z = 2X^2 + XY^2 - 3Y + 4$. Determine $E[Z]$. Answer EX = 2; EY = 1; VX = 6; VY = 4; EX2 = VX + EX^2 EX2 = 10 EY2 = VY + EY^2 EY2 = 5 EZ = 2*EX2 + EX*EY2 - 3*EY + 4 EZ = 31 Exercise $19$ (See Exercise 9 from "Problems on Mathematical Expectation"). Random variable X has density function $f_X (t) = \begin{cases} (6/5) t^2 & \text{for } 0 \le t \le 1 \ (6/5)(2 - t) & \text{for } 1 < t \le 2 \end{cases} = I_{[0, 1]} (t) \dfrac{6}{5} t^2 + I_{(1, 2]} (t) \dfrac{6}{5} (2 - t)$ $E[X] = 11/10$. Determine $\text{Var} [X]$. 
Answer $E[X^2] = \int t^2 f_X (t)\ dt = \dfrac{6}{5} \int_0^1 t^4\ dt + \dfrac{6}{5} \int_1^2 (2t^2 - t^3)\ dt = \dfrac{67}{50}$ $\text{Var} [X] = E[X^2] - E^2[X] = \dfrac{13}{100}$ For the distributions in Exercises 20-22 Determine $\text{Var} [X]$, $\text{Cov} [X, Y]$, and the regression line of $Y$ on $X$. Exercise $20$ (See Exercise 7 from "Problems On Random Vectors and Joint Distributions", and Exercise 17 from "Problems on Mathematical Expectation"). The pair $\{X, Y\}$ has the joint distribution (in file npr08_07.m): $P(X = t, Y = u)$ t = -3.1 -0.5 1.2 2.4 3.7 4.9 u = 7.5 0.0090 0.0396 0.0594 0.0216 0.0440 0.0203 4.1 0.0495 0 0.1089 0.0528 0.0363 0.0231 -2.0 0.0405 0.1320 0.0891 0.0324 0.0297 0.0189 -3.8 0.0510 0.0484 0.0726 0.0132 0 0.0077 Answer npr08_07 Data are in X, Y, P jcalc - - - - - - - - - - - EX = dot(X,PX); EY = dot(Y,PY); VX = dot(X.^2,PX) - EX^2 VX = 5.1116 CV = total(t.*u.*P) - EX*EY CV = 2.6963 a = CV/VX a = 0.5275 b = EY - a*EX b = 0.6924 % Regression line: u = at + b Exercise $21$ (See Exercise 8 from "Problems On Random Vectors and Joint Distributions", and Exercise 18 from "Problems on Mathematical Expectation"). The pair $\{X, Y\}$ has the joint distribution (in file npr08_08.m): $P(X = t, Y = u)$ t = 1 3 5 7 9 11 13 15 17 19 u = 12 0.0156 0.0191 0.0081 0.0035 0.0091 0.0070 0.0098 0.0056 0.0091 0.0049 10 0.0064 0.0204 0.0108 0.0040 0.0054 0.0080 0.0112 0.0064 0.0104 0.0056 9 0.0196 0.0256 0.0126 0.0060 0.0156 0.0120 0.0168 0.0096 0.0056 0.0084 5 0.0112 0.0182 0.0108 0.0070 0.0182 0.0140 0.0196 0.0012 0.0182 0.0038 3 0.0060 0.0260 0.0162 0.0050 0.0160 0.0200 0.0280 0.0060 0.0160 0.0040 -1 0.0096 0.0056 0.0072 0.0060 0.0256 0.0120 0.0268 0.0096 0.0256 0.0084 -3 0.0044 0.0134 0.0180 0.0140 0.0234 0.0180 0.0252 0.0244 0.0234 0.0126 -5 0.0072 0.0017 0.0063 0.0045 0.0167 0.0090 0.0026 0.0172 0.0217 0.0223 Answer npr08_08 Data are in X, Y, P jcalc - - - - - - - - - - - - EX = dot(X,PX); EY = dot(Y,PY); VX = dot(X.^2,PX) - EX^2 VX = 31.0700 CV = total(t.*u.*P) - EX*EY CV = -8.0272 a = CV/VX a = -0.2584 b = EY - a*EX b = 5.6110 % Regression line: u = at + b Exercise $22$ (See Exercise 9 from "Problems On Random Vectors and Joint Distributions", and Exercise 19 from "Problems on Mathematical Expectation"). Data were kept on the effect of training time on the time to perform a job on a production line. $X$ is the amount of training, in hours, and $Y$ is the time to perform the task, in minutes. The data are as follows (in file npr08_09.m): $P(X = t, Y = u)$ t = 1 1.5 2 2.5 3 u = 5 0.039 0.011 0.005 0.001 0.001 4 0.065 0.070 0.050 0.015 0.010 3 0.031 0.061 0.137 0.051 0.033 2 0.012 0.049 0.163 0.058 0.039 1 0.003 0.009 0.045 0.025 0.017 Answer npr08_09 Data are in X, Y, P jcalc - - - - - - - - - - - - EX = dot(X,PX); EY = dot(Y,PY); VX = dot(X.^2,PX) - EX^2 VX = 0.3319 CV = total(t.*u.*P) - EX*EY CV = -0.2586 a = CV/VX a = -0.77937/6; b = EY - a*EX b = 4.3051 % Regression line: u = at + b For the joint densities in Exercises 23-30 below 1. Determine analytically $\text{Var} [X]$ $\text{Cov} [X, Y]$, and the regression line of $Y$ on $X$. 2. Check these with a discrete approximation. Exercise $23$ (See Exercise 10 from "Problems On Random Vectors and Joint Distributions", and Exercise 20 from "Problems on Mathematical Expectation"). $f_{XY} (t, u) = 1$ for $0 \le t \le 1$, $0 \le u \le 2(1 - t)$. 
$E[X] = \dfrac{1}{3}$, $E[X^2] = \dfrac{1}{6}$, $E[Y] = \dfrac{2}{3}$ Answer $E[XY] = \int_{0}^{1} \int_{0}^{2(1-t)} tu\ dudt = 1/6$ $\text{Cov} [X, Y] = \dfrac{1}{6} - \dfrac{1}{3} \cdot \dfrac{2}{3} = -1/18$ $\text{Var} [X] = 1/6 - (1/3)^2 = 1/18$ $a = \text{Cov} [X, Y] /\text{Var} [X] = -1$ $b = E[Y] - aE[X] = 1$ tuappr: [0 1] [0 2] 200 400 u<=2*(1-t) EX = dot(X,PX); EY = dot(Y,PY); VX = dot(X.^2,PX) - EX^2 VX = 0.0556 CV = total(t.*u.*P) - EX*EY CV = -0.0556 a = CV/VX a = -1.0000 b = EY - a*EX b = 1.0000 Exercise $24$ (See Exercise 13 from "Problems On Random Vectors and Joint Distributions", and Exercise 23 from "Problems on Mathematical Expectation"). $f_{XY} (t, u) = \dfrac{1}{8} (t + u)$ for $0 \le t \le 2$, $0 \le u \le 2$. $E[X] = E[Y] = \dfrac{7}{6}$, $E[X^2] = \dfrac{5}{3}$ Answer $E[XY] = \dfrac{1}{8} \int_{0}^{2} \int_{0}^{2} tu (t + u)\ dudt = 4/3$, $\text{Cov} [X, Y] = -1/36$, $\text{Var} [X] = 11/36$ $a = \text{Cov} [X, Y]/\text{Var} [X] = -1/11$, $b = E[Y] - a E[X] = 14/11$ tuappr: [0 2] [0 2] 200 200 (1/8)*(t+u) VX = 0.3055 CV = -0.0278 a = -0.0909 b = 1.2727 Exercise $25$ (See Exercise 15 from "Problems On Random Vectors and Joint Distributions", and Exercise 25 from "Problems on Mathematical Expectation"). $f_{XY} (t, u) = \dfrac{3}{88} (2t + 3u^2)$ for $0 \le t \le 2$, $0 \le u \le 1 + t$. $E[X] = \dfrac{313}{220}$, $E[Y] = \dfrac{1429}{880}$, $E[X^2] = \dfrac{49}{22}$ Answer $E[XY] = \dfrac{3}{88} \int_{0}^{2} \int_{0}^{1+t} tu (2t + 3u^2)\ dudt = 2153/880$, $\text{Cov} [X, Y] = 26383/1933600$, $\text{Var} [X] = 9831/48400$ $a = \text{Cov} [X, Y]/\text{Var} [X] = 26383/39324$, $b = E[Y] - a E[X] = 26321/39324$ tuappr: [0 2] [0 3] 200 300 (3/88)*(2*t + 3*u.^2).*(u<=1+t) VX = 0.2036 CV = 0.1364 a = 0.6700 b = 0.6736 Exercise $26$ (See Exercise 16 from "Problems On Random Vectors and Joint Distributions", and Exercise 26 from "Problems on Mathematical Expectation"). $f_{XY} (t, u) = 12t^2 u$ on the parallelogram with vertices (-1, 0), (0, 0), (1, 1), (0, 1) $E[X] = \dfrac{2}{5}$, $E[Y] = \dfrac{11}{15}$, $E[X^2] = \dfrac{2}{5}$ Answer $E[XY] = 12 \int_{-1}^{0} \int_{0}^{t + 1} t^3 u^2\ dudt + 12 \int_{0}^{1} \int_{t}^{1} t^3 u^2 \ dudt = \dfrac{2}{5}$ $\text{Cov} [X, Y] = \dfrac{8}{75}$, $\text{Var} [X] = \dfrac{6}{25}$ $a = \text{Cov} [X, Y]/\text{Var} [X] = 4/9$, $b = E[Y] - a E[X] = 5/9$ tuappr: [-1 1] [0 1] 400 200 12*t.^2.*u.*(u>= max(0,t)).*(u<= min(1+t,1)) VX = 0.2383 CV = 0.1056 a = 0.4432 b = 0.5553 Exercise $27$ (See Exercise 17 from "Problems On Random Vectors and Joint Distributions", and Exercise 27 from "Problems on Mathematical Expectation"). $f_{XY} (t, u) = \dfrac{24}{11} tu$ for $0 \le t \le 2$, $0 \le u \le \text{min } \{1, 2 - t\}$. $E[X] = \dfrac{52}{55}$, $E[Y] = \dfrac{32}{55}$, $E[X^2] = \dfrac{627}{605}$ Answer $E[XY] = \dfrac{24}{11} \int_{0}^{1} \int_{0}^{1} t^2 u^2\ dudt + \dfrac{24}{11} \int_{1}^{2} \int_{0}^{2-t} t^2 u^2 \ dudt = \dfrac{28}{55}$ $\text{Cov} [X, Y] = -\dfrac{124}{3025}$, $\text{Var} [X] = \dfrac{431}{3025}$ $a = \text{Cov} [X, Y]/\text{Var} [X] = -\dfrac{124}{431}$, $b = E[Y] - a E[X] = \dfrac{368}{431}$ tuappr: [0 2] [0 1] 400 200 (24/11)*t.*u.*(u<=min(1,2-t)) VX = 0.1425 CV =-0.0409 a = -0.2867 b = 0.8535 Exercise $28$ (See Exercise 18 from "Problems On Random Vectors and Joint Distributions", and Exercise 28 from "Problems on Mathematical Expectation"). $f_{XY} (t, u) = \dfrac{3}{23} (t + 2u)$, for $0 \le t \le 2$, $0 \le u \le \text{max } \{2 - t, t\}$. 
$E[X] = \dfrac{53}{46}$, $E[Y] = \dfrac{22}{23}$, $E[X^2] = \dfrac{9131}{5290}$
Answer
$E[XY] = \dfrac{3}{23} \int_{0}^{1} \int_{0}^{2-t} tu (t + 2u)\ dudt + \dfrac{3}{23} \int_{1}^{2} \int_{0}^{t} tu (t + 2u) \ dudt = \dfrac{251}{230}$
$\text{Cov} [X, Y] = -\dfrac{57}{5290}$, $\text{Var} [X] = \dfrac{4217}{10580}$
$a = \text{Cov} [X, Y]/\text{Var} [X] = -\dfrac{114}{4217}$, $b = E[Y] - a E[X] = \dfrac{4165}{4217}$
tuappr: [0 2] [0 2] 200 200 (3/23)*(t + 2*u).*(u<=max(2-t,t))
VX = 0.3984 CV = -0.0108 a = -0.0272 b = 0.9909
Exercise $29$
(See Exercise 21 from "Problems On Random Vectors and Joint Distributions", and Exercise 31 from "Problems on Mathematical Expectation"). $f_{XY} (t, u) = \dfrac{2}{13} (t + 2u)$, for $0 \le t \le 2$, $0 \le u \le \text{min } \{2t, 3 - t\}$.
$E[X] = \dfrac{16}{13}$, $E[Y] = \dfrac{11}{12}$, $E[X^2] = \dfrac{2847}{1690}$
Answer
$E[XY] = \dfrac{2}{13} \int_{0}^{1} \int_{0}^{2t} tu (t + 2u)\ dudt + \dfrac{2}{13} \int_{1}^{2} \int_{0}^{3-t} tu (t + 2u) \ dudt = \dfrac{431}{390}$
$\text{Cov} [X, Y] = -\dfrac{3}{130}$, $\text{Var} [X] = \dfrac{287}{1690}$
$a = \text{Cov} [X, Y]/\text{Var} [X] = -\dfrac{39}{287}$, $b = E[Y] - a E[X] = \dfrac{3733}{3444}$
tuappr: [0 2] [0 2] 400 400 (2/13)*(t + 2*u).*(u<=min(2*t,3-t))
VX = 0.1698 CV = -0.0229 a = -0.1350 b = 1.0839
Exercise $30$
(See Exercise 22 from "Problems On Random Vectors and Joint Distributions", and Exercise 32 from "Problems on Mathematical Expectation"). $f_{XY} (t, u) = I_{[0, 1]} (t) \dfrac{3}{8} (t^2 + 2u) + I_{(1, 2]} (t) \dfrac{9}{14} t^2u^2$, for $0 \le u \le 1$.
$E[X] = \dfrac{243}{224}$, $E[Y] = \dfrac{11}{16}$, $E[X^2] = \dfrac{107}{70}$
Answer
$E[XY] = \dfrac{3}{8} \int_{0}^{1} \int_{0}^{1} tu (t^2 + 2u)\ dudt + \dfrac{9}{14} \int_{1}^{2} \int_{0}^{1} t^3u^3 \ dudt = \dfrac{347}{448}$
$\text{Cov} [X, Y] = \dfrac{103}{3584}$, $\text{Var} [X] = \dfrac{88243}{250880}$
$a = \text{Cov} [X, Y]/\text{Var} [X] = \dfrac{7210}{88243}$, $b = E[Y] - a E[X] = \dfrac{105691}{176486}$
tuappr: [0 2] [0 1] 400 200 (3/8)*(t.^2 + 2*u).*(t<=1) + (9/14)*t.^2.*u.^2.*(t>1)
VX = 0.3517 CV = 0.0287 a = 0.0817 b = 0.5989
Exercise $31$
The class $\{X, Y, Z\}$ of random variables is iid (independent, identically distributed) with common distribution
$X =$ [-5 -1 3 4 7] $PX =$ 0.01 * [15 20 30 25 10]
Let $W = 3X - 4Y + 2Z$. Determine $E[W]$ and $\text{Var} [W]$. Do this using icalc, then repeat with icalc3 and compare results.
Answer
x = [-5 -1 3 4 7];
px = 0.01*[15 20 30 25 10];
EX = dot(x,px)  % Use of properties
EX = 1.6500
VX = dot(x.^2,px) - EX^2
VX = 12.8275
EW = (3 - 4 + 2)*EX
EW = 1.6500
VW = (3^2 + 4^2 + 2^2)*VX
VW = 371.9975
icalc  % Iterated use of icalc
Enter row matrix of X-values x
Enter row matrix of Y-values x
Enter X probabilities px
Enter Y probabilities px
Use array operations on matrices X, Y, PX, PY, t, u, and P
G = 3*t - 4*u;
[R,PR] = csort(G,P);
icalc
Enter row matrix of X-values R
Enter row matrix of Y-values x
Enter X probabilities PR
Enter Y probabilities px
Use array operations on matrices X, Y, PX, PY, t, u, and P
H = t + 2*u;
[W,PW] = csort(H,P);
EW = dot(W,PW)
EW = 1.6500
VW = dot(W.^2,PW) - EW^2
VW = 371.9975
icalc3  % Use of icalc3
Enter row matrix of X-values x
Enter row matrix of Y-values x
Enter row matrix of Z-values x
Enter X probabilities px
Enter Y probabilities px
Enter Z probabilities px
Use array operations on matrices X, Y, Z, PX, PY, PZ, t, u, v, and P
S = 3*t - 4*u + 2*v;
[w,pw] = csort(S,P);
Ew = dot(w,pw)
Ew = 1.6500
Vw = dot(w.^2,pw) - Ew^2
Vw = 371.9975
Exercise $32$
$f_{XY} (t, u) = \dfrac{3}{88} (2t + 3u^2)$ for $0 \le t \le 2$, $0 \le u \le 1 + t$ (see Exercise 25 and Exercise 37 from "Problems on Mathematical Expectation").
$Z = I_{[0, 1]} (X) 4X + I_{(1, 2]} (X) (X + Y)$
$E[X] = \dfrac{313}{220}$, $E[Z] = \dfrac{5649}{1760}$, $E[Z^2] = \dfrac{4881}{440}$
Determine $\text{Var} [Z]$ and $\text{Cov} [X, Z]$. Check with discrete approximation.
Answer
$E[XZ] = \dfrac{3}{88} \int_0^1 \int_{0}^{1+t} 4t^2 (2t + 3u^2)\ dudt + \dfrac{3}{88} \int_{1}^{2} \int_{0}^{1 + t} t (t + u) (2t + 3u^2)\ dudt = \dfrac{16931}{3520}$
$\text{Var} [Z] = E[Z^2] - E^2[Z] = \dfrac{2451039}{3097600}$
$\text{Cov} [X,Z] = E[XZ] - E[X] E[Z] = \dfrac{94273}{387200}$
tuappr: [0 2] [0 3] 200 300 (3/88)*(2*t+3*u.^2).*(u<=1+t)
G = 4*t.*(t<=1) + (t+u).*(t>1);
EZ = total(G.*P)
EZ = 3.2110
EX = dot(X,PX)
EX = 1.4220
CV = total(G.*t.*P) - EX*EZ
CV = 0.2445  % Theoretical 0.2435
VZ = total(G.^2.*P) - EZ^2
VZ = 0.7934  % Theoretical 0.7913
Exercise $33$
$f_{XY} (t, u) = \dfrac{24}{11} tu$ for $0 \le t \le 2$, $0 \le u \le \text{min } \{1, 2 - t\}$ (see Exercise 27 and Exercise 38 from "Problems on Mathematical Expectation").
$Z = I_M (X,Y) \dfrac{1}{2} X + I_{M^c} (X, Y) Y^2$, $M = \{(t, u): u > t\}$
$E[X] = \dfrac{52}{55}$, $E[Z] = \dfrac{16}{55}$, $E[Z^2] = \dfrac{39}{308}$
Determine $\text{Var} [Z]$ and $\text{Cov} [X, Z]$. Check with discrete approximation.
Answer
$E[XZ] = \dfrac{24}{11} \int_0^1 \int_{t}^{1} t (t/2) tu \ dudt + \dfrac{24}{11} \int_{0}^{1} \int_{0}^{t} t u^2 tu \ dudt + \dfrac{24}{11} \int_1^2 \int_{0}^{2 - t} t u^2 tu\ dudt = \dfrac{211}{770}$
$\text{Var} [Z] = E[Z^2] - E^2[Z] = \dfrac{3557}{84700}$
$\text{Cov} [Z,X] = E[XZ] - E[X] E[Z] = -\dfrac{43}{42350}$
tuappr: [0 2] [0 1] 400 200 (24/11)*t.*u.*(u<=min(1,2-t))
G = (t/2).*(u>t) + u.^2.*(u<=t);
VZ = total(G.^2.*P) - EZ^2
VZ = 0.0425
CV = total(t.*G.*P) - EZ*dot(X,PX)
CV = -9.2940e-04
Exercise $34$
$f_{XY} (t, u) = \dfrac{3}{23} (t + 2u)$ for $0 \le t \le 2$, $0 \le u \le \text{max } \{2 - t, t\}$ (see Exercise 28 and Exercise 39 from "Problems on Mathematical Expectation").
$Z = I_M (X, Y) (X+Y) + I_{M^c} (X, Y) 2Y$, $M = \{(t, u): \text{max } (t, u) \le 1\}$
$E[X] = \dfrac{53}{46}$, $E[Z] = \dfrac{175}{92}$, $E[Z^2] = \dfrac{2063}{460}$
Determine $\text{Var} [Z]$ and $\text{Cov} [X, Z]$. Check with discrete approximation.
Answer $E[ZX] = \dfrac{3}{23} \int_{0}^{1} \int_{0}^{1} t (t + u) (t + 2u) \ dudt + \dfrac{3}{23} \int_{0}^{1} \int_{1}^{2 - t} 2tu(t + 2u) \ dudt +$ $\dfrac{3}{23} \int_{1}^{2} \int_{1}^{t} 2tu(t + 2u)\ dudt = \dfrac{1009}{460}$ $\text{Var} [Z] = E[Z^2] - E^2[Z] = \dfrac{36671}{42320}$ $\text{Cov} [Z, X] = E[ZX] - E[Z] E[X] = \dfrac{39}{21160}$ tuappr: [0 2] [0 2] 400 400 (3/23)*(t+2*u).*(u<=max(2-t,t)) M = max(t,u)<=1; G = (t+u).*M + 2*u.*(1-M); EZ = total(G.*P); EX = dot(X,PX); CV = total(t.*G.*P) - EX*EZ CV = 0.0017 Exercise $35$ $f_{XY} (t, u) = \dfrac{12}{179} (3t^2 + u)$, for $0 \le t \le 2$, $0 \le u \le \text{min } \{2, 3 - t\}$ (see Exercise 29 and Exercise 40 from "Problems on Mathematical Expectation"). $Z = I_M (X, Y) (X+Y) + I_{M^c} (X, Y) 2Y^2$, $M = \{(t, u): t \le 1, u \ge 1\}$ $E[X] = \dfrac{2313}{1790}$, $E[Z] = \dfrac{1422}{895}$, $E[Z^2] = \dfrac{28296}{6265}$ Determine $\text{Var} [Z]$ and $\text{Cov} [X, Z]$. Check with discrete approximation. Answer $E[ZX] = \dfrac{12}{179} \int_{0}^{1} \int_{1}^{2} t (t + u) (3t^2 + u) \ dudt + \dfrac{12}{179} \int_{0}^{1} \int_{0}^{1} 2tu^2 (3t^2 + u) \ dudt +$ $\dfrac{12}{179} \int_{1}^{2} \int_{0}^{3 - t} 2tu^2(3t^2 + u)\ dudt = \dfrac{24029}{12530}$ $\text{Var} [Z] = E[Z^2] - E^2[Z] = \dfrac{11170332}{5607175}$ $\text{Cov} [Z, X] = E[ZX] - E[Z] E[X] = -\dfrac{1517647}{11214350}$ tuappr: [0 2] [0 2] 400 400 (12/179)*(3*t.^2 + u).*(u <= min(2,3-t)) M = (t<=1)&(u>=1); G = (t + u).*M + 2*u.^2.*(1 - M); EZ = total(G.*P); EX = dot(X,PX); CV = total(t.*G.*P) - EZ*EX CV = -0.1347 Exercise $36$ $f_{XY} (t, u) = \dfrac{12}{227} (3t + 2tu)$, for $0 \le t \le 2$, $0 \le u \le \text{min } \{1 + t, 2\}$ (see Exercise 30 and Exercise 41 from "Problems on Mathematical Expectation"). $Z = I_M (X, Y) X + I_{M^c} (X, Y) XY$, $M = \{(t, u): u \le \text{min } (1, 2 - t)\}$ $E[X] = \dfrac{1567}{1135}$, $E[Z] = \dfrac{5774}{3405}$, $E[Z^2] = \dfrac{56673}{15890}$ Determine $\text{Var} [Z]$ and $\text{Cov} [X, Z]$. Check with discrete approximation. Answer $E[ZX] = \dfrac{12}{227} \int_{0}^{1} \int_{0}^{1} t^2 (3t + 2tu) \ dudt + \dfrac{12}{227} \int_{1}^{2} \int_{0}^{2-t} t^2(3t + 2tu) \ dudt +$ $\dfrac{12}{227} \int_{0}^{1} \int_{1}^{1 + t} t^2 u(3t + 2tu)\ dudt + \dfrac{12}{227} \int_{1}^{2} \int_{2 - t}^{2} t^2 u(3t + 2tu)\ dudt = \dfrac{20338}{7945}$ $\text{Var} [Z] = E[Z^2] - E^2[Z] = \dfrac{112167631}{162316350}$ $\text{Cov} [Z, X] = E[ZX] - E[Z] E[X] = \dfrac{5915884}{27052725}$ tuappr: [0 2] [0 2] 400 400 (12/227)*(3*t + 2*t.*u).*(u <= min(1+t,2)) EX = dot(X,PX); M = u <= min(1,2-t); G = t.*M + t.*u.*(1 - M); EZ = total(G.*P); EZX = total(t.*G.*P) EZX = 2.5597 CV = EZX - EX*EZ CV = 0.2188 VZ = total(G.^2.*P) - EZ^2 VZ = 0.6907 Exercise $37$ (See Exercise 12.4.20, and Exercises 9 and 10 from "Problems on Functions of Random Variables"). For the pair $\{X, Y\}$ in Exercise 12.4.20, let $Z = g(X, Y) = 3X^2 + 2XY - Y^2$ $W = h(X, Y) = \begin{cases} X & \text{for } X + Y \le 4 \ 2Y & \text{for } X + Y > 4 \end{cases} = I_M (X, Y) X + I_{M^c} (X, Y) 2Y$ Determine the joint distribution for the pair $\{Z, W\}$ and determine the regression line of $W$ on $Z$. 
Answer npr08_07 Data are in X, Y, P jointzw Enter joint prob for (X,Y) P Enter values for X X Enter values for Y Y Enter expression for g(t,u) 3*t.^2 + 2*t.*u - u.^2 Enter expression for h(t,u) t.*(t+u<=4) + 2*u.*(t+u>4) Use array operations on Z, W, PZ, PW, v, w, PZW EZ = dot(Z,PZ) EZ = 5.2975 EW = dot(W,PW) EW = 4.7379 VZ = dot(Z.^2,PZ) - EZ^2 VZ = 1.0588e+03 CZW = total(v.*w.*PZW) - EZ*EW CZW = -12.1697 a = CZW/VZ a = -0.0115 b = EW - a*EZ b = 4.7988 % Regression line: w = av + b
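The same regression-line computation can be tried on any small joint distribution. The following is a minimal sketch with an invented joint probability matrix (the values and probabilities below are hypothetical, not the npr08_07 data); it only illustrates the steps $a = \text{Cov}[Z, W]/\text{Var}[Z]$ and $b = E[W] - aE[Z]$.
Z = [1 2 3];  W = [0 1];
PZW = [0.20 0.10;                 % hypothetical P(Z = z, W = w); rows indexed by Z values
       0.15 0.25;
       0.10 0.20];
PZ = sum(PZW,2)'; PW = sum(PZW,1);      % marginal distributions
EZ = dot(Z,PZ);   EW = dot(W,PW);
VZ = dot(Z.^2,PZ) - EZ^2;
[w,z] = meshgrid(W,Z);                  % value grids shaped like PZW
CZW = sum(sum(z.*w.*PZW)) - EZ*EW;      % covariance of Z and W
a = CZW/VZ                              % slope of the regression line of W on Z
b = EW - a*EZ                           % intercept: w = a*z + b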
textbooks/stats/Probability_Theory/Applied_Probability_(Pfeiffer)/12%3A_Variance_Covariance_and_Linear_Regression/12.04%3A_Problems_on_Variance_Covariance_Linear_Regression.txt
As pointed out in the units on Expectation and Variance, the mathematical expectation $E[X] = \mu_X$ of a random variable $X$ locates the center of mass for the induced distribution, and the expectation $E[g(X)] = E[(X - E[X])^2] = \text{Var} [X] = \sigma_X^2$ measures the spread of the distribution about its center of mass. These quantities are also known, respectively, as the mean (moment) of $X$ and the second moment of $X$ about the mean. Other moments give added information. For example, the third moment about the mean $E[(X - \mu_X)^3]$ gives information about the skew, or asymetry, of the distribution about the mean. We investigate further along these lines by examining the expectation of certain functions of $X$. Each of these functions involves a parameter, in a manner that completely determines the distribution. For reasons noted below, we refer to these as transforms. We consider three of the most useful of these. Three basic transforms We define each of three transforms, determine some key properties, and use them to study various probability distributions associated with random variables. In the section on integral transforms, we show their relationship to well known integral transforms. These have been studied extensively and used in many other applications, which makes it possible to utilize the considerable literature on these transforms. Definition The moment generating function $M_X$ for random variable $X$ (i.e., for its distribution) is the function $M_X (s) = E[e^{sX}]$ ($s$ is a real or complex parameter) The characteristic function $\phi_X$ for random variable $X$ is $\varphi_X (u) = E[e^{iuX}]$ ($i^2 = -1$, $u$ is a real parameter) The generating function $g_X(s)$ for a nonnegative, integer-valued random variable $X$ is $g_X (s) = E[s^X] = \sum_k s^k P(X = k)$ The generating function $E[s^X]$ has meaning for more general random variables, but its usefulness is greatest for nonnegative, integer-valued variables, and we limit our consideration to that case. The defining expressions display similarities which show useful relationships. We note two which are particularly useful. $M_X (s) = E[e^{sX}] = E[(e^s)^X] = g_X (e^s)$ and $\varphi_X (u) = E[e^{iuX}] = M_X (iu)$ Because of the latter relationship, we ordinarily use the moment generating function instead of the characteristic function to avoid writing the complex unit i. When desirable, we convert easily by the change of variable. The integral transform character of these entities implies that there is essentially a one-to-one relationship between the transform and the distribution. Moments The name and some of the importance of the moment generating function arise from the fact that the derivatives of $M_X$ evaluateed at $s = 0$ are the moments about the origin. Specifically $M_{X}^{(k)} (0) = E[X^k]$, provided the $k$th moment exists Since expectation is an integral and because of the regularity of the integrand, we may differentiate inside the integral with respect to the parameter. $M_X'(s) = \dfrac{d}{ds} E[e^{sX}] = E[\dfrac{d}{ds} e^{sX}] = E[X e^{sX}]$ Upon setting $s = 0$, we have $M_X'(0) = E[X]$. Repeated differentiation gives the general result. The corresponding result for the characteristic function is $\varphi^{(k)} (0) = i^k E[X^k]$. Example $1$ The exponential distribution The density function is $f_X (t) = \lambda e^{-\lambda t}$ for $t \ge 0$. 
$M_X (s) = E[e^{sX}] = \int_{0}^{\infty} \lambda e^{-(\lambda - s) t}\ dt = \dfrac{\lambda}{\lambda - s}$ $M_X'(s) = \dfrac{\lambda}{(\lambda - s)^2}$ $M_X''(s) = \dfrac{2\lambda}{(\lambda - s)^3}$ $E[X] = M_X' (0) = \dfrac{\lambda}{\lambda^2} = \dfrac{1}{\lambda}$ $E[X^2] = M_X'' (0) = \dfrac{2\lambda}{\lambda^3} = \dfrac{2}{\lambda^2}$ From this we obtain $\text{Var} [X] = 2/\lambda^2 - 1/\lambda^2 = 1/\lambda^2$.
The generating function does not lend itself readily to computing moments, except that $g_X' (s) = \sum_{k = 1}^{\infty} k s^{k - 1} P(X = k)$ so that $g_X'(1) = \sum_{k = 1}^{\infty} kP(X = k) = E[X]$ For higher order moments, we may convert the generating function to the moment generating function by replacing $s$ with $e^s$, then work with $M_X$ and its derivatives.
Example $2$ The Poisson ($\mu$) distribution
$P(X = k) = e^{-\mu} \dfrac{\mu^k}{k!}$, $k \ge 0$, so that $g_X (s) = e^{-\mu} \sum_{k = 0}^{\infty} s^k \dfrac{\mu^k}{k!} = e^{-\mu} \sum_{k = 0}^{\infty} \dfrac{(s\mu)^k}{k!} = e^{-\mu} e^{\mu s} = e^{\mu (s - 1)}$ We convert to $M_X$ by replacing $s$ with $e^s$ to get $M_X (s) = e^{\mu(e^s - 1)}$. Then $M_X'(s) = e^{\mu(e^s - 1)} \mu e^s$ $M_X''(s) = e^{\mu(e^s - 1)} [\mu^2 e^{2s} + \mu e^s]$ so that $E[X] = M_X' (0) = \mu$, $E[X^2] = M_X''(0) = \mu^2 + \mu$, and $\text{Var} [X] = \mu^2 + \mu - \mu^2 = \mu$ These results agree, of course, with those found by direct computation with the distribution.
Operational properties
We refer to the following as operational properties.
(T1): If $Z = aX + b$, then $M_Z (s) = e^{bs} M_X (as)$, $\varphi_Z (u) = e^{iub} \varphi_X (au)$, $g_Z (s) = s^b g_X (s^a)$
For the moment generating function, this pattern follows from $E[e^{(aX + b)s}] = e^{bs} E[e^{(as)X}]$ Similar arguments hold for the other two.
(T2): If the pair $\{X, Y\}$ is independent, then $M_{X+Y} (s) = M_X (s) M_Y(s)$, $\varphi_{X+Y} (u) = \varphi_X (u) \varphi_Y(u)$, $g_{X+Y} (s) = g_X (s) g_Y(s)$
For the moment generating function, $e^{sX}$ and $e^{sY}$ form an independent pair for each value of the parameter $s$. By the product rule for expectation $E[e^{s(X+Y)}] = E[e^{sX} e^{sY}] = E[e^{sX}] E[e^{sY}]$ Similar arguments are used for the other two transforms.
A partial converse for (T2) is as follows:
(T3): If $M_{X + Y} (s) = M_X (s) M_Y (s)$, then the pair $\{X, Y\}$ is uncorrelated.
To show this, we obtain two expressions for $E[(X + Y)^2]$, one by direct expansion and use of linearity, and the other by taking the second derivative of the moment generating function. $E[(X + Y)^2] = E[X^2] + E[Y^2] + 2E[XY]$ $M_{X+Y}'' (s) = [M_X (s) M_Y(s)]'' = M_X'' (s) M_Y(s) + M_X (s) M_Y''(s) + 2M_X'(s) M_Y'(s)$ On setting $s = 0$ and using the fact that $M_X (0) = M_Y (0) = 1$, we have $E[(X + Y)^2] = E[X^2] + E[Y^2] + 2E[X]E[Y]$ which implies the equality $E[XY] = E[X] E[Y]$. Note that we have not shown that being uncorrelated implies the product rule.
We utilize these properties in determining the moment generating and generating functions for several of our common distributions.
Some discrete distributions
Indicator function $X = I_E$ $P(E) = p$ $g_X(s) = s^0 q + s^1 p = q + ps$ $M_X (s) = g_X (e^s) = q + pe^s$
Simple random variable $X = \sum_{i = 1}^{n} t_i I_{A_i}$ (primitive form) $P(A_i) = p_i$ $M_X(s) = \sum_{i = 1}^{n} e^{st_i} p_i$
Binomial ($n$, $p$). $X = \sum_{i = 1}^{n} I_{E_i}$ with $\{I_{E_i}: 1 \le i \le n\}$ iid $P(E_i) = p$ We use the product rule for sums of independent random variables and the generating function for the indicator function.
$g_X (s) = \prod_{i = 1}^{n} (q + ps) = (q + ps)^n$ $M_X (s) = (q + pe^s)^n$ Geometric ($p$). $P(X = k) = pq^k$ $\forall k \ge 0$ $E[X] = q/p$ We use the formula for the geometric series to get $g_X (s) = \sum_{k = 0}^{\infty} pq^k s^k = p \sum_{k = 0}^{\infty} (qs)^k = \dfrac{p}{1 - qs} M_X (s) = \dfrac{p}{1 - qe^s}$ Negative binomial ($m, p$) If $Y_m$ is the number of the trial in a Bernoulli sequence on which the $m$th success occurs, and $X_m = Y_m - m$ is the number of failures before the $m$th success, then $P(X_m = k) = P(Y_m - m = k) = C(-m, k) (-q)^k p^m$ where $C(-m, k) = \dfrac{-m (-m - 1) (-m - 2) \cdot\cdot\cdot (-m - k + 1)}{k!}$ The power series expansion about $t = 0$ shows that $(1 + t)^{-m} = 1 + C(-m, 1) t + C(-m, 2)t^2 + \cdot\cdot\cdot$ for $-1 < t < 1$ Hence, $M_{X_m} (s) = p^m \sum_{k = 0}^{\infty} C(-m, k) (-q)^k e^{sk} = [\dfrac{p}{1 - qe^s}]^m$ Comparison with the moment generating function for the geometric distribution shows that $X_m = Y_m - m$ has the same distribution as the sum of $m$ iid random variables, each geometric ($p$). This suggests that the sequence is characterized by independent, successive waiting times to success. This also shows that the expectation and variance of $X_m$ are $m$ times the expectation and variance for the geometric. Thus $E[X_m] = mq/p$ and $\text{Var} [X_m] = mq/p^2$ Poisson ($\mu$) $P(X = k) = e^{-\mu} \dfrac{\mu^k}{k!}$ $\forall k \ge 0$ In Example 13.1.2, above, we establish $g_X (s) = e^{\mu(s -1)}$ and $M_X (s) = e^{\mu (e^s - 1)}$. If $\{X, Y\}$ is an independent pair, with $X$ ~ Poisson ($\lambda$) and $Y$ ~ Poission ($\mu$), then $Z = X + Y$ ~ Poisson $(\lambda + \mu)$. Follows from (T1) and product of exponentials. Some absolutely continuous distributions Uniform on $(a, b) f_X(t) = \dfrac{1}{b - a}$ $a < t < b$ $M_X (s) = \int e^{st} f_X (t)\ dt = \dfrac{1}{b-a} \int_{a}^{b} e^{st}\ dt = \dfrac{e^{sb} - e^{sa}}{s(b - a)}$ Symmetric triangular $(-c, c)$ $f_X(t) = I_{[-c, 0)} (t) \dfrac{c + t}{c^2} + I_{[0, c]} (t) \dfrac{c - t}{c^2}$ $M_X (s) = \dfrac{1}{c^2} \int_{-c}^{0} (c + t) e^{st} \ dt + \dfrac{1}{c^2} \int_{0}^{c} (c - t) e^{st}\ dt = \dfrac{e^{cs} + e^{-cs} - 2}{c^2s^2}$ $= \dfrac{e^{cs} - 1}{cs} \cdot \dfrac{1 - e^{-cs}}{cs} = M_Y (s) M_Z (-s) = M_Y (s) M_{-Z} (s)$ where $M_Y$ is the moment generating function for $Y$ ~ uniform $(0, c)$ and similarly for $M_Z$. Thus, $X$ has the same distribution as the difference of two independent random variables, each uniform on $(0, c)$. Exponential ($\lambda$) $f_X (t) = \lambda e^{-\lambda t}$, $t \ge 0$ In example 1, above, we show that $M_X (s) = \dfrac{\lambda}{\lambda - s}$. Gamma($\alpha, \lambda$) $f_X (t) = \dfrac{1}{\Gamma(\alpha)} \lambda^{\alpha} t^{\alpha - 1} e^{-\lambda t}$ $t \ge 0$ $M_X (s) = \dfrac{\lambda^{\alpha}}{\Gamma (\alpha)} \int_{0}^{\infty} t^{\alpha - 1} e^{-(\lambda - s)t} \ dt = [\dfrac{\lambda}{\lambda - s}]^{\alpha}$ For $\alpha = n$, a positive integer, $M_X (s) = [\dfrac{\lambda}{\lambda - s}]^n$ which shows that in this case $X$ has the distribution of the sum of $n$ independent random variables each exponential $(\lambda)$. Normal ($\mu, \sigma^2$). 
• The standardized normal, $Z$ ~ $N(0, 1)$ $M_Z (s) = \dfrac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{st} e^{-t^2/2}\ dt$ Now $st - \dfrac{t^2}{2} = \dfrac{s^2}{2} - \dfrac{1}{2} (t - s)^2$ so that $M_Z (s) = e^{s^2/2} \dfrac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-(t - s)^2/2} \ dt = e^{s^2/2}$ since the integrand (including the constant $(1/\sqrt{2\pi})$ is the density for $N(s, 1)$. • $X = \sigma Z + \mu$ implies by property (T1) $M_X (s) = e^{s\mu} e^{\sigma^2 s^2/2} = \text{exp} (\dfrac{\sigma^2 s^2}{2} + s\mu)$ Example $3$ Affine combination of independent normal random variables Suppose $\{X, Y\}$ is an independent pair with $X$ ~ $N(\mu_X, \sigma_X^2)$ and $Y$ ~ $N(\mu_Y, \sigma_Y^2)$. Let $Z = aX + bY + c$. The $Z$ is normal, for by properties of expectation and variance $\mu_Z = a \mu_X + b \mu_Y + c$ and $\sigma_Z^2 = a^2 \sigma_X^2 + b^2 \sigma_Y^2$ and by the operational properties for the moment generating function $M_Z (s) = e^{sc} M_X (as) M_Y (bs) = \text{exp} (\dfrac{(a^2 \sigma_X^2 + b^2 \sigma_Y^2) s^2}{2} + s(a\mu_X + b\mu_Y + c))$ $= \text{exp} (\dfrac{\sigma_Z^2 s^2}{2} + s \mu_Z)$ This form of $M_Z$ shows that $Z$ is normally distributed. Moment generating function and simple random variables Suppose $X = \sum_{i = 1}^{n} t_i I_{A_i}$ in canonical form. That is, $A_i$ is the event $\{X = t_i\}$ for each of the distinct values in the range of $X_i$ with $p_i = P(A_i) = P(X = t_i)$. Then the moment generating function for $X$ is $M_X (s) = \sum_{i = 1}^{n} p_i e^{st_i}$ The moment generating function $M_X$ is thus related directly and simply to the distribution for random variable $X$. Consider the problem of determining the sum of an independent pair $\{X, Y\}$ of simple random variables. The moment generating function for the sum is the product of the moment generating functions. Now if $Y = \sum_{j = 1}^{m} u_j I_{B_j}$, with $P(Y = u_j) = \pi_j$, we have $M_X (s) M_Y(s) = (\sum_{i = 1}^{n} p_i e^{st_i})(\sum_{j = 1}^{m} \pi_j e^{su_j}) = \sum_{i,j} p_i \pi_j e^{s(t_i + u_j)}$ The various values are sums $t_i + u_j$ of pairs $(t_i, u_j)$ of values. Each of these sums has probability $p_i \pi_j$ for the values corresponding to $t_i, u_j$. Since more than one pair sum may have the same value, we need to sort the values, consolidate like values and add the probabilties for like values to achieve the distribution for the sum. We have an m-function mgsum for achieving this directly. It produces the pair-products for the probabilities and the pair-sums for the values, then performs a csort operation. Although not directly dependent upon the moment generating function analysis, it produces the same result as that produced by multiplying moment generating functions. Example $4$ Distribution for a sum of independent simple random variables Suppose the pair $\{X, Y\}$ is independent with distributions $X =$ [1 3 5 7] $Y =$ [2 3 4] $PX =$ [0.2 0.4 0.3 0.1] $PY =$ [0.3 0.5 0.2] Determine the distribution for $Z = X + Y$. X = [1 3 5 7]; Y = 2:4; PX = 0.1*[2 4 3 1]; PY = 0.1*[3 5 2]; [Z,PZ] = mgsum(X,Y,PX,PY); disp([Z;PZ]') 3.0000 0.0600 4.0000 0.1000 5.0000 0.1600 6.0000 0.2000 7.0000 0.1700 8.0000 0.1500 9.0000 0.0900 10.0000 0.0500 11.0000 0.0200 This could, of course, have been achieved by using icalc and csort, which has the advantage that other functions of $X$ and $Y$ may be handled. Also, since the random variables are nonnegative, integer-valued, the MATLAB convolution function may be used (see Example 13.1.7). 
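For readers who want to see what mgsum is doing without the m-file, the following minimal sketch (assuming only base MATLAB; the variable names are ours) forms all pair sums and pair probabilities for the distributions of Example 4 and consolidates like values. It reproduces the mgsum output shown above.
X = [1 3 5 7];  PX = 0.1*[2 4 3 1];
Y = 2:4;        PY = 0.1*[3 5 2];
[t,u]   = meshgrid(X,Y);         % all pairs of values (t_i, u_j)
[pt,pu] = meshgrid(PX,PY);       % matching pair probabilities
sums  = t(:) + u(:);             % pair sums t_i + u_j
probs = pt(:).*pu(:);            % pair probabilities p_i*pi_j
Z  = unique(sums)';              % distinct values of the sum
PZ = zeros(size(Z));
for k = 1:length(Z)
    PZ(k) = sum(probs(sums == Z(k)));   % consolidate like values
end
disp([Z;PZ]')                    % agrees with the mgsum result above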
By repeated use of the function mgsum, we may obtain the distribution for the sum of more than two simple random variables. The m-functions mgsum3 and mgsum4 utilize this strategy. The techniques for simple random variables may be used with the simple approximations to absolutely continuous random variables. Example $5$ Difference of uniform distribution The moment generating functions for the uniform and the symmetric triangular show that the latter appears naturally as the difference of two uniformly distributed random variables. We consider $X$ and $Y$ iid, uniform on [0,1]. tappr Enter matrix [a b] of x-range endpoints [0 1] Enter number of x approximation points 200 Enter density as a function of t t<=1 Use row matrices X and PX as in the simple case [Z,PZ] = mgsum(X,-X,PX,PX); plot(Z,PZ/d) % Divide by d to recover f(t) % plotting details --- see Figure 13.1.1 Figure 13.1.1. Density for the difference of an independent pair, uniform (0,1). The generating function The form of the generating function for a nonnegative, integer-valued random variable exhibits a number of important properties. $X = \sum_{k = 0}^{\infty} kI_{A_i}$ (canonical form) $p_k = P(A_k) = P(X = k)$ $g_X (s) = \sum_{k = 0}^{\infty} s^k p_k$ As a power series in $s$ with nonegative coefficients whose partial sums converge to one, the series converges at least for $|s| \le 1$. The coefficients of the power series display the distribution: for value $k$ the probability $p_k = P(X = k)$ is the coefficient of $s^k$. The power series expansion about the origin of an analytic function is unique. If the generating function is known in closed form, the unique power series expansion about the origin determines the distribution. If the power series converges to a known closed form, that form characterizes the distribution. For a simple random variable (i.e. $p_k = 0$ for $k > n$), $g_X$ is a polynomial. Example $6$ The Poisson distribution In Example 13.1.2, above, we establish the generating function for $X$ ~ Poisson $(\mu)$ from the distribution. Suppose, however, we simply encounter the generating function $g_X (s) = e^{m(s - 1)} = e^{-m} e^{ms}$ From the known power series for the exponential, we get $g_X (s) = e^{-m} \sum_{k = 0}^{\infty} \dfrac{(ms)^k}{k!} = e^{-m} \sum_{k = 0}^{\infty} s^k \dfrac{m^k}{k!}$ We conclude that $P(X = k) = e^{-m} \dfrac{m^k}{k!}$, $0 \le k$ which is the Poisson distribution with parameter $\mu = m$. For simple, nonnegative, integer-valued random variables, the generating functions are polynomials. Because of the product rule (T2), the problem of determining the distribution for the sum of independent random variables may be handled by the process of multiplying polynomials. This may be done quickly and easily with the MATLAB convolution function. Example $7$ Sum of independent simple random variables Suppose the pair $\{X, Y\}$ is independent, with $g_X (s) = \dfrac{1}{10} (2 + 3s + 3s^2 + 2s^5)$ $g_Y (s) = \dfrac{1}{10} (2s + 4s^2 + 4s^3)$ In the MATLAB function convolution, all powers of s must be accounted for by including zeros for the missing powers. gx = 0.1*[2 3 3 0 0 2]; % Zeros for missing powers 3, 4 gy = 0.1*[0 2 4 4]; % Zero for missing power 0 gz = conv(gx,gy); a = [' Z PZ']; b = [0:8;gz]'; disp(a) Z PZ % Distribution for Z = X + Y disp(b) 0 0 1.0000 0.0400 2.0000 0.1400 3.0000 0.2600 4.0000 0.2400 5.0000 0.1200 6.0000 0.0400 7.0000 0.0800 8.0000 0.0800 If mgsum were used, it would not be necessary to be concerned about missing powers and the corresponding zero coefficients. 
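As a further check (our sketch, not part of the original example), the coefficient vectors themselves can be used to read off means, since $g'(1) = \sum_k k p_k$; the mean of the sum is then the sum of the means.
gx = 0.1*[2 3 3 0 0 2];       % coefficients of gX, powers 0 through 5
gy = 0.1*[0 2 4 4];           % coefficients of gY, powers 0 through 3
gz = conv(gx,gy);             % coefficients of gZ = gX*gY, powers 0 through 8
EX = dot(0:5,gx);             % gX'(1) = E[X] = 1.9
EY = dot(0:3,gy);             % gY'(1) = E[Y] = 2.2
EZ = dot(0:8,gz)              % gZ'(1) = E[Z] = 4.1 = EX + EY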
Integral transforms We consider briefly the relationship of the moment generating function and the characteristic function with well known integral transforms (hence the name of this chapter). Moment generating function and the Laplace transform When we examine the integral forms of the moment generating function, we see that they represent forms of the Laplace transform, widely used in engineering and applied mathematics. Suppose $F_X$ is a probability distribution function with $F_X (-\infty) = 0$. The bilateral Laplace transform for $F_X$ is given by $\int_{-\infty}^{\infty} e^{-st} F_X (t) \ dt$ The Laplace-Stieltjes transform for $F_X$ is $\int_{-\infty}^{\infty} e^{-st} F_X (dt)$ Thus, if $M_X$ is the moment generating function for $X$, then $M_X (-s)$ is the Laplace-Stieltjes transform for $X$ (or, equivalently, for $F_X$). The theory of Laplace-Stieltjes transforms shows that under conditions sufficiently general to include all practical distribution functions $M_X (-s) = \int_{-\infty}^{\infty} e^{-st} F_X (dt) = s \int_{-\infty}^{\infty} e^{-st} F_X (t)\ dt$ Hence $\dfrac{1}{s} M_X (-s) = \int_{-\infty}^{\infty} e^{-st} F_X (t)\ dt$ The right hand expression is the bilateral Laplace transform of $F_X$. We may use tables of Laplace transforms to recover $F_X$ when $M_X$ is known. This is particularly useful when the random variable $X$ is nonnegative, so that $F_X (t) = 0$ for $t < 0$. If $X$ is absolutely continuous, then $M_X (-s) = \int_{-\infty}^{\infty} e^{-st} f_X (t) \ dt$ In this case, $M_X (-s)$ is the bilateral Laplace transform of $f_X$. For nonnegative random variable $X$, we may use ordinary tables of the Laplace transform to recover $f_X$. Example $8$ Use of Laplace transform Suppose nonnegative $X$ has moment generating function $M_X (s) = \dfrac{1}{(1 - s)}$ We know that this is the moment generating function for the exponential (1) distribution. Now, $\dfrac{1}{s} M_X (-s) = \dfrac{1}{s(1 + s)} = \dfrac{1}{s} - \dfrac{1}{1 + s}$ From a table of Laplace transforms, we find $1/s$ is the transform for the constant 1 (for $t \ge 0$) and $1/(1 + s)$ is the transform for $e^{-t}$, $t \ge 0$, so that $F_X (t) = 1 - e^{-t} t \ge 0$, as expected. Example $9$ Laplace transform and the density Suppose the moment generating function for a nonnegative random variable is $M_X (s) = [\dfrac{\lambda}{\lambda - s}]^{\alpha}$ From a table of Laplace transforms, we find that for $\alpha >0$. $\dfrac{\Gamma (\alpha)}{(s - a)^{\alpha}}$ is the Laplace transform of $t^{\alpha - 1} e^{at}$ $t \ge 0$ If we put $a = -\lambda$, we find after some algebraic manipulations $f_X (t) = \dfrac{\lambda^{\alpha} t^{\alpha - 1} e^{-\lambda t}}{\Gamma (\alpha)}$, $t \ge 0$ Thus, $X$ ~ gamma $(\alpha, \lambda)$, in keeping with the determination, above, of the moment generating function for that distribution. The characteristic function Since this function differs from the moment generating function by the interchange of parameter $s$ and $iu$, where $i$ is the imaginary unit, $i^2 = -1$, the integral expressions make that change of parameter. The result is that Laplace transforms become Fourier transforms. The theoretical and applied literature is even more extensive for the characteristic function. Not only do we have the operational properties (T1) and (T2) and the result on moments as derivatives at the origin, but there is an important expansion for the characteristic function. 
An expansion theorem
If $E[|X|^n] < \infty$, then $\varphi^{(k)} (0) = i^k E[X^k]$, for $0 \le k \le n$ and $\varphi (u) = \sum_{k = 0}^{n} \dfrac{(iu)^k}{k!} E[X^k] + o (u^n)$ as $u \to 0$
We note one limit theorem which has very important consequences.
A fundamental limit theorem
Suppose $\{F_n: 1 \le n\}$ is a sequence of probability distribution functions and $\{\varphi_n: 1 \le n\}$ is the corresponding sequence of characteristic functions. If $F$ is a distribution function such that $F_n (t) \to F(t)$ at every point of continuity of $F$, and $\varphi$ is the characteristic function for $F$, then $\varphi_n (u) \to \varphi (u)$ $\forall u$ If $\varphi_n (u) \to \varphi (u)$ for all $u$ and $\varphi$ is continuous at 0, then $\varphi$ is the characteristic function for a distribution function $F$ such that $F_n (t) \to F(t)$ at each point of continuity of $F$ — □
textbooks/stats/Probability_Theory/Applied_Probability_(Pfeiffer)/13%3A_Transform_Methods/13.01%3A_Transform_Methods.txt
The Central Limit Theorem The central limit theorem (CLT) asserts that if random variable $X$ is the sum of a large class of independent random variables, each with reasonable distributions, then $X$ is approximately normally distributed. This celebrated theorem has been the object of extensive theoretical research directed toward the discovery of the most general conditions under which it is valid. On the other hand, this theorem serves as the basis of an extraordinary amount of applied work. In the statistics of large samples, the sample average is a constant times the sum of the random variables in the sampling process . Thus, for large samples, the sample average is approximately normal—whether or not the population distribution is normal. In much of the theory of errors of measurement, the observed error is the sum of a large number of independent random quantities which contribute additively to the result. Similarly, in the theory of noise, the noise signal is the sum of a large number of random components, independently produced. In such situations, the assumption of a normal population distribution is frequently quite appropriate. We consider a form of the CLT under hypotheses which are reasonable assumptions in many practical situations. We sketch a proof of this version of the CLT, known as the Lindeberg-Lévy theorem, which utilizes the limit theorem on characteristic functions, above, along with certain elementary facts from analysis. It illustrates the kind of argument used in more sophisticated proofs required for more general cases. Consider an independent sequence $\{X_n: 1 \le n\}$ of random variables. Form the sequence of partial sums $S_n = \sum_{i = 1}^{n} X_i$ $\forall n \ge 1$ with $E[S_n] = \sum_{i = 1}^{n} E[X_i]$ and $\text{Var} [S_n] = \sum_{i = 1}^{n} \text{Var} [X_i]$ Let $S_n^*$ be the standardized sum and let $F_n$ be the distribution function for $S_n^*$. The CLT asserts that under appropriate conditions, $F_n (t) \to \phi(t)$ as $n \to \infty$ for all $t$. We sketch a proof of the theorem under the condition the $X_i$ form an iid class. Central Limit Theorem (Lindeberg-Lévy form) If $\{X_n: 1 \le n\}$ is iid, with $E[X_i] = \mu$, $\text{Var} [X_i] = \sigma^2$, and $S_n^* = \dfrac{S_n - n\mu}{\sigma \sqrt{n}}$ then $F_n (t) \to \phi (t)$ as $n \to \infty$, for all $t$ IDEAS OF A PROOF There is no loss of generality in assuming $\mu = 0$. Let $\phi$ be the common characteristic function for the $X_i$, and for each $n$ let $\phi_n$ be the characteristic function for $S_n^*$. We have $\varphi (t) = E[e^{itX}]$ and $\varphi_n (t) = E[e^{itS_n^*}] = \varphi^n (t/\sigma \sqrt{n})$ Using the power series expansion of $\varphi$ about the origin noted above, we have $\varphi (t) = 1 - \dfrac{\sigma^2 t^2}{2} + \beta (t)$ where $\beta (t) = o (t^2)$ as $t \to 0$ This implies $[\varphi (t/\sigma \sqrt{n}) - (1 - t^2/2n)] = [\beta (t /\sigma \sqrt{n})] = o(t^2/\sigma^2 n)$ so that $n[\varphi (t/\sigma \sqrt{n}) - (1 - t^2/2n)] \to 0$ as $n \to \infty$ A standard lemma of analysis ensures $(1 - \dfrac{t^2}{2n})^n \to e^{-t^2/2}$ as $n \to \infty$ so that $\varphi (t/\sigma \sqrt{n}) \to e^{-t^2/2}$ as $n \to \infty$ for all $t$ By the convergence theorem on characteristic functions, above, $F_n(t) \to \phi (t)$. — □ The theorem says that the distribution functions for sums of increasing numbers of the Xi converge to the normal distribution function, but it does not tell how fast. 
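A quick numerical check of the key limit in the proof sketch can be made for a specific distribution. The following sketch (our illustration; it assumes the $X_i$ are uniform on $(-1, 1)$, so that $\varphi(u) = \sin (u)/u$ and $\sigma^2 = 1/3$) evaluates $\varphi^n (t/\sigma \sqrt{n})$ at $t = 1$ for increasing $n$; the computed values approach $e^{-t^2/2} \approx 0.6065$.
t = 1; sig = sqrt(1/3);          % X_i uniform on (-1,1): mean 0, variance 1/3
for n = [10 100 1000 10000]
    a = t/(sig*sqrt(n));
    phin = (sin(a)/a)^n;         % characteristic function of S_n^* evaluated at t
    disp([n phin exp(-t^2/2)])   % second column approaches the third
end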
It is instructive to consider some examples, which are easily worked out with the aid of our m-functions. Demonstration of the central limit theorem Discrete examples We first examine the gaussian approximation in two cases. We take the sum of five iid simple random variables in each case. The first variable has six distinct values; the second has only three. The discrete character of the sum is more evident in the second case. Here we use not only the gaussian approximation, but the gaussian approximation shifted one half unit (the so called continuity correction for integer-values random variables). The fit is remarkably good in either case with only five terms. A principal tool is the m-function diidsum (sum of discrete iid random variables). It uses a designated number of iterations of mgsum. Example $1$ First random variable X = [-3.2 -1.05 2.1 4.6 5.3 7.2]; PX = 0.1*[2 2 1 3 1 1]; EX = X*PX' EX = 1.9900 VX = dot(X.^2,PX) - EX^2 VX = 13.0904 [x,px] = diidsum(X,PX,5); % Distribution for the sum of 5 iid rv F = cumsum(px); % Distribution function for the sum stairs(x,F) % Stair step plot hold on plot(x,gaussian(5*EX,5*VX,x),'-.') % Plot of gaussian distribution function % Plotting details (see Figure 13.2.1) Figure 13.2.1. Distribution for the sum of five iid random variables. Example $2$ Second random variable X = 1:3; PX = [0.3 0.5 0.2]; EX = X*PX' EX = 1.9000 EX2 = X.^2*PX' EX2 = 4.1000 VX = EX2 - EX^2 VX = 0.4900 [x,px] = diidsum(X,PX,5); % Distribution for the sum of 5 iid rv F = cumsum(px); % Distribution function for the sum stairs(x,F) % Stair step plot hold on plot(x,gaussian(5*EX,5*VX,x),'-.') % Plot of gaussian distribution function plot(x,gaussian(5*EX,5*VX,x+0.5),'o') % Plot with continuity correction % Plotting details (see Figure 13.2.2) Figure 13.2.2. Distribution for the sum of five iid random variables. As another example, we take the sum of twenty one iid simple random variables with integer values. We examine only part of the distribution function where most of the probability is concentrated. This effectively enlarges the x-scale, so that the nature of the approximation is more readily apparent. Example $3$ Sum of twenty-one iid random variables X = [0 1 3 5 6]; PX = 0.1*[1 2 3 2 2]; EX = dot(X,PX) EX = 3.3000 VX = dot(X.^2,PX) - EX^2 VX = 4.2100 [x,px] = diidsum(X,PX,21); F = cumsum(px); FG = gaussian(21*EX,21*VX,x); stairs(40:90,F(40:90)) hold on plot(40:90,FG(40:90)) % Plotting details (see Figure 13.2.3) Figure 13.2.3. Distribution for the sum of twenty one iid random variables. Absolutely continuous examples By use of the discrete approximation, we may get approximations to the sums of absolutely continuous random variables. The results on discrete variables indicate that the more values the more quickly the conversion seems to occur. In our next example, we start with a random variable uniform on (0, 1). Example $4$ Sum of three iid, uniform random variables. Suppose $X$ ~ uniform (0, 1). Then $E[X] = 0.5$ and $\text{Var} [X] = 1/12$. tappr Enter matrix [a b] of x-range endpoints [0 1] Enter number of x approximation points 100 Enter density as a function of t t<=1 Use row matrices X and PX as in the simple case EX = 0.5; VX = 1/12; [z,pz] = diidsum(X,PX,3); F = cumsum(pz); FG = gaussian(3*EX,3*VX,z); length(z) ans = 298 a = 1:5:296; % Plot every fifth point plot(z(a),F(a),z(a),FG(a),'o') % Plotting details (see Figure 13.2.4) Figure 13.2.4. Distribution for the sum of three iid uniform random variables. 
For the sum of only three random variables, the fit is remarkably good. This is not entirely surprising, since the sum of two gives a symmetric triangular distribution on (0, 2). Other distributions may take many more terms to get a good fit. Consider the following example. Example $5$ Sum of eight iid random variables Suppose the density is one on the intervals (-1, -0.5) and (0.5, 1). Although the density is symmetric, it has two separate regions of probability. From symmetry. $E[X] = 0$. Calculations show $\text{Var}[X] = E[X^2] = 7/12$. The MATLAB computations are: tappr Enter matrix [a b] of x-range endpoints [-1 1] Enter number of x approximation points 200 Enter density as a function of t (t<=-0.5)|(t>=0.5) Use row matrices X and PX as in the simple case [z,pz] = diidsum(X,PX,8); VX = 7/12; F = cumsum(pz); FG = gaussian(0,8*VX,z); plot(z,F,z,FG) % Plottting details (see Figure 13.2.5) Figure 13.2.5. Distribution for the sum of eight iid uniform random variables. Although the sum of eight random variables is used, the fit to the gaussian is not as good as that for the sum of three in Example 13.2.4. In either case, the convergence is remarkable fast—only a few terms are needed for good approximation. Convergence phenomena in probability theory The central limit theorem exhibits one of several kinds of convergence important in probability theory, namely convergence in distribution (sometimes called weak convergence). The increasing concentration of values of the sample average random variable Anwith increasing $n$ illustrates convergence in probability. The convergence of the sample average is a form of the so-called weak law of large numbers. For large enough n the probability that $A_n$ lies within a given distance of the population mean can be made as near one as desired. The fact that the variance of $A_n$ becomes small for large n illustrates convergence in the mean (of order 2). $E[|A_n - \mu|^2] \to 0$ as $n \to \infty$ In the calculus, we deal with sequences of numbers. If $\{a_n: 1 \le n\}$ s a sequence of real numbers, we say the sequence converges iff for $N$ sufficiently large $a_n$ approximates arbitrarily closely some number $L$ for all $n \ge N$. This unique number $L$ is called the limit of the sequence. Convergent sequences are characterized by the fact that for large enough $N$, the distance $|a_n - a_m|$ between any two terms is arbitrarily small for all $n$, $m \ge N$. Such a sequence is said to be fundamental (or Cauchy). To be precise, if we let $\epsilon > 0$ be the error of approximation, then the sequence is • Convergent iff there exists a number $L$ such that for any $\epsilon > 0$ there is an $N$ such that $|L - a_n| \le \epsilon$ for all $n \ge N$ • Fundamental iff for any $\epsilon > 0$ there is an $N$ such that $|a_n - a_m| \le \epsilon$ for all $n, m \ge N$ As a result of the completeness of the real numbers, it is true that any fundamental sequence converges (i.e., has a limit). And such convergence has certain desirable properties. For example the limit of a linear combination of sequences is that linear combination of the separate limits; and limits of products are the products of the limits. The notion of convergent and fundamental sequences applies to sequences of real-valued functions with a common domain. For each $x$ in the domain, we have a sequence $\{f_n (x): 1 \le n\}$ of real numbers. The sequence may converge for some $x$ and fail to converge for others. 
A somewhat more restrictive condition (and often a more desirable one) for sequences of functions is uniform convergence. Here the uniformity is over values of the argument $x$. In this case, for any $\epsilon > 0$ there exists an $N$ which works for all $x$ (or for some suitable prescribed set of $x$). These concepts may be applied to a sequence of random variables, which are real-valued functions with domain $\Omega$ and argument $\omega$. Suppose $\{X_n: 1 \le n\}$ is is a sequence of real random variables. For each argument $\omega$ we have a sequence $\{X_n (\omega): 1 \le n\}$ of real numbers. It is quite possible that such a sequence converges for some ω and diverges (fails to converge) for others. As a matter of fact, in many important cases the sequence converges for all $\omega$ except possibly a set (event) of probability zero. In this case, we say the seqeunce converges almost surely (abbreviated a.s.). The notion of uniform convergence also applies. In probability theory we have the notion of almost uniform convergence. This is the case that the sequence converges uniformly for all $\omega$ except for a set of arbitrarily small probability. The notion of convergence in probability noted above is a quite different kind of convergence. Rather than deal with the sequence on a pointwise basis, it deals with the random variables as such. In the case of sample average, the “closeness” to a limit is expressed in terms of the probability that the observed value $X_n (\omega)$ should lie close the the value $X(\omega)$ of the limiting random variable. We may state this precisely as follows: A sequence $\{X_n: 1 \le n\}$ converges to Xin probability, designated $X_n \stackrel{P}\longrightarrow X$ iff for any $\epsilon > 0$. $\text{lim}_n P(|X - X_n| > \epsilon) = 0$ There is a corresponding notion of a sequence fundamental in probability. The following schematic representation may help to visualize the difference between almost-sure convergence and convergence in probability. In setting up the basic probability model, we think in terms of “balls” drawn from a jar or box. Instead of balls, consider for each possible outcome $\omega$ a “tape” on which there is the sequence of values $X_1 (\omega)$, $X_2 (\omega)$, $X_3 (\omega)$, $\cdot\cdot\cdot$. • If the sequence of random variable converges a.s. to a random variable $X$, then there is an set of “exceptional tapes” which has zero probability. For all other tapes, $X_n (\omega) \to X(\omega)$. This means that by going far enough out on any such tape, the values $X_n (\omega)$ beyond that point all lie within a prescribed distance of the value $X(\omega)$ of the limit random variable. • If the sequence converges in probability, the situation may be quite different. A tape is selected. For $n$ sufficiently large, the probability is arbitrarily near one that the observed value $X_n (\omega)$ lies within a prescribed distance of $X(\omega)$. This says nothing about the values $X_m (\omega)$ on the selected tape for any larger $m$. In fact, the sequence on the selected tape may very well diverge. It is not difficult to construct examples for which there is convergence in probability but pointwise convergence for no $\omega$. It is easy to confuse these two types of convergence. The kind of convergence noted for the sample average is convergence in probability (a “weak” law of large numbers). What is really desired in most cases is a.s. convergence (a “strong” law of large numbers). 
It turns out that for a sampling process of the kind used in simple statistics, the convergence of the sample average is almost sure (i.e., the strong law holds). To establish this requires much more detailed and sophisticated analysis than we are prepared to make in this treatment. The notion of mean convergence illustrated by the reduction of $\text{Var} [A_n]$ with increasing $n$ may be expressed more generally and more precisely as follows. A sequence $\{X_n: 1 \le n\}$ converges in the mean of order $p$ to $X$ iff $E[|X - X_n|^p] \to 0$ as $n \to \infty$ designated $X_n \stackrel{L^p}\longrightarrow X$; as $n \to \infty$ If the order $p$ is one, we simply say the sequence converges in the mean. For $p = 2$, we speak of mean-square convergence. The introduction of a new type of convergence raises a number of questions. 1. There is the question of fundamental (or Cauchy) sequences and convergent sequences. 2. Do the various types of limits have the usual properties of limits? Is the limit of a linear combination of sequences the linear combination of the limits? Is the limit of products the product of the limits? 3. What conditions imply the various kinds of convergence? 4. What is the relation between the various kinds of convergence? Before sketching briefly some of the relationships between convergence types, we consider one important condition known as uniform integrability. According to the property (E9b) for integrals $X$ is integrable iff $E[I_{\{|X_i|>a\}} |X_t|] \to 0$ as $a \to \infty$ Roughly speaking, to be integrable a random variable cannot be too large on too large a set. We use this characterization of the integrability of a single random variable to define the notion of the uniform integrability of a class. Definition An arbitray class $\{X_t: t \in T\}$ is uniformly integrable (abbreviated u.i.) with respect to probability measure $P$ iff $\text{sup}_{t \in T} E[I_{\{|X_i| > a\}} | X_t|] \to 0$ as $a \to \infty$ This condition plays a key role in many aspects of theoretical probability. The relationships between types of convergence are important. Sometimes only one kind can be established. Also, it may be easier to establish one type which implies another of more immediate interest. We simply state informally some of the important relationships. A somewhat more detailed summary is given in PA, Chapter 17. But for a complete treatment it is necessary to consult more advanced treatments of probability and measure. Relationships between types of convergence for probability measures Consider a sequence $\{X_n: 1 \le n\}$ of random variables. It converges almost surely iff it converges almost uniformly. If it converges almost surely, then it converges in probability. It converges in mean, order $p$, iff it is uniformly integrable and converges in probability. If it converges in probability, then it converges in distribution (i.e. weakly). Various chains of implication can be traced. For example • Almost sure convergence implies convergence in probability implies convergence in distribution. • Almost sure convergence and uniform integrability implies convergence in mean $p$. We do not develop the underlying theory. While much of it could be treated with elementary ideas, a complete treatment requires considerable development of the underlying measure theory. However, it is important to be aware of these various types of convergence, since they are frequently utilized in advanced treatments of applied probability and of statistics.
textbooks/stats/Probability_Theory/Applied_Probability_(Pfeiffer)/13%3A_Transform_Methods/13.02%3A_Convergence_and_the_Central_Limit_Theorem.txt
Simple Random Samples and Statistics We formulate the notion of a (simple) random sample, which is basic to much of classical statistics. Once formulated, we may apply probability theory to exhibit several basic ideas of statistical analysis. We begin with the notion of a population distribution. A population may be most any collection of individuals or entities. Associated with each member is a quantity or a feature that can be assigned a number. The quantity varies throughout the population. The population distribution is the distribution of that quantity among the members of the population. If each member could be observed, the population distribution could be determined completely. However, that is not always feasible. In order to obtain information about the population distribution, we select “at random” a subset of the population and observe how the quantity varies over the sample. Hopefully, the sample distribution will give a useful approximation to the population distribution. The sampling process We take a sample of size $n$, which means we select n members of the population and observe the quantity associated with each. The selection is done in such a manner that on any trial each member is equally likely to be selected. Also, the sampling is done in such a way that the result of any one selection does not affect, and is not affected by, the others. It appears that we are describing a composite trial. We model the sampling process as follows: Let $X_i$, $1 \le i \le n$ be the random variable for the ith component trial. Then the class $\{X_i: 1 \le i \le n\}$ is iid, with each member having the population distribution. This provides a model for sampling either from a very large population (often referred to as an infinite population) or sampling with replacement from a small population. The goal is to determine as much as possible about the character of the population. Two important parameters are the mean and the variance. We want the population mean and the population variance. If the sample is representative of the population, then the sample mean and the sample variance should approximate the population quantities. • The sampling process is the iid class $\{X_i: 1 \le i \le n\}$. • A random sample is an observation, or realization, $(t_1, t_2, \cdot\cdot\cdot, t_n)$ of the sampling process. The sample average and the population mean Consider the numerical average of the values in the sample $\bar{x} = \dfrac{1}{n} \sum_{i = 1}^{n} t_i$. This is an observation of the sample average $A_n = \dfrac{1}{n} \sum_{i = 1}^{n} X_i = \dfrac{1}{n} S_n$ The sample sum $S_n$ and the sample average $A_n$ are random variables. If another observation were made (another sample taken), the observed value of these quantities would probably be different. Now $S_n$ and $A_n$ are functions of the random variables $\{X_i: 1 \le i \le n\}$ in the sampling process. As such, they have distributions related to the population distribution (the common distribution of the $X_i$). According to the central limit theorem, for any reasonable sized sample they should be approximately normally distributed. As the examples demonstrating the central limit theorem show, the sample size need not be large in many cases. 
Now if the population mean $E[X]$ is $\mu$ and the population variance $\text{Var} [X]$ is $\sigma^2$, then $E[S_n] = \sum_{i = 1}^{n} E[X_i] = nE[X] = n\mu$ and $\text{Var}[S_n] = \sum_{i = 1}^{n} \text{Var} [X_i] = n \text{Var} [X] = n \sigma^2$ so that $E[A_n] = \dfrac{1}{n} E[S_n] = \mu$ and $\text{Var}[A_n] = \dfrac{1}{n^2} \text{Var} [S_n] = \sigma^2/n$ Herein lies the key to the usefulness of a large sample. The mean of the sample average $A_n$ is the same as the population mean, but the variance of the sample average is $1/n$ times the population variance. Thus, for large enough sample, the probability is high that the observed value of the sample average will be close to the population mean. The population standard deviation, as a measure of the variation is reduced by a factor $1/\sqrt{n}$. Example $1$ Sample size Suppose a population has mean $\mu$ and variance $\sigma^2$. A sample of size $n$ is to be taken. There are complementary questions: 1. If $n$ is given, what is the probability the sample average lies within distance a from the population mean? 2. What value of $n$ is required to ensure a probability of at least p that the sample average lies within distance a from the population mean? Solution Suppose the sample variance is known or can be approximated reasonably. If the sample size $n$ is reasonably large, depending on the population distribution (as seen in the previous demonstrations), then $A_n$ is approximately $N(\mu, \sigma^2/n)$. 1. Sample size given, probability to be determined. $p = P(|A_n - \mu| \le a) = P(|\dfrac{A_n - \mu}{\sigma/\sqrt{n}}| \le \dfrac{a \sqrt{n}}{\sigma} = 2\phi (a \sqrt{n}/\sigma) -1$ 2. Sample size to be determined, probability specified. $2 \phi (a \sqrt{n}/\sigma) - 1 \ge p$ iff $\phi (a\sqrt{n} /\sigma) \ge \dfrac{p + 1}{2}$ Find from a table or by use of the inverse normal function the value of $x = a\sqrt{n}/\sigma$ required to make $\phi (x)$ at least $(p + 1)/2$. Then $n \ge \sigma^2 (x/a)^2 = (\dfrac{\sigma}{a})^2 x^2$ We may use the MATLAB function norminv to calculate values of $x$ for various $p$. p = [0.8 0.9 0.95 0.98 0.99]; x = norminv(0,1,(1+p)/2); disp([p;x;x.^2]') 0.8000 1.2816 1.6424 0.9000 1.6449 2.7055 0.9500 1.9600 3.8415 0.9800 2.3263 5.4119 0.9900 2.5758 6.6349 For $p = 0.95$, $\sigma = 2$, $a = 0.2$, $n \ge (2/0.2)^2 3.8415 = 384.15$. Use at least 385 or perhaps 400 because of uncertainty about the actual $\sigma^2$ The idea of a statistic As a function of the random variables in the sampling process, the sample average is an example of a statistic. Definition: statistic A statistic is a function of the class $\{X_i: 1 \le i \le n\}$ which uses explicitly no unknown parameters of the population. Example $2$ Statistics as functions of the sampling progress The random variable $W = \dfrac{1}{n} \sum_{i = 1}^{n} (X_i - \mu)^2$, where $\mu = E[X]$ is not a statistic, since it uses the unknown parameter $\mu$. However, the following is a statistic. $V_n^* = \dfrac{1}{n} \sum_{i = 1}^{n} (X_i - A_n)^2 = \dfrac{1}{n} \sum_{i = 1}^{n} X_i^2 - A_n^2$ It would appear that $V_n^*$ might be a reasonable estimate of the population variance. However, the following result shows that a slight modification is desirable. Example $3$ An estimator for the population variance The statistic $V_n = \dfrac{1}{n - 1} \sum_{i = 1}^{n} (X_i - A_n)^2$ is an estimator for the population variance. 
VERIFICATION Consider the statistic $V_n^* = \dfrac{1}{n} \sum_{i = 1}^{n} (X_i - A_n)^2 = \dfrac{1}{n} \sum_{i = 1}^{n} X_i^2 - A_n^2$ Noting that $E[X^2] = \sigma^2 + \mu^2$, we use the last expression to show $E[V_n^*] = \dfrac{1}{n} n (\sigma^2 + \mu^2) - (\dfrac{\sigma^2}{n} + \mu^2) = \dfrac{n - 1}{n} \sigma^2$ The quantity has a bias in the average. If we consider $V_n = \dfrac{n}{n - 1} V_n^* = \dfrac{1}{n - 1} \sum_{i = 1}^{n} (X_i - A_n)^2$, then $E[V_n] = \dfrac{n}{n - 1} \dfrac{n - 1}{n} \sigma^2 = \sigma^2$ The quantity $V_n$ with $1/(n - 1)$ rather than $1/n$ is often called the sample variance to distinguish it from the population variance. If the set of numbers $(t_1, t_2, \cdot\cdot\cdot, t_N)$ represent the complete set of values in a population of $N$ members, the variance for the population would be given by $\sigma^2 = \dfrac{1}{N} \sum_{i = 1}^{N} t_i^2 - (\dfrac{1}{N} \sum_{i = 1}^{N} t_i)^2$ Here we use $1/N$ rather than $1/(N -1)$. Since the statistic $V_n$ has mean value $\sigma^2$, it seems a reasonable candidate for an estimator of the population variance. If we ask how good is it, we need to consider its variance. As a random variable, it has a variance. An evaluation similar to that for the mean, but more complicated in detail, shows that $\text{Var} [V_n] = \dfrac{1}{n} (\mu_4 - \dfrac{n - 3}{n - 1} \sigma^4)$ where $\mu_4 = E[(X - \mu)^4]$ For large $n$, $\text{Var} [V_n]$ is small, so that $V_n$ is a good large-sample estimator for $\sigma^2$. Example $4$ A sampling demonstration of the CLT Consider a population random variable $X$ ~ uniform [-1, 1]. Then $E[X] = 0$ and $\text{Var} [X] = 1/3$. We take 100 samples of size 100, and determine the sample sums. This gives a sample of size 100 of the sample sum random variable $S_{100}$, which has mean zero and variance 100/3. For each observed value of the sample sum random variable, we plot the fraction of observed sums less than or equal to that value. This yields an experimental distribution function for $S_{100}$, which is compared with the distribution function for a random variable $Y$ ~ $N(0, 100/3)$. rand('seed',0) % Seeds random number generator for later comparison tappr % Approximation setup Enter matrix [a b] of x-range endpoints [-1 1] Enter number of x approximation points 100 Enter density as a function of t 0.5*(t<=1) Use row matrices X and PX as in the simple case qsample % Creates sample Enter row matrix of VALUES X Enter row matrix of PROBABILITIES PX Sample size n = 10000 % Master sample size 10,000 Sample average ex = 0.003746 Approximate population mean E(X) = 1.561e-17 Sample variance vx = 0.3344 Approximate population variance V(X) = 0.3333 m = 100; a = reshape(T,m,m); % Forms 100 samples of size 100 A = sum(a); % Matrix A of sample sums [t,f] = csort(A,ones(1,m)); % Sorts A and determines cumulative p = cumsum(f)/m; % fraction of elements <= each value pg = gaussian(0,100/3,t); % Gaussian dbn for sample sum values plot(t,p,'k-',t,pg,'k-.') % Comparative plot % Plotting details (see Figure 13.3.1) Figure 13.3.1. The central limit theorem for sample sums.
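A similar simulation (our sketch, with illustrative sample sizes) shows the bias removed by the $1/(n - 1)$ factor established above: with $X$ uniform on $[-1, 1]$, so that $\sigma^2 = 1/3$, the average of $V_n^*$ over many samples is near $(n - 1)\sigma^2/n$, while the average of $V_n$ is near $\sigma^2$.
rand('seed',0)
n = 10;  m = 10000;              % m samples, each of size n
T = 2*rand(n,m) - 1;             % each column is one sample from uniform [-1,1]
A  = mean(T);                    % sample averages
Vs = mean(T.^2) - A.^2;          % V_n^* (the 1/n version) for each sample
V  = (n/(n-1))*Vs;               % V_n, the corrected sample variance
disp([mean(Vs)  mean(V)  1/3])   % first entry near (n-1)/(3n) = 0.3, second near 1/3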
textbooks/stats/Probability_Theory/Applied_Probability_(Pfeiffer)/13%3A_Transform_Methods/13.03%3A_Simple_Random_Samples_and_Statistics.txt
Exercise $1$ Calculate directly the generating function $g_X(s)$ for the geometric $(p)$ distribution.
Answer $g_X (s) = E[s^X] = \sum_{k = 0}^{\infty} p_k s^k = p \sum_{k = 0}^{\infty} q^k s^k = \dfrac{p}{1 - qs}$ (geometric series)
Exercise $2$ Calculate directly the generating function $g_X(s)$ for the Poisson $(\mu)$ distribution.
Answer $g_X (s) = E[s^X] = \sum_{k = 0}^{\infty} p_k s^k = e^{-\mu} \sum_{k = 0}^{\infty} \dfrac{\mu^k s^k}{k!} = e^{-\mu} e^{\mu s} = e^{\mu (s - 1)}$
Exercise $3$ A projection bulb has life (in hours) represented by $X$ ~ exponential (1/50). The unit will be replaced immediately upon failure or at 60 hours, whichever comes first. Determine the moment generating function for the time $Y$ to replacement.
Answer With $a = 60$ and $\lambda = 1/50$, $Y = I_{[0, a]} (X) X + I_{(a, \infty)} (X) a$ $e^{sY} = I_{[0, a]} (X) e^{sX} + I_{(a, \infty)} (X) e^{as}$ $M_Y (s) = \int_{0}^{a} e^{st} \lambda e^{-\lambda t}\ dt + e^{sa} \int_{a}^{\infty} \lambda e^{-\lambda t}\ dt$ $= \dfrac{\lambda}{\lambda - s} [1 - e^{-(\lambda - s) a}] + e^{-(\lambda - s) a}$
Exercise $4$ Simple random variable $X$ has distribution $X =$ [-3 -2 0 1 4] $PX =$ [0.15 0.20 0.30 0.25 0.10] a. Determine the moment generating function for $X$ b. Show by direct calculation that $M_X' (0) = E[X]$ and $M_X'' (0) = E[X^2]$.
Answer $M_X (s) = 0.15 e^{-3s} + 0.20 e^{-2s} + 0.30 + 0.25 e^s + 0.10 e^{4s}$ $M_X' (s) = -3 \cdot 0.15 e^{-3s} - 2 \cdot 0.20 e^{-2s} + 0 + 0.25 e^{s} + 4 \cdot 0.10 e^{4s}$ $M_X''(s) = (-3)^2 \cdot 0.15 e^{-3s} + (-2)^2 \cdot 0.20 e^{-2s} + 0 + 0.25 e^{s} + 4^2 \cdot 0.10 e^{4s}$ Setting $s = 0$ and using $e^0 = 1$ gives the desired results.
Exercise $5$ Use the moment generating function to obtain the variances for the following distributions: Exponential $(\lambda)$ Gamma ($\alpha, \lambda$) Normal ($\mu, \sigma^2$)
Answer a. Exponential: $M_X(s) = \dfrac{\lambda}{\lambda - s}$ $M_X'(s) = \dfrac{\lambda}{(\lambda - s)^2}$ $M_X''(s) = \dfrac{2\lambda}{(\lambda - s)^3}$ $E[X] = \dfrac{\lambda}{\lambda^2} = \dfrac{1}{\lambda}$ $E[X^2] = \dfrac{2\lambda}{\lambda^3} = \dfrac{2}{\lambda^2}$ $\text{Var}[X] = \dfrac{2}{\lambda^2} - (\dfrac{1}{\lambda})^2= \dfrac{1}{\lambda^2}$
b. Gamma ($\alpha, \lambda$): $M_X (s) = (\dfrac{\lambda}{\lambda - s})^{\alpha}$ $M_X' (s) = \alpha (\dfrac{\lambda}{\lambda - s})^{\alpha - 1} \dfrac{\lambda}{(\lambda - s)^2} = \alpha (\dfrac{\lambda}{\lambda - s})^{\alpha} \dfrac{1}{\lambda - s}$ $M_X'' (s) = \alpha^2 (\dfrac{\lambda}{\lambda - s})^{\alpha}\dfrac{1}{\lambda - s} \dfrac{1}{\lambda - s} + \alpha (\dfrac{\lambda}{\lambda - s})^{\alpha} \dfrac{1}{(\lambda - s)^2}$ $E[X] =\dfrac{\alpha}{\lambda}$ $E[X^2] =\dfrac{\alpha^2 + \alpha}{\lambda^2}$ $\text{Var} [X] = \dfrac{\alpha}{\lambda^2}$
c. Normal ($\mu, \sigma^2$): $M_X (s) = \text{exp} (\dfrac{\sigma^2 s^2}{2} + \mu s)$ $M_X'(s) = M_X (s) \cdot (\sigma^2 s + \mu)$ $M_X''(s) = M_X (s) \cdot (\sigma^2 s + \mu)^2 + M_X (s) \sigma^2$ $E[X] = \mu$ $E[X^2] = \mu^2 + \sigma^2$ $\text{Var} [X] = \sigma^2$
Exercise $6$ The pair $\{X, Y\}$ is iid with common moment generating function $\dfrac{\lambda^3}{(\lambda - s)^3}$. Determine the moment generating function for $Z = 2X - 4Y + 3$.
Answer $M_Z(s) = e^{3s} (\dfrac{\lambda}{\lambda - 2s})^3 (\dfrac{\lambda}{\lambda + 4s})^3$
Exercise $7$ The pair $\{X, Y\}$ is iid with common moment generating function $M_X (s) = (0.6 + 0.4e^s)$. Determine the moment generating function for $Z = 5X + 2Y$.
Answer $M_Z (s) = (0.6 + 0.4e^{5s})(0.6 + 0.4e^{2s})$
Exercise $8$ Use the moment generating function for the symmetric triangular distribution on $(-c, c)$ as derived in the section "Three Basic Transforms". 1. Obtain an expression for the symmetric triangular distribution on $(a, b)$ for any $a < b$. 2. Use the result of part (a) to show that the sum of two independent random variables uniform on $(a, b)$ has symmetric triangular distribution on $(2a, 2b)$.
Answer Let $m = (a + b)/2$ and $c = (b - a)/2$. If $Y$ ~ symmetric triangular on $(-c, c)$, then $X = Y + m$ is symmetric triangular on $(m - c, m + c) = (a, b)$ and $M_X (s) = e^{ms} M_Y (s) = \dfrac{e^{cs} + e^{-cs} - 2}{c^2s^2} e^{ms} = \dfrac{e^{(m + c)s} + e^{(m - c)s} - 2e^{ms}}{c^2s^2} = \dfrac{e^{bs} + e^{as} - 2e^{\dfrac{a+b}{2}s}}{(\dfrac{b - a}{2})^2s^2}$ For the sum of an independent pair, each uniform on $(a, b)$, $M_{X + Y} (s) = [\dfrac{e^{sb} - e^{sa}}{s(b - a)}]^2 = \dfrac{e^{s2b}+ e^{s2a} - 2e^{s(b + a)}}{s^2 (b - a)^2}$ which is the form just obtained, with $a, b$ replaced by $2a, 2b$.
Exercise $9$ Random variable $X$ has moment generating function $\dfrac{p^2}{(1 - qe^s)^2}$. a. Use derivatives to determine $E[X]$ and $\text{Var} [X]$. b. Recognize the distribution from the form and compare $E[X]$ and $\text{Var} [X]$ with the result of part (a).
Answer $[p^2 (1 - qe^s)^{-2}]' = \dfrac{2p^2qe^s}{(1 - qe^s)^3}$ so that $E[X] = 2q/p$ $[p^2 (1 - qe^s)^{-2}]'' = \dfrac{6p^2 q^2 e^s}{(1 - qe^s)^4} + \dfrac{2p^2qe^s}{(1 - qe^s)^3}$ so that $E[X^2] = \dfrac{6q^2}{p^2} + \dfrac{2q}{p}$ $\text{Var} [X] = \dfrac{2q^2}{p^2} + \dfrac{2q}{p} = \dfrac{2(q^2 + pq)}{p^2} = \dfrac{2q}{p^2}$ $X$ ~ negative binomial $(2, p)$, which has $E[X] = 2q/p$ and $\text{Var} [X] = 2q/p^2$.
Exercise $10$ The pair $\{X, Y\}$ is independent. $X$ ~ Poisson (4) and $Y$ ~ geometric (0.3). Determine the generating function $g_Z$ for $Z = 3X + 2Y$.
Answer $g_Z (s) = g_X (s^3) g_Y (s^2) = e^{4(s^3-1)} \cdot \dfrac{0.3}{1 - 0.7s^2}$
Exercise $11$ Random variable $X$ has moment generating function $M_X (s) = \dfrac{1}{1 - 3s} \cdot \text{exp} (16s^2/2 + 3s)$ By recognizing forms and using rules of combinations, determine $E[X]$ and $\text{Var} [X]$.
Answer $X = X_1 + X_2$ with $X_1$ ~ exponential (1/3) and $X_2$ ~ $N(3, 16)$ $E[X] = 3 + 3 = 6$ $\text{Var} [X] = 9 + 16 = 25$
Exercise $12$ Random variable $X$ has moment generating function $M_X (s) = \dfrac{\text{exp} (3(e^s - 1))}{1 - 5s} \cdot \text{exp} (16s^2/2 + 3s)$ By recognizing forms and using rules of combinations, determine $E[X]$ and $\text{Var} [X]$.
Answer $X = X_1 + X_2 + X_3$, with $X_1$ ~ Poisson (3), $X_2$ ~ exponential (1/5), $X_3$ ~ $N(3, 16)$ $E[X] = 3 + 5 + 3 = 11$ $\text{Var} [X] = 3 + 25 + 16 = 44$
Exercise $13$ Suppose the class $\{A, B, C\}$ of events is independent, with respective probabilities 0.3, 0.5, 0.2. Consider $X = -3I_A + 2I_B + 4I_C$ a. Determine the moment generating functions for $-3I_A$, $2I_B$, $4I_C$ and use properties of moment generating functions to determine the moment generating function for $X$. b. Use the moment generating function to determine the distribution for $X$. c. Use canonic to determine the distribution. Compare with result (b). d. Use distributions for the separate terms; determine the distribution for the sum with mgsum3. Compare with result (b).
Answer $M_X (s) = (0.7 + 0.3 e^{-3s})(0.5 + 0.5 e^{2s}) (0.8 + 0.2 e^{4s}) =$ $0.12 e^{-3s} + 0.12 e^{-s} + 0.28 + 0.03 e^{s} + 0.28 e^{2s} + 0.03 e^{3s} + 0.07 e^{4s} + 0.07 e^{6s}$ The distribution is $X =$ [-3 -1 0 1 2 3 4 6] $PX =$ [0.12 0.12 0.28 0.03 0.28 0.03 0.07 0.07] c = [-3 2 4 0]; P = 0.1*[3 5 2]; canonic Enter row vector of coefficients c Enter row vector of minterm probabilities minprob(P) Use row matrices X and PX for calculations Call for XDBN to view the distribution P1 = [0.7 0.3]; P2 = [0.5 0.5]; P3 = [0.8 0.2]; X1 = [0 -3]; X2 = [0 2]; X3 = [0 4]; [x,px] = mgsum3(X1,X2,X3,P1,P2,P3); disp([X;PX;x;px]') -3.0000 0.1200 -3.0000 0.1200 -1.0000 0.1200 -1.0000 0.1200 0 0.2800 0 0.2800 1.0000 0.0300 1.0000 0.0300 2.0000 0.2800 2.0000 0.2800 3.0000 0.0300 3.0000 0.0300 4.0000 0.0700 4.0000 0.0700 6.0000 0.0700 6.0000 0.0700 Exercise $14$ Suppose the pair $\{X, Y\}$ is independent, with both $X$ and $Y$ binomial. Use generating functions to show under what condition, if any, $X + Y$ is binomial. Answer Binomial iff both have same $p$, as shown below. $g_{X + Y} (s) = (q_1 + p_1 s)^n (q_2 + p_2s)^m = (q + ps)^{n + m}$ iff $p_1 = p_2$ Exercise $15$ Suppose the pair $\{X, Y\}$ is independent, with both $X$ and $Y$ Poisson. a. Use generating functions to show under what condition $X + Y$ is Poisson. b. What about $X - Y$? Justify your answer. Answer Always Poisson, as the argument below shows. $g_{X + Y} (s) = e^{\mu(s - 1)} e^{v(s - 1)} = e^{(\mu + v) (s - 1)}$ However, $Y$ ~ $X$ could have negative values. Exercise $16$ Suppose the pair $\{X, Y\}$ is independent, $Y$ is nonnegative integer-valued, $X$ is Poisson and $X + Y$ is Poisson. Use the generating functions to show that $Y$ is Poisson. Answer $E[X+Y] = \mu + v$, where $v = E[Y] > 0$, $g_X (s) = e^{\mu(s - 1)}$ and $g_{X + Y} (s) = g_X (s) g_Y (s) = e^{(\mu + s) (s - 1)$. Division by $g_X (s)$ gives $g_Y (s) = e^{v(s - 1)}$. Exercise $17$ Suppose the pair $\{X, Y\}$ is iid, binomial (6, 0.51). By the result of Exercise 13.4.14 $X + Y$ is binomial. Use mgsum to obtain the distribution for $Z = 2X + 4Y$. Does $Z$ have the binomial distribution? Is the result surprising? Examine the first few possible values for $Z$. Write the generating function for $Z$; does it have the form for the binomial distribution? Answer x = 0:6; px = ibinom(6,0.51,x); [Z,PZ] = mgsum(2*x,4*x,px,px); disp([Z(1:5);PZ(1:5)]') 0 0.0002 % Cannot be binomial, since odd values missing 2.0000 0.0012 4.0000 0.0043 6.0000 0.0118 8.0000 0.0259 - - - - - - - - $g_X (s) = g_Y (s) = (0.49 + 0.51s)^6$ $g_Z (s) = (0.49 + 0.51s^2)^6 (0.49 + 0.51s^4)^6$ Exercise $18$ Suppose the pair $\{X, Y\}$ is independent, with $X$ ~ binomial (5, 0.33) and $Y$ ~ binomial (7, 0.47). Let $G = g(X) = 3X^2 - 2X$ and $H = h(Y) = 2Y^2 + Y + 3$. a. Use the mgsum to obtain the distribution for $G + H$. b. Use icalc and csort to obtain the distribution for $G + H$ and compare with the result of part (a). Answer X = 0:5; Y = 0:7; PX = ibinom(5,0.33,X); PY = ibinom(7,0.47,Y); G = 3*X.^2 - 2*X; H = 2*Y.^2 + Y + 3; [Z,PZ] = mgsum(G,H,PX,PY); icalc Enter row matrix of X-values X Enter row matrix of Y-values Y Enter X probabilities PX Enter Y probabilities PY Use array operations on matrices X, Y, PX, PY, t, u, and P M = 3*t.^2 - 2*t + 2*u.^2 + u + 3; [z,pz] = csort(M,P); e = max(abs(pz - PZ)) % Comparison of p values e = 0 Exercise $19$ Suppose the pair $\{X, Y\}$ is independent, with $X$ ~ binomial (8, 0.39) and $Y$ ~ uniform on {-1.3, -0.5, 1.3, 2.2, 3.5}. 
Let $U = 3X^2 - 2X + 1$ and $V = Y^3 + 2Y - 3$ a. Use mgsum to obtain the distribution for $U + V$. b. Use icalc and csort to obtain the distribution for $U + V$ and compare with the result of part (a). Answer X = 0:8; Y = [-1.3 -0.5 1.3 2.2 3.5]; PX = ibinom(8,0.39,X); PY = (1/5)*ones(1,5); U = 3*X.^2 - 2*X + 1; V = Y.^3 + 2*Y - 3; [Z,PZ] = mgsum(U,V,PX,PY); icalc Enter row matrix of X-values X Enter row matrix of Y-values Y Enter X probabilities PX Enter Y probabilities PY Use array operations on matrices X, Y, PX, PY, t, u, and P M = 3*t.^2 - 2*t + 1 + u.^3 + 2*u - 3; [z,pz] = csort(M,P); e = max(abs(pz - PZ)) e = 0 Exercise $20$ If $X$ is a nonnegative integer-valued random variable, express the generating function as a power series. a. Show that the $k$th derivative at $s = 1$ is $g_X^{(k)} (1) = E[X(X - 1)(X - 2) \cdot \cdot \cdot (X - k + 1)]$ b. Use this to show the $\text{Var} [X] = g_X''(1) + g_X'(1) - [g_X'(1)]^2$. Answer Since power series may be differentiated term by term $g_X^{(n)} (s) = \sum_{k = 0}^{\infty} k (k - 1) \cdot (k - n + 1) p_k s^{k - n}$ so that $g_X^{(n)} (1) = \sum_{k = 0}^{\infty} k(k - 1) \cdot (k - n + 1) p_k = E[X(X - 1) \cdot\cdot\cdot (X - n + 1)]$ $\text{Var} [X] = E[X^2] - E^2[X] = E[X(X - 1)] + E[X] - E^2[X] = g_X''(1) + g_X' (1) - [g_X'(1)]^2$ Exercise $21$ Let $M_X (\cdot)$ be the moment generating function for $X$. a. Show that $\text{Var}[X]$ is the second derivative of $e^{-s\mu} M_X(s)$ evaluated at $s = 0$. b. Use this fact to show that $X$ ~ $N(\mu, \sigma^2)$, then $\text{Var} [X] = \sigma^2$. Answer $f(s) = e^{-s \mu} M_X (s)$ $f''(s) = e^{-s\mu} [-\mu M_X' (s) + \mu^2 M_X (s) + M_X''(s) - \mu M_X'(s)]$ Setting $s = 0$ and using the result on moments gives $f''(0) = -\mu^2 + \mu^2 + E[X^2] - \mu^2 = \text{Var} [X]$ Exercise $22$ Use derivatives of $M_{M_m} (s)$ to obtain the mean and variance of the negative binomial ($m, p$) distribution. Answer To simplify writing use $f(s)$ for $M_X (S)$. $f(s) = \dfrac{p^m}{(1 - qe^s)^m}$ $f'(s) = \dfrac{mp^mqe^s}{(1 - qe^s)^{m + 1}}$ $f''(s) = \dfrac{mp^m qe^s}{1 - qe^s)^{m + 1}} + \dfrac{m(m+1) p^m q^2 e^{2s}}{1 - qe^s)^{m + 2}}$ $E[X] = \dfrac{mp^m q}{(1 - q)^{m + 1}} = \dfrac{mq}{p}$ $E[X^2] = \dfrac{mq}{p} + \dfrac{m(m+1)p^mq^2}{(1-q)^{m + 2}}$ $\text{Var} [X] = \dfrac{mq}{p} + \dfrac{m(m + 1) q^2}{p^2} - \dfrac{m^2 q^2}{p^2} = \dfrac{mq}{p^2}$ Exercise $23$ Use moment generating functions to show that variances add for the sum or difference of independent random variables. Answer To simplify writing, set $f(s) = M_X (s)$, $g(s) = M_Y (s)$, and $h(s) = M_X (s) M_Y(s)$ $h'(s) = f'(s) g(s) + f(s) g'(s)$ $h''(s) = f''(s) g(s) + f'(s) g'(s) + f'(s) g'(s) + f(s) g''(s)$ Setting $s = 0$ yields $E[X + Y] = E[X] + E[Y]$ $E[(X + Y)^2] = E[X^2] + 2E[X]E[Y] + E[Y^2]$ $E^2 [X + Y] = E^2[X] + 2E[X] E[Y] + E^2[Y]$ Taking the difference gives $\text{Var}[X + Y] = \text{Var} [X] + \text{Var} [Y]$. A similar treatment with $g(s)$ replaced by $g(-s)$ shows $\text{Var} [X - Y] = \text{Var} [X] + \text{Var} [Y]$. Exercise $24$ The pair $\{X, Y\}$ is iid $N$(3,5). Use the moment generating function to show that $Z = 2X - 2Y + 3$ is normal (see Example 3 from "Transform Methods" for general result). 
Answer

$M_{3X} (s) = M_X (3s) = \text{exp} (\dfrac{9 \cdot 5s^2}{2} + 3 \cdot 3s)$   $M_{-2Y} (s) = M_Y(-2s) = \text{exp} (\dfrac{4 \cdot 5s^2}{2} - 2 \cdot 3s)$

$M_Z (s) = e^{3s} \text{exp} (\dfrac{(45 + 20)s^2}{2} + (9 - 6) s) = \text{exp} (\dfrac{65s^2}{2} + 6s)$

Exercise $25$

Use the central limit theorem to show that for large enough sample size (usually 20 or more), the sample average $A_n = \dfrac{1}{n} \sum_{i = 1}^{n} X_i$ is approximately $N(\mu, \sigma^2/n)$ for any reasonable population distribution having mean value $\mu$ and variance $\sigma^2$.

Answer

$E[A_n] = \dfrac{1}{n} \sum_{i = 1}^{n} \mu = \mu$   $\text{Var} [A_n] = \dfrac{1}{n^2} \sum_{i = 1}^{n} \sigma^2 = \dfrac{\sigma^2}{n}$

By the central limit theorem, $A_n$ is approximately normal, with the mean and variance above.

Exercise $26$

A population has standard deviation approximately three. It is desired to determine the sample size $n$ needed to ensure that with probability 0.95 the sample average will be within 0.5 of the mean value.

1. Use the Chebyshev inequality to estimate the needed sample size.
2. Use the normal approximation to estimate $n$ (see Example 1 from "Simple Random Samples and Statistics").

Answer

Chebyshev inequality: $P(\dfrac{|A_n - \mu|}{\sigma/\sqrt{n}} \ge \dfrac{0.5 \sqrt{n}}{3}) \le \dfrac{3^2}{0.5^2 n} \le 0.05$ implies $n \ge 720$

Normal approximation: Use of the table in Example 1 from "Simple Random Samples and Statistics" shows $n \ge (3/0.5)^2 \cdot 3.84 \approx 138.3$, so $n \ge 139$
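As a quick check of the two estimates above (this check is not part of the original solution, and only built-in MATLAB functions are used), both bounds can be evaluated directly. The factor 3.84 is $1.96^2$, the squared two-sided 95% point of the standard normal distribution.

sigma = 3;                        % population standard deviation
d = 0.5;                          % desired half-width about the mean
alpha = 0.05;                     % 1 - 0.95
n_cheb = sigma^2/(d^2*alpha)      % Chebyshev bound: 720
z = sqrt(2)*erfinv(1 - alpha);    % two-sided 95% point, approximately 1.96
n_norm = ceil((sigma*z/d)^2)      % normal approximation: 139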
Conditional expectation, given a random vector, plays a fundamental role in much of modern probability theory. Various types of “conditioning” characterize some of the more important random sequences and processes. The notion of conditional independence is expressed in terms of conditional expectation. Conditional independence plays an essential role in the theory of Markov processes and in much of decision theory.

We first consider an elementary form of conditional expectation with respect to an event. Then we consider two highly intuitive special cases of conditional expectation, given a random variable. In examining these, we identify a fundamental property which provides the basis for a very general extension. We discover that conditional expectation is a random quantity. The basic property for conditional expectation and properties of ordinary expectation are used to obtain four fundamental properties which imply the “expectation-like” character of conditional expectation. An extension of the fundamental property leads directly to the solution of the regression problem which, in turn, gives an alternate interpretation of conditional expectation.

Conditioning by an event

If a conditioning event $C$ occurs, we modify the original probabilities by introducing the conditional probability measure $P(\cdot |C)$. In making the change from $P(A)$ to $P(A|C) = \dfrac{P(AC)}{P(C)}$ we effectively do two things:

• We limit the possible outcomes to event $C$
• We “normalize” the probability mass by taking $P(C)$ as the new unit

It seems reasonable to make a corresponding modification of mathematical expectation when the occurrence of event $C$ is known. The expectation $E[X]$ is the probability weighted average of the values taken on by $X$. Two possibilities for making the modification are suggested.

• We could replace the prior probability measure $P(\cdot)$ with the conditional probability measure $P(\cdot|C)$ and take the weighted average with respect to these new weights.
• We could continue to use the prior probability measure $P(\cdot)$ and modify the averaging process as follows:
• Consider the values $X(\omega)$ for only those $\omega \in C$. This may be done by using the random variable $I_C X$, which has value $X(\omega)$ for $\omega \in C$ and zero elsewhere. The expectation $E[I_C X]$ is the probability weighted sum of those values taken on in $C$.
• The weighted average is obtained by dividing by $P(C)$.

These two approaches are equivalent. For a simple random variable $X = \sum_{k = 1}^{n} t_k I_{A_k}$ in canonical form

$E[I_C X]/P(C) = \sum_{k = 1}^{n} E[t_k I_C I_{A_k}] /P(C) = \sum_{k = 1}^{n} t_k P(CA_k) /P(C) = \sum_{k = 1}^{n} t_k P(A_k |C)$

The final sum is expectation with respect to the conditional probability measure. Arguments using basic theorems on expectation and the approximation of general random variables by simple random variables allow an extension to a general random variable $X$. The notion of a conditional distribution, given $C$, and taking weighted averages with respect to the conditional probability is intuitive and natural in this case. However, this point of view is limited. In order to display a natural relationship with the more general concept of conditioning with respect to a random vector, we adopt the following

Definition

The conditional expectation of $X$, given event $C$ with positive probability, is the quantity $E[X|C] = \dfrac{E[I_C X]}{P(C)} = \dfrac{E[I_C X]}{E[I_C]}$

Remark. The product form $E[X|C] P(C) = E[I_C X]$ is often useful.
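As a small illustration (not in the original text), the following base-MATLAB sketch checks numerically that the two approaches described above agree for a simple random variable; the values, the probabilities, and the event $C$ are made up for the example.

X  = [-2 0 1 3 5];                 % values t_k of a simple random variable
PX = [0.1 0.2 0.3 0.25 0.15];      % P(X = t_k)
C  = X >= 1;                       % conditioning event C = {X >= 1}
PC = sum(PX(C));                   % P(C)
EXC1 = sum(X(C).*PX(C))/PC         % E[I_C X]/P(C)
EXC2 = sum(X(C).*(PX(C)/PC))       % weighted average with P(.|C); same value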
Example $1$ A numerical example Suppose $X$ ~ exponential ($\lambda$) and $C = \{1/\lambda \le X \le 2/\lambda\}$. Now $I_C = I_M (X)$ where $M = [1/\lambda, 2/\lambda]$. $P(C) = P(X \ge 1/\lambda) - P(X > 2/\lambda) = e^{-1} e^{-2}$ and $E[I_C X] = \int I_M (t) t \lambda e^{-\lambda t}\ dt = \int_{1/\lambda}^{2/\lambda} t\lambda e^{-\lambda t}\ dt = \dfrac{1}{\lambda} (2e^{-1} - 3e^{-2})$ Thus $E[X|C] = \dfrac{2e^{-1} - 3e^{-2}}{\lambda (e^{-1} - e^{-2})} \approx \dfrac{1.418}{\lambda}$ Conditioning by a random vector—discrete case Suppose $X = \sum_{i = 1}^{n} t_i I_{A_i}$ and $Y = \sum_{j = 1}^{m} u_j I_{B_j}$ in canonical form. We suppose $P(A_i) = P(X = t_i) > 0$ and $P(B_j) = P(Y = u_j) > 0$, for each permissible $i, j$. Now $P(Y = u_j |X = t_i) = \dfrac{P(X = t_i, Y = u_j)}{P(X = t_i)}$ We take the expectation relative to the conditional probability $P(\cdot |X = t_i)$ to get $E[g(Y) |X =t_i] = \sum_{j = 1}^{m} g(u_j) P(Y = u_j |X = t_i) = e(t_i)$ Since we have a value for each $t_i$ in the range of $X$, the function $e(\cdot)$ is defined on the range of $X$. Now consider any reasonable set $M$ on the real line and determine the expectation $E[I_M (X) g(Y)] = \sum_{i = 1}^{n} \sum_{j = 1}^{m} I_M (t_i) g(u_j) P(X = t_i, Y = u_j)$ $= \sum_{i = 1}^{n} I_M(t_i) [\sum_{j = 1}^{m} g(u_j) P(Y = u_j|X = t_i)] P(X = t_i)$ $= \sum_{i = 1}^{n} I_M (t_i) e(t_i) P(X = t_i) = E[I_M (X) e(X)]$ We have the pattern $(A)$ $E[I_M(X) g(Y)] = E[I_M(X) e(X)]$ where $e(t_i) = E[g(Y)|X = t_i]$ for all $t_i$ in the range of $X$. We return to examine this property later. But first, consider an example to display the nature of the concept. Example $2$ Basic calculations and interpretation Suppose the pair $\{X, Y\}$ has the joint distribution $P(X = t_i, Y = u_j)$ $X =$ 0 1 4 9 $Y = 2$ 0.05 0.04 0.21 0.15 0 0.05 0.01 0.09 0.10 -1 0.10 0.05 0.10 0.05 $PX$ 0.20 0.10 0.40 0.30 Calculate $E[Y|X = t_i]$ for each possible value $t_i$ taken on by $X$ $E[Y|X = 0] = -1 \dfrac{0.10}{0.20} + 0 \dfrac{0.05}{0.20} + 2 \dfrac{0.05}{0.20}$ $= (-1 \cdot 0.10 + 0 \cdot 0.05 + 2 \cdot 0.05)/0.20 = 0$ $E[Y|X = 1] = (-1 \cdot 0.05 + 0 \cdot 0.01 + 2 \cdot 0.04)/0.10 = 0.30$ $E[Y|X = 4] = (-1 \cdot 0.10 + 0 \cdot 0.09 + 2 \cdot 0.21)/0.40 = 0.80$ $E[Y|X = 9] = (-1 \cdot 0.05 + 0 \cdot 0.10 + 2 \cdot 0.15)/0.10 = 0.83$ The pattern of operation in each case can be described as follows: • For the $i$ th column, multiply each value $u_j$ by $P(X = t_i, Y = u_j)$, sum, then divide by $P(X = t_i)$. The following interpretation helps visualize the conditional expectation and points to an important result in the general case. • For each $t_i$ we use the mass distributed “above” it. This mass is distributed along a vertical line at values $u_j$ taken on by $Y$. The result of the computation is to determine the center of mass for the conditional distribution above $t = t_i$. As in the case of ordinary expectations, this should be the best estimate, in the mean-square sense, of $Y$ when $X = ti$. We examine that possibility in the treatment of the regression problem in Section: The regression problem. Although the calculations are not difficult for a problem of this size, the basic pattern can be implemented simply with MATLAB, making the handling of much larger problems quite easy. This is particularly useful in dealing with the simple approximation to an absolutely continuous pair. 
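For readers working without the textbook m-files, the column-by-column calculation just described can be reproduced with base MATLAB operations alone; the sketch below simply re-enters the table for Example 2 and should give the same four conditional means as the jcalc session that follows.

X = [0 1 4 9];                          % values of X
Y = [2 0 -1];                           % values of Y, in the order of the table rows
P = [0.05 0.04 0.21 0.15;               % P(X = t_i, Y = u_j), as in the table
     0.05 0.01 0.09 0.10;
     0.10 0.05 0.10 0.05];
PX  = sum(P,1);                         % column sums give P(X = t_i)
EYx = (Y*P)./PX;                        % sum_j u_j P(X = t_i, Y = u_j) / P(X = t_i)
disp([X;EYx]')                          % 0, 0.30, 0.80, 0.8333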
X = [0 1 4 9]; % Data for the joint distribution Y = [-1 0 2]; P = 0.01*[ 5 4 21 15; 5 1 9 10; 10 5 10 5]; jcalc % Setup for calculations Enter JOINT PROBABILITIES (as on the plane) P Enter row matrix of VALUES of X X Enter row matrix of VALUES of Y Y Use array operations on matrices X, Y, PX, PY, t, u, and P EYX = sum(u.*P)./sum(P); % sum(P) = PX (operation sum yields column sums) disp([X;EYX]') % u.*P = u_j P(X = t_i, Y = u_j) for all i, j 0 0 1.0000 0.3000 4.0000 0.8000 9.0000 0.8333 The calculations extend to $E[g(X, Y)|X = t_i]$. Instead of values of $u_j$ we use values of $g(t_i, u_j)$ in the calculations. Suppose $Z = g(X, Y) = Y^2 - 2XY$. G = u.^2 - 2*t.*u; % Z = g(X,Y) = Y^2 - 2XY EZX = sum(G.*P)./sum(P); % E[Z|X=x] disp([X;EZX]') 0 1.5000 1.0000 1.5000 4.0000 -4.0500 9.0000 -12.8333 Conditioning by a random vector — absolutely continuous case Suppose the pair $\{X, Y\}$ has joint density function $f_{XY}$. We seek to use the concept of a conditional distribution, given $X = t$. The fact that $P(X = t) = 0$ for each $t$ requires a modification of the approach adopted in the discrete case. Intuitively, we consider the conditional density $f_{Y|X} (u|t) \ge 0$, $\int f_{Y|X} (u|t)\ du = \dfrac{1}{f_X (t)} \int f_{XY} (t, u)\ du = f_X (t)/f_X (t) = 1$ We define, in this case, $E[g(Y)|X = t] = \int g(u) f_{Y|X} (u|t)\ du = e(t)$ The function $e(\cdot)$ is defined for $f_X (t) > 0$, hence effectively on the range of $X$. For any reasonable set $M$ on the real line, $E[I_M (X) g(Y)] = \int \int I_M (t) g(u) f_{XY} (t, u)\ dudt = \int I_M (t) [\int g(u) f_{Y|X} (u|t) \ du] f_X (u) \ dt$ $= \int I_M (t) e(t) f_X (t) \ dt$, where $e(t) = E[g(Y)| X = t]$ Thus we have, as in the discrete case, for each $t$ in the range of $X$. ($A$) $E[I_M(X) g(Y)] = E[I_M(X) e(X)]$ where $e(t) = E[g(Y)|X = t]$ Again, we postpone examination of this pattern until we consider a more general case. Example $3$ Basic calculation and interpretation Suppose the pair $\{X, Y\}$ has joint density $f_{XY} (t, u) = \dfrac{6}{5} (t + 2u)$ on the triangular region bounded by $t = 0$, $u = 1$, and $u = t$ (see Figure 14.1.1). Then $f_X (t) = \dfrac{6}{5} \int_{t}^{1} (t + 2u)\ du = \dfrac{6}{5} (1 + t - 2t^2)$, $0 \le t \le 1$ By definition, then, $f_{Y|X} (u|t) = \dfrac{t+2u}{1+t- 2t^2}$ on the triangle (zero elsewhere) We thus have $E[Y|X = t] = \int u f_{Y|X} (u|t)\ du = \dfrac{1}{1 + t - 2t^2} \int_{t}^{1} (tu + 2u^2)\ du = \dfrac{4 + 3t - 7t^3}{6(1 + t - 2t^2)}$ $(0 \le t < 1)$ Theoretically, we must rule out $t = 1$ since the denominator is zero for that value of $t$. This causes no problem in practice. Figure 14.1.1. The density function for Example 14.1.3 We are able to make an interpretation quite analogous to that for the discrete case. This also points the way to practical MATLAB calculations. • For any $t$ in the range of $X$ (between 0 and 1 in this case), consider a narrow vertical strip of width $\Delta t$ with the vertical line through $t$ at its center. If the strip is narrow enough, then $f_{XY} (t, u)$ does not vary appreciably with $t$ for any $u$. 
• The mass in the strip is approximately $\text{Mass} \approx \Delta t \int f_{XY} (t, u) \ du = \Delta t f_X (t)$ • The moment of the mass in the strip about the line $u = 0$ is approximately $\text{Momemt} \approx \Delta t \int u f_{XY} (t, u)\ du$ • The center of mass in the strip is $\text{Center of mass} = \dfrac{\text{Moment}}{\text{Mass}} \approx \dfrac{\Delta \int u f_{XY} (t, u) \ du}{\Delta t f_X (t)} = \int u f_{Y|X} (u|t)\ du = e(t)$ This interpretation points the way to the use of MATLAB in approximating the conditional expectation. The success of the discrete approach in approximating the theoretical value in turns supports the validity of the interpretation. Also, this points to the general result on regression in the section, "The Regression Problem". In the MATLAB handling of joint absolutely continuous random variables, we divide the region into narrow vertical strips. Then we deal with each of these by dividing the vertical strips to form the grid structure. The center of mass of the discrete distribution over one of the t chosen for the approximation must lie close to the actual center of mass of the probability in the strip. Consider the MATLAB treatment of the example under consideration. f = '(6/5)*(t + 2*u).*(u>=t)'; % Density as string variable tuappr Enter matrix [a b] of X-range endpoints [0 1] Enter matrix [c d] of Y-range endpoints [0 1] Enter number of X approximation points 200 Enter number of Y approximation points 200 Enter expression for joint density eval(f) % Evaluation of string variable Use array operations on X, Y, PX, PY, t, u, and P EYx = sum(u.*P)./sum(P); % Approximate values eYx = (4 + 3*X - 7*X.^3)./(6*(1 + X - 2*X.^2)); % Theoretical expression plot(X,EYx,X,eYx) % Plotting details (see Figure 14.1.2) — □ Figure 14.1.2. Theoretical and approximate conditional expectation for above. The agreement of the theoretical and approximate values is quite good enough for practical purposes. It also indicates that the interpretation is reasonable, since the approximation determines the center of mass of the discretized mass which approximates the center of the actual mass in each vertical strip. Extension to the general case Most examples for which we make numerical calculations will be one of the types above. Analysis of these cases is built upon the intuitive notion of conditional distributions. However, these cases and this interpretation are rather limited and do not provide the basis for the range of applications—theoretical and practical—which characterize modern probability theory. We seek a basis for extension (which includes the special cases). In each case examined above, we have the property $(A)$ $E[I_M (X) g(Y)] = E[I_M (X) e(X)]$ where $e(t) = E[g(Y) | X = t]$ for all $t$ in the range of $X$. We have a tie to the simple case of conditioning with respect to an event. If $C = \{X \in M\}$ has positive probability, then using $I_C = I_M (X)$ we have $(B)$ $E[I_M(X) g(Y)] = E[g(Y)|X \in M] P(X \in M)$ wo properties of expectation are crucial here: By the uniqueness property (E5), since (A) holds for all reasonable (Borel) sets, then $e(X)$ is unique a.s. (i.e., except for a set of $\omega$ of probability zero). By the special case of the Radon Nikodym theorem (E19), the function $e(\cdot)$ always exists and is such that random variable $e(X)$ is unique a.s. We make a definition based on these facts. Definition The conditional expectation $E[g(Y)| Y =t] = e(t)$ is the a.s. 
unique function defined on the range of $X$ such that $(A)$ $E[I_M (X) g(Y)] = E[I_M(X) e(X)]$ for all Borel sets $M$ Note that $e(X)$ is a random variable and $e(\cdot)$ is a function. Expectation $E[g(Y)]$ is always a constant. The concept is abstract. At this point it has little apparent significance, except that it must include the two special cases studied in the previous sections. Also, it is not clear why the term conditional expectation should be used. The justification rests in certain formal properties which are based on the defining condition (A) and other properties of expectation. In Appendix F we tabulate a number of key properties of conditional expectation. The condition (A) is called property (CE1). We examine several of these properties. For a detailed treatment and proofs, any of a number of books on measure-theoretic probability may be consulted. (CE1) Defining condition. $e(X) = E[g(Y)|X]$ a.s. iff $E[I_M (X) g(Y)] = E[I_M (X) e(X)]$ for each Borel set $M$ on the codomain of $X$ Note that $X$ and $Y$ do not need to be real valued, although $g(Y)$ is real valued. This extension to possible vector valued $X$ and $Y$ is extremely important. The next condition is just the property (B) noted above. (CE1a) If $P(X \in M) > 0$, then $E[I_M(X) e(X)] = E[g(Y)|X \in M] P(X \in M)$ The special case which is obtained by setting $M$ to include the entire range of $X$ so that $I_M (X(\omega)) = 1$ for all $\omega$ is useful in many theoretical and applied problems. (CE1b) Law of total probability. $E[g(Y)] = E\{E[g(Y)|X]\}$ It may seem strange that we should complicate the problem of determining $E[g(Y)]$ by first getting the conditional expectation $e(X) = E[g(Y)|X]$ then taking expectation of that function. Frequently, the data supplied in a problem makes this the expedient procedure. Exercise $4$ Use of the law of total probability Suppose the time to failure of a device is a random quantity $X$ ~ exponential ($\mu$), where the parameter $u$ is the value of a parameter random variable $H$. Thus $f_{X|H} (t|u) = u e^{-ut}$ for $t \ge 0$ If the parameter random variable $H$ ~ uniform $(a, b)$, determine the expected life $E[X]$ of the device. Solution We use the law of total probability: $E[X] = E\{E[X|H]\} = \int E[X|H = u] f_H (u)\ du$ Now by assumption $E[X|H = u] = 1/u$ and $f_H (u) = \dfrac{1}{b - a}$, $a < u < b$ Thus $E[X] = \dfrac{1}{b -a} \int_{a}^{b} \dfrac{1}{u} du = \dfrac{\text{ln} (b/a)}{b - a}$ For $a =1/100$, $b = 2/100$, $E[X] = 100 \text{ln} (2) \approx 69.31$. The next three properties, linearity, positivity/monotonicity, and monotone convergence, along with the defining condition provide the “expectation like” character. These properties for expectation yield most of the other essential properties for expectation. A similar development holds for conditional expectation, with some reservation for the fact that $e(X)$ is a random variable, unique a.s. This restriction causes little problem for applications at the level of this treatment. In order to get some sense of how these properties root in basic properties of expectation, we examine one of them. (CE2) Linearity. For any constants $a, b$ $E[ag(Y) + bh(Z) |X] = aE[g(Y)|X] + bE[h(Z)|X]$ a.s. VERIFICATION Let $e_1 (X) = E[g(Y)|X]$, $e_2 [X] = E[h(Z)|X]$, and $e(X) = E[ag(Y) + bh (Z) |X]$ a.s. 
$\begin{array} {lcrlc} {E[I_M (X) e(X)]} & = & {E\{I_M(X)[ag(Y) + bh(Z)]\} \text{ a.s.}} & & {\text{by(CE1)}} \ {} & = & {aE[I_M (X)g(Y)] + bE[I_M(X) h(Z)] \text{ a.s.}} & & {\text{by linearity of expectation}} \ {} & = & {aE[I_M (X)e_1(X)] + bE[I_M(X) e_2(X)] \text{ a.s.}} & & {\text{by (CE1)}} \ {} & = & {E\{I_M(X) [ae_1 (X) + be_2 (X)]\} \text{ a.s.}} & & {\text{by linearity of expectation}}\end{array}$ Since the equalities hold for any Borel $M$, the uniqueness property (E5) for expectation implies $e(X)= ae_1 (X) = be_2 (X)$ a.s. This is property (CE2). An extension to any finite linear combination is easily established by mathematical induction. — □ Property (CE5) provides another condition for independence. (CE5) Independence. $\{X, Y\}$ is an independent pair iff $E[g(Y)|X] = E[g(Y)]$ a.s. for all Borel functions $g$ iff $E[I_N(Y)|X] = E[I_N (Y)]$ a.s. for all Borel sets $N$ on the codomain of $Y$ Since knowledge of $X$ does not affect the likelihood that $Y$ will take on any set of values, then conditional expectation should not be affected by the value of $X$. The resulting constant value of the conditional expectation must be $E[g(Y)]$ in order for the law of total probability to hold. A formal proof utilizes uniqueness (E5) and the product rule (E18) for expectation. Property (CE6) forms the basis for the solution of the regresson problem in the next section. (CE6) $e(X) = E[g(Y)|X]$ a.s. iff $E[h(X) g(Y)] = E[h(X)e(X)]$ a.s. for any Borel function $h$ Examination shows this to be the result of replacing $I_M (X)$ in (CE1) with arbitrary $h(X)$. Again, Again, to get some insight into how the various properties arise, we sketch the ideas of a proof of (CE6). IDEAS OF A PROOF OF (CE6) For $h(X) = I_M(X)$, this is (CE1). For $h(X) = \sum_{i = 1}^{n} a_i I_{M_i} (X)$, the result follows by linearity. For $h \ge 0$, $g \ge 0$, there is a seqence of nonnegative, simple $h_n nearrow h$. Now by positivity, $e(X) \ge 0$. By monotone convergence (CE4), $E[h_n (X) g(Y)] \nearrow E[h(X) g(Y)]$ and $E[h_n(X) e(X)] \nearrow E[h(X) e(X)]$ Since corresponding terms in the sequences are equal, the limits are equal. For $h = h^{+} - h^{-}$, $g \ge 0$, the result follows by linearity (CE2). For $g = g^{+} - g^{-}$, the result again follows by linearity. — □ Properties (CE8) and (CE9) are peculiar to conditional expectation. They play an essential role in many theoretical developments. They are essential in the study of Markov sequences and of a class of random sequences known as submartingales. We list them here (as well as in Appendix F) for reference. (CE8) $E[h(X) g(Y)|X] = h(X) E[g(Y)|X]$ a.s. for any Borel function $h$ This property says that any function of the conditioning random vector may be treated as a constant factor. This combined with (CE10) below provide useful aids to computation. (CE9) Repeated conditioning If $X = h(W)$, then $E\{E[g(Y)|X|W\} = E\{E[g(Y)|W|X\} = E[g(Y)|X]$ a.s. This somewhat formal property is highly useful in many theoretical developments. We provide an interpretation after the development of regression theory in the next section. The next property is highly intuitive and very useful. It is easy to establish in the two elementary cases developed in previous sections. Its proof in the general case is quite sophisticated. (CE10) Under conditions on $g$ that are nearly always met in practice $E[g(X, Y)|X = t] = E[g(t, Y)|X = t]$ a.s. $[P_X]$ If $\{X, Y\}$ is independent, then $E[g(X, Y) |X = t] = E[g(t, Y)]$ a.s. 
$[P_X]$ It certainly seem reasonable to suppose that if $X = t$, then we should be able to replace $X$ by $t$ in $E[g(X, Y)| X =t]$ to get $E[g(t, Y)|X =t]$. Property (CE10) assures this. If $\{X, Y\}$ is an independent pair, then the value of $X$ should not affect the value of $Y$, so that $E[g(t, Y)|X = t] = E[g(t, Y)]$ a.s. Example $5$ Use of property (CE10) Consider again the distribution for Example 14.1.3. The pair $\{X, Y\}$ has density $f_{XY} (t, u) = \dfrac{6}{5} (t + 2u)$ on the triangular region bounded by $t = 0$, $u = 1$, and $u = t$ We show in Example 14.1.3 that $E[Y|X = t] = \dfrac{4 + 3t - 7 t^3}{6(1 + t - 2t^2)}$ $0 \le t < 1$ Let $Z = 3X^2 + 2XY$. Determine $E[Z|X = t]$. Solution By linearity, (CE8), and (CE10) $E[Z|X = t] = 3t^2 + 2tE[Y|X =t] = 3t^2 + \dfrac{4t + 3t^2 - 7t^4}{3(1 + t - 2t^2)}$ Conditional probability In the treatment of mathematical expectation, we note that probability may be expressed as an expectation $P(E) = E[I_E]$ For conditional probability, given an event, we have $E[I_E|C] = \dfrac{E[I_E I_C]}{P(C)} = \dfrac{P(EC)}{P(C)} = P(E|C)$ In this manner, we extend the concept conditional expectation. Definition The conditional probability of event $E$, given $X$, is $P(E|X) = E[I_E|X]$ Thus, there is no need for a separate theory of conditional probability. We may define the conditional distribution function $F_{Y|X} (u|X) = P(Y \le u|X) = E[I_{(-\infty, u]} (Y)|X]$ Then, by the law of total probability (CE1b), $F_Y (u) = E[F_{Y|X} (u|X)] = \int F_{Y|X} (u|t) F_X (dt)$ If there is a conditional density $f_{Y|X}$ such that $P(Y \in M|X = t) = \int_M f_{Y|X} (r|t)\ dr$ then $F_{Y|X} (u|t) = \int_{-\infty}^{u} f_{Y|X} (r|t)\ dr$ so that $f_{Y|X} (u|t) = \dfrac{\partial}{\partial u} F_{Y|X} (u|t)$ A careful, measure-theoretic treatment shows that it may not be true that $F_{Y|X} (\cdot |t)$ is a distribution function for all $t$ in the range of $X$. However, in applications, this is seldom a problem. Modeling assumptions often start with such a family of distribution functions or density functions. Example $6$ The conditional distribution function As in Example 14.1.4, suppose $X$ ~ exponential $(u)$, where the parameter $u$ is the value of a parameter random variable $H$. If the parameter random variable $H$ ~ uniform $(a, b)$, determine the distribution fuction $F_X$. Solution As in Example 14.1.4, take the assumption on the conditional distribution to mean $f_{X|H} (t|u) = ue^{-ut}$ $t \ge 0$ Then $F_{X|H} (t|u) = \int_{0}^{1} u e^{-us}\ ds = 1 - e^{-ut}$ $0 \le t$ By the law of total probability $F_X (t) = \int F_{X|H} (t|u) f_H (u) \ du = \dfrac{1}{b - a} \int_{a}^{b} (1 - e^{-ut}) \ du = 1 - \dfrac{1}{b - a} \int_{a}^{b} e^{-ut} \ du$ $= 1 - \dfrac{1}{t(b - a)} [e^{-bt} - e^{-at}]$ Differentiation with respect to $t$ yields the expression for $f_X (t)$ $f_X (t) = \dfrac{1}{b - a} [(\dfrac{1}{t^2} + \dfrac{b}{t}) e^{-bt} - (\dfrac{1}{t^2} + \dfrac{a}{t}) e^{-at}]$ $t > 0$ The following example uses a discrete conditional distribution and marginal distribution to obtain the joint distribution for the pair. Example $7$ A random number $N$ of Bernoulli trials A number $N$ is chosen by a random selection from the integers from 1 through 20 (say by drawing a card from a box). A pair of dice is thrown $N$ times. Let $S$ be the number of “matches” (i.e., both ones, both twos, etc.). Determine the joint distribution for $[N, S]$. Solution $N$ ~ uniform on the integers 1 through 20. $P(N = i) = 1/20$ for $1 \le i \le 20$. 
Since there are 36 pairs of numbers for the two dice and six possible matches, the probability of a match on any throw is 1/6. Since the $i$ throws of the dice constitute a Bernoulli sequence with probability 1/6 of a success (a match), we have $S$ conditionally binomial ($i$, 1/6), given $N = i$. For any pair $(i, j)$, $0 \le j \le i$, $P(N = i, S = j) = P(S = j|N = i) P(N = i)$ Now $E[S|N = i) = i/6$, so that $E[S] = \dfrac{1}{6} \cdot \dfrac{1}{20} \sum_{i = 1}^{20} i = \dfrac{20 \cdot 21}{6 \cdot 20 \cdot 2} = \dfrac{7}{4} = 1.75$ The following MATLAB procedure calculates the joint probabilities and arranges them “as on the plane.” % file randbern.m p = input('Enter the probability of success '); N = input('Enter VALUES of N '); PN = input('Enter PROBABILITIES for N '); n = length(N); m = max(N); S = 0:m; P = zeros(n,m+1); for i = 1:n P(i,1:N(i)+1) = PN(i)*ibinom(N(i),p,0:N(i)); end PS = sum(P); P = rot90(P); disp('Joint distribution N, S, P, and marginal PS') randbern % Call for the procedure Enter the probability of success 1/6 Enter VALUES of N 1:20 Enter PROBABILITIES for N 0.05*ones(1,20) Joint distribution N, S, P, and marginal PS ES = S*PS' ES = 1.7500 % Agrees with the theoretical value The regression problem We introduce the regression problem in the treatment of linear regression. Here we are concerned with more general regression. A pair $\{X, Y\}$ of real random variables has a joint distribution. A value $X(\omega)$ is observed. We desire a rule for obtaining the “best” estimate of the corresponding value $Y(\omega)$. If $Y(\omega)$ is the actual value and $r(X(\omega))$ is the estimate, then $Y(\omega) - r(X(\omega))$ is the error of estimate. The best estimation rule (function) $r(\cdot)$ is taken to be that for which the average square of the error is a minimum. That is, we seek a function $r$ such that $E[(Y - r(X))^2]$ is a minimum In the treatment of linear regression, we determine the best affine function, $u = at + b$. The optimum function of this form defines the regression line of $Y$ on $X$. We now turn to the problem of finding the best function $r$, which may in some cases be an affine function, but more often is not. We have some hints of possibilities. In the treatment of expectation, we find that the best constant to approximate a random variable in the mean square sense is the mean value, which is the center of mass for the distribution. In the interpretive Example 14.2.1 for the discrete case, we find the conditional expectation $E[Y|X = t_i]$ is the center of mass for the conditional distribution at $X = t_i$. A similar result, considering thin vertical strips, is found in Example 14.1.3 for the absolutely continuous case. This suggests the possibility that $e(t) = E[Y|X = t]$ might be the best estimate for $Y$ when the value $X(\omega) = t$ is observed. We investigate this possibility. The property (CE6) proves to be key to obtaining the result. Let $e(X) = E[Y|X]$. We may write (CE6) in the form $E[h(X) (Y - e(X))] = 0$ for any reasonable function $h$. Consider $E[(Y - r(X))^2] = E[(Y - e(X) + e(X) - r(X))^2]$ $= E[(Y - e(X))^2] + E[(e(X) - r(X))^2] + 2E[(Y - e(X))(r(X) - e(X))]$ Now $e(X)$ is fixed (a.s.) and for any choice of $r$ we may take $h(X) = r(X) - e(X)$ to assert that $E[Y - e(X)) (r(X) - e(X))] = E[(Y - e(X)) h(X)] = 0$ Thus $E[(Y - r(X))^2] = E[(Y - e(X))^2] + E[(e(X) - r(X))^2]$ The first term on the right hand side is fixed; the second term is nonnegative, with a minimum at zero iff $r(X) = e(X)$ a.s. Thus, $r = e$ is the best rule. 
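As a numerical sanity check (not part of the original development), the following base-MATLAB sketch reuses the discrete distribution of Example 2 and compares the mean-square error of the conditional-expectation estimator $e(X)$ with that of the best affine estimator, the regression line computed from the same distribution.

X = [0 1 4 9];  Y = [2 0 -1];                       % Example 2 data
P = [0.05 0.04 0.21 0.15; 0.05 0.01 0.09 0.10; 0.10 0.05 0.10 0.05];
[t,u] = meshgrid(X,Y);                              % value grids matching P
PX = sum(P,1);
e  = sum(u.*P,1)./PX;                               % e(t_i) = E[Y|X = t_i]
EX = sum(t(:).*P(:));  EY = sum(u(:).*P(:));
a  = (sum(t(:).*u(:).*P(:)) - EX*EY)/(sum(t(:).^2.*P(:)) - EX^2);
b  = EY - a*EX;                                     % regression line u = a*t + b
ec = repmat(e,length(Y),1);                         % e(X) on the grid
mse_curve = sum((u(:) - ec(:)).^2.*P(:))            % E[(Y - e(X))^2]
mse_line  = sum((u(:) - (a*t(:) + b)).^2.*P(:))     % never smaller than mse_curve

The error for the conditional-expectation curve is no larger than that for the line, as the decomposition above requires; the two agree only when the regression curve happens to be affine.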
For a given value $X(\omega) = t$ the best mean square esitmate of $Y$ is $u = e(t) = E[Y|X = t]$ The graph of $u = e(t)$ vs $t$ is known as the regression curve of Y on X. This is defined for argument $t$ in the range of $X$, and is unique except possibly on a set $N$ such that $P(X \in N) = 0$. Determination of the regression curve is thus determination of the conditional expectation. Example $8$ Regression curve for an independent pair If the pair $\{X, Y\}$ is independent, then $u = E[Y|X = t] = E[Y]$, so that the regression curve of $Y$ on $X$ is the horizontal line through $u = E[Y]$. This, of course, agrees with the regression line, since $\text{Cov} [X, Y] = 0$ and the regression line is $u = 0 = E[Y]$. The result extends to functions of $X$ and $Y$. Suppose $Z = g(X, Y)$. Then the pair $\{X, Z\}$ has a joint distribution, and the best mean square estimate of $Z$ given $X = t$ is $E[Z|X = t]$. Example $9$ Estimate of a function of $\{X, Y\}$ Suppose the pair $\{X, Y\}$ has joint density $f_{XY} (t, u) = 60t^2 u$ for $0 \le t \le 1$, $0 \le u \le 1 - t$. This is the triangular region bounded by $t = 0$, $u = 0$, and $u = 1 - t$ (see Figure 14.1.3). Integration shows that $f_X (t) = 30t^2 (1 - t)^2$, $0 \le t \le 1$ and $f_{Y|X} (u|t) = \dfrac{2u}{(1 - t)^2}$ on the triangle Consider $Z = \begin{cases} X^2 & \text{for } X \le 1/2 \ 2Y & \text{for } X > 1/2 \end{cases} = I_M(X) X^2 + I_N (X) 2Y$ where $M =$ [0, 1/2] and $N$ = (1/2, 1]. Determine $E[Z|X = t]$. Figure 14.1.3. The density function for Example 14.1.9. Solution By linearity and (CE8). $E[Z|X = t] = E[I_M (X) X^2||X = t] + E[I_N (X) 2Y||X = t] = I_M (t) t^2 + I_N (t) 2E[Y|X = t]$ Now $E[Y|X = t] = \int u f_{Y|X} (u|t) \ du = \dfrac{1}{(1 - t)^2} \int_{0}^{1 - t} 2u^2\ du = \dfrac{2}{3} \cdot \dfrac{(1 - t)^3}{(1 - t)^2} = \dfrac{2}{3} (1 - t)$, $0 \le t < 1$ so that $E[Z|X = t] = I_M (t) t^2 + I_N (t) \dfrac{4}{3} (1 - t)$ Note that the indicator functions separate the two expressions. The first holds on the interval $M =$ [0, 1/2] and the second holds on the interval $N =$ (1/2, 1]. The two expressions $t^2\0 and (4/3)\((1 - t)$ must not be added, for this would give an expression incorrect for all t in the range of $X$. APPROXIMATION tuappr Enter matrix [a b] of X-range endpoints [0 1] Enter matrix [c d] of Y-range endpoints [0 1] Enter number of X approximation points 100 Enter number of Y approximation points 100 Enter expression for joint density 60*t.^2.*u.*(u<=1-t) Use array operations on X, Y, PX, PY, t, u, and P G = (t<=0.5).*t.^2 + 2*(t>0.5).*u; EZx = sum(G.*P)./sum(P); % Approximation eZx = (X<=0.5).*X.^2 + (4/3)*(X>0.5).*(1-X); % Theoretical plot(X,EZx,'k-',X,eZx,'k-.') % Plotting details % See Figure 14.1.4 The fit is quite sufficient for practical purposes, in spite of the moderate number of approximation points. The difference in expressions for the two intervals of $X$ values is quite clear. Figure 14.1.4. Theoretical and approximate regression curves for Example 14.1.9 Example $10$ Estimate of a function of $\{X, Y\}$ Suppose the pair $\{X, Y\}$ has joint density $f_{XY} (t, u) = \dfrac{6}{5} (t^2 + u)$, on the unit square $0 \le t \le 1$, $0 \le u \le 1$ (see Figure 14.1.5). The usual integration shows $f_X (t) = \dfrac{3}{5} (2t^2 + 1)$, $0 \le t \le 1$, and $f_{Y|X} (u|t) = 2 \dfrac{t^2 + u}{2t^2 +1}$ on the square Consider $Z = \begin{cases} 2X^2 & \text{for } X \le Y \ 3XY & \text{for } X > Y \end{cases} I_Q (X, Y) 2X^2 + I_{Q^c} (X, Y) 3XY$, where $Q = \{(t, u): u \ge t\}$ Determine $E[Z|X = t]$. 
Solution $E[Z|X = t] = 2t^2 \int I_Q (t, u) f_{Y|X} (u|t) + 3t\int I_{Q^c} (t, u) u f_{Y|X} (u|t)\ du$ $= \dfrac{4t^2}{2t^2+1} \int_{t}^{1} (t^2 + u)\ du + \dfrac{6t}{2t^2 + 1} \int_{0}^{t} (t^2u + u^2)\ du = \dfrac{-t^5 + 4t^4 + 2t^2}{2t^2 + 1}$, $0 \le t \le 1$ Figure 14.1.5. The density and regions for Example 14.1.10 Note the different role of the indicator functions than in Example 14.1.9. There they provide a separation of two parts of the result. Here they serve to set the effective limits of integration, but sum of the two parts is needed for each $t$. Figure 14.1.6. Theoretical and approximate regression curves for Example 14.1.10 APPROXIMATION tuappr Enter matrix [a b] of X-range endpoints [0 1] Enter matrix [c d] of Y-range endpoints [0 1] Enter number of X approximation points 200 Enter number of Y approximation points 200 Enter expression for joint density (6/5)*(t.^2 + u) Use array operations on X, Y, PX, PY, t, u, and P G = 2*t.^2.*(u>=t) + 3*t.*u.*(u<t); EZx = sum(G.*P)./sum(P); % Approximate eZx = (-X.^5 + 4*X.^4 + 2*X.^2)./(2*X.^2 + 1); % Theoretical plot(X,EZx,'k-',X,eZx,'k-.') % Plotting details % See Figure 14.1.4 The theoretical and approximate are barely distinguishable on the plot. Although the same number of approximation points are use as in Figure 14.1.4 (Example 14.1.9), the fact that the entire region is included in the grid means a larger number of effective points in this example. Given our approach to conditional expectation, the fact that it solves the regression problem is a matter that requires proof using properties of of conditional expectation. An alternate approach is simply to define the conditional expectation to be the solution to the regression problem, then determine its properties. This yields, in particular, our defining condition (CE1). Once that is established, properties of expectation (including the uniqueness property (E5)) show the essential equivalence of the two concepts. There are some technical differences which do not affect most applications. The alternate approach assumes the second moment $E[X^2]$ is finite. Not all random variables have this property. However, those ordinarily used in applications at the level of this treatment will have a variance, hence a finite second moment. We use the interpretation of $e(X) = E[g(Y)|X]$ as the best mean square estimator of $g(Y)$, given $X$, to interpret the formal property (CE9). We examine the special form (CE9a) $E\{E[g(Y)|X]|X, Z\} = E\{E|g(Y)|X, Z]|X\} = E[g(Y)|X]$ Put $e_1 (X,Z) = E[g(Y)|X,Z]$, the best mean square estimator of $g(Y)$, given $(X, Z)$. Then (CE9b) can be expressed $E[e(X)|X, Z] = e(X)$ a.s. and $E[e_1 (X, Z)|X] = e(X)$ a.s. In words, if we take the best estimate of $g(Y)$, given $X$, then take the best mean sqare estimate of that, given $(X,Z)$, we do not change the estimate of $g(Y)$. On the other hand if we first get the best mean sqare estimate of $g(Y)$, given $(X, Z)$, and then take the best mean square estimate of that, given $X$, we get the best mean square estimate of $g(Y)$, given $X$.
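A small numerical check of the repeated-conditioning property may help fix the idea. The check is not in the original text; the joint probabilities below are made up, and only base MATLAB operations are used.

x = [0 1];  z = [0 1];  y = [0 1 2];        % values (illustrative)
p = zeros(2,2,3);                           % p(i,j,k) = P(X = x(i), Z = z(j), Y = y(k))
p(1,1,:) = [0.05 0.10 0.05];  p(1,2,:) = [0.10 0.05 0.15];
p(2,1,:) = [0.05 0.15 0.05];  p(2,2,:) = [0.05 0.10 0.10];
pXZ = sum(p,3);  pX = sum(pXZ,2);           % P(X = x(i), Z = z(j)) and P(X = x(i))
e1 = zeros(2,2);                            % E[Y|X,Z]
for i = 1:2
  for j = 1:2
    e1(i,j) = sum(squeeze(p(i,j,:))'.*y)/pXZ(i,j);
  end
end
eX = zeros(2,1);  erep = zeros(2,1);
for i = 1:2
  eX(i)   = sum(squeeze(sum(p(i,:,:),2))'.*y)/pX(i);   % E[Y|X = x(i)]
  erep(i) = sum(e1(i,:).*pXZ(i,:))/pX(i);              % E{E[Y|X,Z]|X = x(i)}
end
disp([eX erep])                             % the two columns agree, as (CE9) asserts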
For the distributions in Exercises 1-3 1. Determine the regression curve of $Y$ on $X$ and compare with the regression line of $Y$ on $X$. 2. For the function $Z = g(X, Y)$ indicated in each case, determine the regression curve of $Z$ on $X$. Exercise $1$ (See Exercise 17 from "Problems on Mathematical Expectation"). The pair $\{X, Y\}$ has the joint distribution (in file npr08_07.m): $P(X = t, Y = u)$ t = -3.1 -0.5 1.2 2.4 3.7 4.9 u = 7.5 0.0090 0.0396 0.0594 0.0216 0.0440 0.0203 4.1 0.0495 0 0.1089 0.0528 0.0363 0.0231 -2.0 0.0405 0.1320 0.0891 0.0324 0.0297 0.0189 -3.8 0.0510 0.0484 0.0726 0.0132 0 0.0077 The regression line of $Y$ on $X$ is $u = 0.5275 t + 0.6924$. $Z = X^2Y + |X + Y|$ Answer The regression line of $Y$ on $X$ is $u = 0.5275t + 0.6924$. npr08_07 Data are in X, Y, P jcalc - - - - - - - - - - - EYx = sum(u.*P)./sum(P); disp([X;EYx]') -3.1000 -0.0290 -0.5000 -0.6860 1.2000 1.3270 2.4000 2.1960 3.7000 3.8130 4.9000 2.5700 G = t.^2.*u + abs(t+u); EZx = sum(G.*P)./sum(P); disp([X;EZx]') -3.1000 4.0383 -0.5000 3.5345 1.2000 6.0139 2.4000 17.5530 3.7000 59.7130 4.9000 69.1757 Exercise $2$ (See Exercise 18 from "Problems on Mathematical Expectation"). The pair $\{X, Y\}$ has the joint distribution (in file npr08_08.m): $P(X = t, Y = u)$ t = 1 3 5 7 9 11 13 15 17 19 u = 12 0.0156 0.0191 0.0081 0.0035 0.0091 0.0070 0.0098 0.0056 0.0091 0.0049 10 0.0064 0.0204 0.0108 0.0040 0.0054 0.0080 0.0112 0.0064 0.0104 0.0056 9 0.0196 0.0256 0.0126 0.0060 0.0156 0.0120 0.0168 0.0096 0.0056 0.0084 5 0.0112 0.0182 0.0108 0.0070 0.0182 0.0140 0.0196 0.0012 0.0182 0.0038 3 0.0060 0.0260 0.0162 0.0050 0.0160 0.0200 0.0280 0.0060 0.0160 0.0040 -1 0.0096 0.0056 0.0072 0.0060 0.0256 0.0120 0.0268 0.0096 0.0256 0.0084 -3 0.0044 0.0134 0.0180 0.0140 0.0234 0.0180 0.0252 0.0244 0.0234 0.0126 -5 0.0072 0.0017 0.0063 0.0045 0.0167 0.0090 0.0026 0.0172 0.0217 0.0223 The regression line of $Y$ on $X$ is $u = -0.2584 t + 5.6110$. $Z = I_Q (X, Y) \sqrt{X} (Y - 4) + I_{Q^c} (X, Y) XY^2$ $Q = \{(t, u) : u \le t \}$ Answer The regression line of $Y$ on $X$ is $u = -0.2584 t + 5.6110$. npr08_08 Data are in X, Y, P jcalc - - - - - - - - - - - - EYx = sum(u.*P)./sum(P); disp([X;EYx]') 1.0000 5.5350 3.0000 5.9869 5.0000 3.6500 7.0000 2.3100 9.0000 2.0254 11.0000 2.9100 13.0000 3.1957 15.0000 0.9100 17.0000 1.5254 19.0000 0.9100 M = u<=t; G = (u-4).*sqrt(t).*M + t.*u.^2.*(1-M); EZx = sum(G.*P)./sum(P); disp([X;EZx]') 1.0000 58.3050 3.0000 166.7269 5.0000 175.9322 7.0000 185.7896 9.0000 119.7531 11.0000 105.4076 13.0000 -2.8999 15.0000 -11.9675 17.0000 -10.2031 19.0000 -13.4690 Exercise $3$ (See Exercise 19 from "Problems on Mathematical Expectation"). Data were kept on the effect of training time on the time to perform a job on a production line. $X$ is the amount of training, in hours, and $Y$ is the time to perform the task, in minutes. The data are as follows (in file npr08_09.m): $P(X = t, Y = u)$ t = 1 1.5 2 2.5 3 u = 5 0.039 0.011 0.005 0.001 0.001 4 0.065 0.070 0.050 0.015 0.010 3 0.031 0.061 0.137 0.051 0.033 2 0.012 0.049 0.163 0.058 0.039 1 0.003 0.009 0.045 0.025 0.017 The regression line of $Y$ on $X$ is $u = -0.7793t + 4.3051$. $Z = (Y -2.8)/X$ Answer The regression line of $Y$ on $X$ is $u = -0.7793t + 4.3051$. 
npr08_09 Data are in X, Y, P jcalc - - - - - - - - - - - - EYx = sum(u.*P)./sum(P); disp([X;EYx]') 1.0000 3.8333 1.5000 3.1250 2.0000 2.5175 2.5000 2.3933 3.0000 2.3900 G = (u - 2.8)./t; EZx = sum(G.*P)./sum(P); disp([X;EZx]') 1.0000 1.0333 1.5000 0.2167 2.0000 -0.1412 2.5000 -0.1627 3.0000 -0.1367 For the joint densities in Exercises 4-11 below 1. Determine analytically the regression curve of $Y$ on $X$ and compare with the regression line of $Y$ on $X$. 2. Check these with a discrete approximation. Exercise $4$ (See Exercise 10 from "Problems On Random Vectors and Joint Distributions", Exercise 20 from "Problems on Mathematical Expectation", and Exercise 23 from "Problems on Variance, Covariance, Linear Regression"). $f_{XY} (t, u) = 1$ for $0 \le t \le 1$. $0 \le u \le 2(1 - t)$. The regression line of $Y$ on $X$ is $u = 1 - t$. $f_X (t) = 2(1 - t)$, $0 \le t \le 1$ Answer The regression line of $Y$ on $X$ is $u = 1 - t$. $f_{Y|X} (u|t) = \dfrac{1}{2(1 - t)}$. $0 \le t \le 1$, $0 \le u \le 2(1 - t)$ $E[Y|X = t] = \dfrac{1}{2(1 - t)} \int_{0}^{2(1-t)} udu = 1 - t$, $0 \le t \le 1$ tuappr: [0 1] [0 2] 200 400 u<=2*(1-t) - - - - - - - - - - - - - EYx = sum(u.*P)./sum(P); plot(X,EYx) % Straight line thru (0,1), (1,0) Exercise $5$ (See Exercise 13 from " Problems On Random Vectors and Joint Distributions", Exercise 23 from "Problems on Mathematical Expectation", and Exercise 24 from "Problems on Variance, Covariance, Linear Regression"). $f_{XY} (t, u) = \dfrac{1}{8} (t+u)$ for $0 \le t \le 2$, $0 \le u \le 2$. The regression line of $Y$ on $X$ is $u = -t/11 + 35/33$. $f_{X} (t) = \dfrac{1}{4} (t + 1)$, $0 \le t \le 2$ Answer The regression line of $Y$ on $X$ is $u = -t/11 + 35/33$. $f_{Y|X} (u|t) = \dfrac{(t + u)}{2(t + 1)}$ $0 \le t \le 2$, $0 \le u \le 2$ $E[Y|X = t] = \dfrac{1}{2(t + 1)} \int_{0}^{2} (tu + u^2)\ du = 1 + \dfrac{1}{3t+3}$ $0 \le t \le 2$ tuappr: [0 2] [0 2] 200 200 (1/8)*(t+u) EYx = sum(u.*P)./sum(P); eyx = 1 + 1./(3*X+3); plot(X,EYx,X,eyx) % Plots nearly indistinguishable Exercise $6$ (See Exercise 15 from " Problems On Random Vectors and Joint Distributions", Exercise 25 from "Problems on Mathematical Expectation", and Exercise 25 from "Problems on Variance, Covariance, Linear Regression"). $f_{XY} (t, u) = \dfrac{3}{88} (2t + 3u^2)$ for $0 \le t \le 2$, $0 \le u \le 1 + t$. The regression line of $Y$ on $X$ is $u = 0.0958t + 1.4876$. $f_X (t) = \dfrac{3}{88} (1 + t) (1 + 4t + t^2) = \dfrac{3}{88} (1 + 5t + 5t^2 + t^3)$, $0 \le t \le 2$ Answer The regression line of $Y$ on $X$ is $u = 0.0958t + 1.4876$. $f_{Y|X} (u|t) = \dfrac{2t + 3u^2}{(1 + t)(1 + 4t + t^2)}$ $0 \le u \le 1 + t$ $E[Y|X = t] = \dfrac{1}{(1 + t) (1 + 4t + t^2)} \int_{0}^{1 + t} (2tu + 3u^3)\ du$ $= \dfrac{(t + 1)(t + 3) (3t+1)}{4(1 + 4t +t^2)}$, $0 \le t \le 2$ tuappr: [0 2] [0 3] 200 300 (3/88)*(2*t + 3*u.^2).*(u<=1+t) EYx = sum(u.*P)./sum(P); eyx = (X+1).*(X+3).*(3*X+1)./(4*(1 + 4*X + X.^2)); plot(X,EYx,X,eyx) % Plots nearly indistinguishable Exercise $7$ (See Exercise 16 from " Problems On Random Vectors and Joint Distributions", Exercise 26 from "Problems on Mathematical Expectation", and Exercise 26 from "Problems on Variance, Covariance, Linear Regression"). $f_{XY} (t, u) = 12t^2u$ on the parallelogram with vertices (-1, 0), (0, 0), (1, 1), (0, 1) The regression line of $Y$ on $X$ is $u = (4t + 5)/9$. $f_{X} (t) = I_{[-1, 0]} (t) 6t^2 (t + 1)^2 + I_{(0, 1]} (t) 6t^2 (1 - t^2)$ Answer The regression line of $Y$ on $X$ is $u = (23t + 4)/18$. 
$f_{Y|X} (u|t) = I_{[-1, 0]} (t) \dfrac{2u}{(t + 1)^2} + I_{(0, 1]} (t) \dfrac{2u}{(1 - t^2)}$ on the parallelogram $E[Y|X = t] = I_{[-1, 0]} (t) \dfrac{1}{(t + 1)^2} \int_{0}^{t + 1} 2u\ du + I_{(0, 1]} (t) \dfrac{1}{(1 - t^2)} \int_{t}^{1} 2u \ du$ $= I_{[-1, 0]} (t) \dfrac{2}{3} (t + 1) + I_{(0, 1]} (t) \dfrac{2}{3} \dfrac{t^2 + t + 1}{t + 1}$ tuappr: [-1 1] [0 1] 200 100 12*t.^2.*u.*((u<= min(t+1,1))&(u>=max(0,t))) EYx = sum(u.*P)./sum(P); M = X<=0; eyx = (2/3)*(X+1).*M + (2/3)*(1-M).*(X.^2 + X + 1)./(X + 1); plot(X,EYx,X,eyx) % Plots quite close Exercise $8$ (See Exercise 17 from " Problems On Random Vectors and Joint Distributions", Exercise 27 from "Problems on Mathematical Expectation", and Exercise 27 from "Problems on Variance, Covariance, Linear Regression"). $f_{XY} (t, u) = \dfrac{24}{11} tu$ for $0 \le t \le 2$, $0 \le u \le \text{min } \{1, 2 - t\}$. The regression line of $Y$ on $X$ is $u = (-124t + 368)/431$ $f_X (t) = I_{[0, 1]} (t) \dfrac{12}{11} t + I_{(1, 2]} (t) \dfrac{12}{11} t (2 - t)^2$ Answer The regression line of $Y$ on $X$ is $u = (-124t + 368)/431$ $f_{Y|X} (u|t) = I_{[0, 1]} (t) 2u + I_{(1, 2]} (t) \dfrac{2u}{(2 - t)^2}$ $E[Y|X = t] = I_{[0, 1]} (t) \int_{0}^{1} 2u^2 \ du + I_{(1, 2]} (t) \dfrac{1}{(2 - t)^2} \int_{0}^{2 - t} 2u^2 \ du$ $= I_{[0, 1]} (t) \dfrac{2}{3} + I_{(1, 2]} (t) \dfrac{2}{3} (2 - t)$ tuappr: [0 2] [0 1] 200 100 (24/11)*t.*u.*(u<=min(1,2-t)) EYx = sum(u.*P)./sum(P); M = X <= 1; eyx = (2/3)*M + (2/3).*(2 - X).*(1-M); plot(X,EYx,X,eyx) % Plots quite close Exercise $9$ (See Exercise 18 from " Problems On Random Vectors and Joint Distributions", Exercise 28 from "Problems on Mathematical Expectation", and Exercise 28 from "Problems on Variance, Covariance, Linear Regression"). $f_{XY} (t, u) = \dfrac{3}{23} (t + 2u)$ for $0 \le t \le 2$, $0 \le u \le \text{max } \{2 - t, t\}$. The regression line of $Y$ on $X$ is $u = 1.0561 t - 0.2603$. $f_X (t) = I_{[0, 1]} (t) \dfrac{6}{23} (2 - t) + I_{(1, 2]} (t) \dfrac{6}{23} t^2$ Answer The regression line of $Y$ on $X$ is $u = 1.0561 t - 0.2603$. $f_{Y|X} (u|t) = I_{[0, 1]} (t) \dfrac{t+2u}{2(2-t)} + I_{(1, 2]} (t) \dfrac{t + 2u}{2t^2}$ $0 \le u \le \text{max } (2 - t, t)$ $E[Y|X = t] = I_{[0, 1]} (t) \dfrac{1}{2(2 - t)} \int_{0}^{2 - t} (tu + 2u^2) \ du + I_{(1, 2]} (t) \dfrac{1}{2t^2} \int_{0}^{t} (tu + 2u^2)\ du$ $= I_{[0, 1]} (t) \dfrac{1}{12} (t - 2) ( t - 8) + I_{(1, 2]} (t) \dfrac{7}{12} t$ tuappr: [0 2] [0 2] 200 200 (3/23)*(t+2*u).*(u<=max(2-t,t)) EYx = sum(u.*P)./sum(P); M = X<=1; eyx = (1/12)*(X-2).*(X-8).*M + (7/12)*X.*(1-M); plot(X,EYx,X,eyx) % Plots quite close Exercise $10$ (See Exercise 21 from " Problems On Random Vectors and Joint Distributions", Exercise 31 from "Problems on Mathematical Expectation", and Exercise 29 from "Problems on Variance, Covariance, Linear Regression"). $f_{XY} (t, u) = \dfrac{2}{13} (t + 2u)$ for $0 \le t \le 2$, $0 \le u \le \text{min } \{2t, 3 - t\}$. The regression line of $Y$ on $X$ is $u = -0.1359 t + 1.0839$. $f_X (t) = I_{[0, 1]} (t) \dfrac{12}{13} t^2 + I_{(1, 2]} (t) \dfrac{6}{13} (3 - t)$ Answer The regression line of $Y$ on $X$ is $u = -0.1359 t + 1.0839$. 
$f_{Y|X} (t|u) = I_{[0, 1]} (t) \dfrac{t + 2u}{6t^2} + I_{(1,2]} (t) \dfrac{t + 2u}{3(3 - t)}$ $0 \le u \le \text{max } (2t, 3 - t)$ $E[Y|X = t] = I_{[0, 1]} (t) \dfrac{1}{6t^2} \int_{0}^{t} (tu + 2u^2)\ du + I_{(1, 2]} (t) \dfrac{1}{3(3 - t)} \int_{0}^{3 - t} (tu + 2u^2)\ du$ $= I_{[0, 1]} (t) \dfrac{11}{9} t + I_{(1, 2]} (t) \dfrac{1}{18} (t^2 - 15t + 36)$ tuappr: [0 2] [0 2] 200 200 (2/13)*(t+2*u).*(u<=min(2*t,3-t)) EYx = sum(u.*P)./sum(P); M = X<=1; eyx = (11/9)*X.*M + (1/18)*(X.^2 - 15*X + 36).*(1-M); plot(X,EYx,X,eyx) % Plots quite close Exercise $11$ (See Exercise 22 from " Problems On Random Vectors and Joint Distributions", Exercise 32 from "Problems on Mathematical Expectation", and Exercise 30 from "Problems on Variance, Covariance, Linear Regression"). $f_{XY} 9t, u) = I_{[0, 1]} (t) \dfrac{3}{8} (t^2 + 2u) + I_{(1, 2]} (t) \dfrac{9}{14} t^2u^2$. for $0 \le u \le 1$. The regression line of $Y$ on $X$ is $u = 0.0817t + 0.5989$. $f_X (t) = I_{[0, 1]} (t) \dfrac{3}{8} (t^2 + 1) + I_{(1, 2]} (t) \dfrac{3}{14} t^2$ Answer The regression line of $Y$ on $X$ is $u = 0.0817t + 0.5989$. $f_{Y|X} (t|u) = I_{[0, 1]} (t) \dfrac{t^2 + 2u}{t^2 + 1} + I_{(1, 2]} (t) 3u^2$ $0 \le u \le 1$ $E[Y|X = t] = I_{[0, 1]} (t) \dfrac{1}{t^2 + 1} \int_{0}^{1} (t^2u + 2u^2)\ du + I_{(1, 2]} (t) \int_{0}^{1} 3u^3 \ du$ $= I_{[0, 1]} (t) \dfrac{3t^2 + 4}{6(t^2 + 1)} + I_{(1, 2]} (t) \dfrac{3}{4}$ tuappr: [0 2] [0 1] 200 100 (3/8)*(t.^2 + 2*u).*(t<=1) + ... (9/14)*t.^2.*u.^2.*(t>1) EYx = sum(u.*P)./sum(P); M = X<=1; eyx = M.*(3*X.^2 + 4)./(6*(X.^2 + 1)) + (3/4)*(1 - M); plot(X,EYx,X,eyx) % Plots quite close For the distributions in Exercises 12-16 below 1. Determine analytically $E[Z|X = t]$ 2. Use a discrete approximation to calculate the same functions. Exercise $12$ $f_{XY} (t, u) = \dfrac{3}{88} (2t + 3u^2)$ for $0 \le t \le 2$, $0 \le u \le 1 + t$, (see Exercise 37 from "Problems on Mathematical Expectation", and Exercise 14.2.6). $f_{X} (t) = \dfrac{3}{88} (1 + t) (1 + 4t + t^2) = \dfrac{3}{88} (1 + 5t + 5t^2 + t^3)$, $0 \le t \le 2$ $Z = I_{[0, 1]} (X) 4X + I_{(1, 2]} (X) (X + Y)$ Answer $Z = I_M (X) 4X + I_N (X) (X + Y)$. Use of linearity, (CE8), and (CE10) gives $E[Z|X = t] = I_M (t) 4t + I_N(t) (t + E[Y|X = t])$ $= I_M (t) 4t + I_N (t) (t + \dfrac{(t + 1)(t + 3) (3t + 1)}{4(1 + 4t + t^2)})$ % Continuation of Exercise 14.2.6 G = 4*t.*(t<=1) + (t + u).*(t>1); EZx = sum(G.*P)./sum(P); M = X<=1; ezx = 4*X.*M + (X + (X+1).*(X+3).*(3*X+1)./(4*(1 + 4*X + X.^2))).*(1-M); plot(X,EZx,X,ezx) % Plots nearly indistinguishable Exercise $13$ $f_{XY} (t, u) = \dfrac{24}{11} tu$ for $0 \le t \le 2$, $0 \le u \text{min } \{1, 2 - t\}$ (see Exercise 38 from "Problems on Mathematical Expectaton", Exercise 14.2.8). 
$f_X (t) = I_{[0, 1]} (t) \dfrac{12}{11} t + I_{(1, 2]} (t) \dfrac{12}{11} t (2 - t)^2$ $Z = I_{M} (X, Y) \dfrac{1}{2} X + I_M (X, Y) Y^2$, $M = \{(t ,u): u > t\}$ Answer $Z = I_{M} (X, Y) \dfrac{1}{2} X + I_M (X, Y) Y^2$, $M = \{(t ,u): u > t\}$ $I_M(t, u) = I_{[0, 1]} (t) I_{[t, 1]} (u)$ $I_{M^c} (t, u) = I_{[0, 1]} (t) I_{[0, t]}(u) + I_{(1, 2]} (t) I_{[0, 2 - t]} (u)$ $E[Z|X = t] = I_{[0, 1]} (t) [\dfrac{t}{2} \int_{t}^{1} 2u\ du + \int_{0}^{t} u^2 \cdot 2u\ du] + I_{(1, 2]} (t) \int_{0}^{2 - t} u^2 \cdot \dfrac{2u}{(2 - t)^2}\ du$ $= I_{[0, 1]} (t) \dfrac{1}{2} t (1 - t^2 + t^3) + I_{(1, 2]} (t) \dfrac{1}{2} (2- t)^2$ % Continuation of Exercise 14.2.8 Q = u>t; G = (1/2)*t.*Q + u.^2.*(1-Q); EZx = sum(G.*P)./sum(P); M = X <= 1; ezx = (1/2)*X.*(1-X.^2+X.^3).*M + (1/2)*(2-X).^2.*(1-M); plot(X,EZx,X,ezx) % Plots nearly indistinguishable Exercise $14$ $f_{XY} (t, u) = \dfrac{3}{23} (t + 2u)$ for $0 \le t \le 2$, $0 \le u \le \text{max } \{2 - t, t\}$ (see Exercise 39 from "Problems on Mathematical Expectaton", and Exercise 14.2.9). $f_X(t) = I_{[0, 1]} (t) \dfrac{6}{23} (2 - t) + I_{(1, 2]} (t) \dfrac{6}{23} t^2$ $Z = I_M (X, Y) (X + Y) + I_{M^c} (X, Y) 2Y$, $M = \{(t, u): \text{max } (t, u) \le 1\}$ Answer $Z = I_M (X, Y) (X + Y) + I_{M^c} (X, Y) 2Y$, $M = \{(t, u): \text{max } (t, u) \le 1\}$ $I_M (t, u) = I_{[0, 1]} (t) I_{[0, 1]} (u)$ $I_{M^c} (t, u) = I_{[0, 1]} (t) I_{[1, 2 -t]} (u) + I_{(1,2]} (t) I_{[0, 1]} (u)$ $E[Z|X = t] = I_{[0, 1]} (t) \dfrac{1}{2(2 - t)} \int_{0}^{1} (t + u) (t + 2u)\ du + \dfrac{1}{2 - t} \int_{1}^{2 - t} u (t + 2u)\ du] + I_{(1, 2]} (t) 2E [Y|X = t]$ $= I_{[0, 1]} (t) \dfrac{1}{12} \cdot \dfrac{2t^3 - 30t^2 + 69t - 60}{t - 2} + I_{(1, 2]} (t) \dfrac{7}{6} 2t$ % Continuation of Exercise 14.2.9 M = X <= 1; Q = (t<=1)&(u<=1); G = (t+u).*Q + 2*u.*(1-Q); EZx = sum(G.*P)./sum(P); ezx = (1/12)*M.*(2*X.^3 - 30*X.^2 + 69*X -60)./(X-2) + (7/6)*X.*(1-M); plot(X,EZx,X,ezx) Exercise $15$ $f_{XY} (t, u) = \dfrac{2}{13} (t + 2u)$, for $0 \le t \le 2$, $0 \le u \le \text{min } \{2t, 3 - t\}$. (see Exercise 31 from "Problems on Mathematical Expectaton", and Exercise 14.2.10). $f_X (t) = I_{[0, 1]} (t) \dfrac{12}{13} t^2 + I_{(1, 2]} (t) \dfrac{6}{13} (3 - t)$ $Z = I_M (X, Y) (X + Y) + I_{M^c} (X, Y) 2Y^2$, $M = \{(t, u): t \le 1, u \ge 1\}$ Answer $Z = I_M (X, Y) (X + Y) + I_{M^c} (X, Y) 2Y^2$, $M = \{(t, u): t \le 1, u \ge 1\}$ $I_M(t, u) = I_{[0, 1]} (t0 I_{[1, 2]} (u)$ $I_{M^c} (t, u) = I_{[0, 1]} (t) I_{[0, 1)} (u) + I_{(1, 2]} (t) I_{[0, 3 - t]} (u)$ $E[Z|X = t] = I_{[0, 1/2]} (t) \dfrac{1}{6t^2} \int_{0}^{2t} 2u^2 (t + 2u) \ du +$ $I_{(1/2, 1]} (t) [\dfrac{1}{6t^2} \int_{0}^{1} 2u^2 (t + 2u)\ du + \dfrac{1}{6t^2} \int_{1}^{2t} (t + u) (t + 2u)\ du] + I_{(1, 2]} (t) \dfrac{1}{3 (3 - t)} \int_{0}^{3 - t} 2u^2 (t + 2u)\ du$ $= I_{[0, 1/2]} (t) \dfrac{32}{9} t^2 + I_{(1/2, 1]} (t) \dfrac{1}{36} \cdot \dfrac{80t^3 - 6t^2 - 5t + 2}{t^2} + I_{(1, 2]} (t) \dfrac{1}{9} (- t^3 + 15t^2 - 63t + 81)$ tuappr: [0 2] [0 2] 200 200 (2/13)*(t + 2*u).*(u<=min(2*t,3-t)) M = (t<=1)&(u>=1); Q = (t+u).*M + 2*(1-M).*u.^2; EZx = sum(Q.*P)./sum(P); N1 = X <= 1/2; N2 = (X > 1/2)&(X<=1); N3 = X > 1; ezx = (32/9)*N1.*X.^2 + (1/36)*N2.*(80*X.^3 - 6*X.^2 - 5*X + 2)./X.^2 ... + (1/9)*N3.*(-X.^3 + 15*X.^2 - 63.*X + 81); plot(X,EZx,X,ezx) Exercise $16$ $f_{XY} (t, u) = I_{[0, 1]} (t) \dfrac{3}{8} (t^2 + 2u) + I_{(1, 2]} (t) \dfrac{9}{14} t^2 u^2$, for $0 \le u \le 1$. (see Exercise 32 from "Problems on Mathematical Expectaton", and Exercise 14.2.11). 
$f_X (t) = I_{[0, 1]} (t) \dfrac{3}{8} (t^2 + 1) + I_{(1, 2]} (t) \dfrac{3}{14} t^2$ $Z = I_M (X, Y) X + I_{M^c} (X, Y) XY$, $M = \{(t, u): u \le \text{min } (1 , 2 - t)\}$ Answer $Z = I_M (X, Y) X + I_{M^c} (X, Y) XY$, $M = \{(t, u): u \le \text{min } (1 , 2 - t)\}$ $E[|X = t] = I_{[0, 1]} (t) \int_{0}^{1} \dfrac{t^3+ 2tu}{t^2 + 1} \ du + I_{(1, 2]} (t) [\int_{0}^{2 - t} 3tu^2\ du + \int_{2 - t}^{1} 3tu^3\ du]$ $= I_{[0, 1]} (t) t + I_{(1, 2]} (t) (-\dfrac{13}{4} t+ 12t^2 - 12t^3 + 5t^4 - \dfrac{3}{4} t^5)$ tuappr: [0 2] [0 1] 200 100 (t<=1).*(t.^2 + 2*u)./(t.^2 + 1) +3*u.^2.*(t>1) M = u<=min(1,2-t); G = M.*t + (1-M).*t.*u; EZx = sum(G.*P)./sum(P); N = X<=1; ezx = X.*N + (1-N).*(-(13/4)*X + 12*X.^2 - 12*X.^3 + 5*X.^4 - (3/4)*X.^5); plot(X,EZx,X,ezx) Exercise $17$ Suppose $X$ ~ uniform on 0 through $n$ and $Y$ ~ conditionally uniform on 0 through $i$, given $X = i$. a. Determine $E[Y]$ from $E[Y|X = i]$. b. Determine the joint distribution for $\{X, Y\}$ for $n = 50$ (see Example 7 from "Conditional Expectation, Regression" for a possible approach). Use jcalc to determine $E[Y]$; compare with the theoretical value. Answer a. $E[Y|X = i] = i/2$, so $E[Y] = \sum_{i = 0}^{n} E[Y|X = i] P(X = i) = \dfrac{1}{n + 1} \sum_{i = 1}^{n} i/2 = n/4$ b. $P(X = i) = 1/(n + 1)$, $0 \le i \le n$, $P(Y = k|X = i) = 1/(i + 1)$. $0 \le k \le i$; hence $P(X = i, Y = k) = 1/(n + 1)(i + 1)$, $0 \le i \le n$, $0 \le k \le i$. n = 50; X = 0:n; Y = 0:n; P0 = zeros(n+1,n+1); for i = 0:n P0(i+1,1:i+1) = (1/((n+1)*(i+1)))*ones(1,i+1); end P = rot90(P0); jcalc: X Y P - - - - - - - - - - - EY = dot(Y,PY) EY = 12.5000 % Comparison with part (a): 50/4 = 12.5 Exercise $18$ Suppose $X$ ~ uniform on 1 through $n$ and $Y$ ~ conditionally uniform on 1 through $i$, given $X = i$. a. Determine $E[Y]$ from $E[Y|X = i]$. b. Determine the joint distribution for $\{X, Y\}$ for $n = 50$ (see Example 7 from "Conditional Expectation, Regression" for a possible approach). Use jcalc to determine $E[Y]$; compare with the theoretical value. Answer a. $E[Y|X = i] = (i+1)/2$, so $E[Y] = \sum_{i = 0}^{n} E[Y|X = i] P(X = i) = \dfrac{1}{n + 1} \sum_{i = 1}^{n} \dfrac{i + 1}{2} = \dfrac{n +3}{4}$ b. $P(X = i) = 1/n$, $1 \le i \le n$, $P(Y = k|X = i) = 1/i$. $1 \le k \le i$; hence $P(X = i, Y = k) = 1/ni$, $1 \le i \le n$, $1 \le k \le i$. n = 50; X = 1:n; Y = 1:n; P0 = zeros(n,n); for i = 1:n P0(i,1:i) = (1/(n*i))*ones(1,i); end P = rot90(P0); jcalc: P X Y - - - - - - - - - - - - EY = dot(Y,PY) EY = 13.2500 % Comparison with part (a): 53/4 = 13.25 Exercise $19$ Suppose $X$ ~ uniform on 1 through $n$ and $Y$ ~ conditionally binomial $(i, p)$, given $X = i$. a. Determine $E[Y]$ from $E[Y|X = k]$. b. Determine the joint distribution for $\{X, Y\}$ for $n = 50$ and $p = 0.3$. Use jcalc to determine $E[Y]$; compare with the theoretical value. Answer a. $E[Y|X = i] = ip$, so $E[Y] = \sum_{i = 1}^{n} E[Y|X = i] P(X = i) = \dfrac{p}{n} \sum_{i = 1}^{n} i = \dfrac{p(n + 1)}{2}$ b. $P(X = i) = 1/n$, $1 \le i \le n$, $P(Y = k|X = i)$ = ibinom$(i, p, 0:i)$, $0 \le k \le i$. n = 50; p = 0.3; X = 1:n; Y = 0:n; P0 = zeros(n,n+1); % Could use randbern for i = 1:n P0(i,1:i+1) = (1/n)*ibinom(i,p,0:i); end P = rot90(P0); jcalc: X Y P - - - - - - - - - - - EY = dot(Y,PY) EY = 7.6500 % Comparison with part (a): 0.3*51/2 = 0.765 Exercise $20$ A number $X$ is selected randomly from the integers 1 through 100. A pair of dice is thrown $X$ times. Let $Y$ be the number of sevens thrown on the $X$ tosses. 
Determine the joint distribution for $\{X, Y\}$ and then determine $E[Y]$.

Answer

a. $P(X = i) = 1/n$, $E[Y|X = i] = i/6$, so $E[Y] = \dfrac{1}{6} \sum_{i = 1}^{n} i/n = \dfrac{n + 1}{12}$

b.

n = 100;
p = 1/6;
X = 1:n;
Y = 0:n;
PX = (1/n)*ones(1,n);
P0 = zeros(n,n+1);     % Could use randbern
for i = 1:n
   P0(i,1:i+1) = (1/n)*ibinom(i,p,0:i);
end
P = rot90(P0);
jcalc
EY = dot(Y,PY)
EY = 8.4167       % Comparison with part (a): 101/12 = 8.4167

Exercise $21$

A number $X$ is selected randomly from the integers 1 through 100. Each of two people draw $X$ times, independently and randomly, a number from 1 to 10. Let $Y$ be the number of matches (i.e., both draw ones, both draw twos, etc.). Determine the joint distribution and then determine $E[Y]$.

Answer

Same as Exercise 14.2.20, except $p = 1/10$. $E[Y] = (n + 1)/20$

n = 100;
p = 0.1;
X = 1:n;
Y = 0:n;
PX = (1/n)*ones(1,n);
P0 = zeros(n,n+1);     % Could use randbern
for i = 1:n
   P0(i,1:i+1) = (1/n)*ibinom(i,p,0:i);
end
P = rot90(P0);
jcalc
- - - - - - - - - -
EY = dot(Y,PY)
EY = 5.0500       % Comparison with part (a): EY = 101/20 = 5.05

Exercise $22$

$E[Y|X = t] = 10t$ and $X$ has density function $f_X (t) = 4 - 2t$ for $1 \le t \le 2$. Determine $E[Y]$.

Answer

$E[Y] = \int E[Y|X = t] f_X (t)\ dt = \int_{1}^{2} 10t(4 - 2t) \ dt = 40/3$

Exercise $23$

$E[Y|X = t] = \dfrac{2}{3} (1 - t)$ for $0 \le t < 1$ and $X$ has density function $f_X (t) = 30 t^2 ( 1 - t)^2$ for $0 \le t \le 1$. Determine $E[Y]$.

Answer

$E[Y] = \int E[Y|X =t] f_X (t)\ dt = \int_{0}^{1} 20t^2 (1 - t)^3\ dt = 1/3$

Exercise $24$

$E[Y|X = t] = \dfrac{2}{3} (2 - t)$ and $X$ has density function $f_X(t) = \dfrac{15}{16} t^2 (2 - t)^2$ for $0 \le t < 2$. Determine $E[Y]$.

Answer

$E[Y] = \int E[Y|X =t] f_X(t)\ dt = \dfrac{5}{8} \int_{0}^{2} t^2 (2 - t)^3\ dt = 2/3$

Exercise $25$

Suppose the pair $\{X, Y\}$ is independent, with $X$ ~ Poisson ($\mu$) and $Y$ ~ Poisson $(\lambda)$. Show that $X$ is conditionally binomial $(n, \mu/(\mu + \lambda))$, given $X + Y = n$. That is, show that

$P(X = k|X + Y = n) = C(n, k) p^k (1 - p)^{n - k}$, $0 \le k \le n$, for $p = \mu/(\mu + \lambda)$

Answer

$X$ ~ Poisson ($\mu$), $Y$ ~ Poisson $(\lambda)$. Use of property (T1) and generating functions shows that $X + Y$ ~ Poisson $(\mu + \lambda)$

$P(X = k|X + Y = n) = \dfrac{P(X = k, X + Y = n)}{P(X+Y = n)} = \dfrac{P(X = k, Y = n - k)}{P(X + Y = n)}$

$= \dfrac{e^{-\mu} \dfrac{\mu^k}{k!} e^{-\lambda} \dfrac{\lambda^{n -k}}{(n - k)!}}{e^{-(\mu + \lambda)} \dfrac{(\mu + \lambda)^n}{n!}} = \dfrac{n!}{k! (n - k)!} \dfrac{\mu^k \lambda^{n - k}}{(\mu + \lambda)^n}$

Put $p = \mu/(\mu + \lambda)$ and $q = 1 - p = \lambda/(\mu + \lambda)$ to get the desired result.

Exercise $26$

Use the fact that $g(X, Y) = g^* (X, Z, Y)$, where $g^* (t, v, u)$ does not vary with $v$. Extend property (CE10) to show

$E[g(X, Y)|X = t, Z = v] = E[g(t, Y)|X = t, Z = v]$ a.s. $[P_{XZ}]$

Answer

$E[g(X,Y)|X = t, Z = v] = E[g^* (X, Z, Y)| (X, Z) = (t, v)] = E[g^* (t, v, Y)|(X, Z) = (t, v)]$

$= E[g(t, Y)|X = t, Z = v]$ a.s. $[P_{XZ}]$ by (CE10)

Exercise $27$

Use the result of Exercise 14.2.26 and properties (CE9a) and (CE10) to show that

$E[g(X, Y)|Z = v] = \int E[g(t, Y)|X = t, Z =v] F_{X|Z} (dt|v)$ a.s. $[P_Z]$

Answer

By (CE9), $E[g(X, Y)|Z] = E\{E[g(X, Y)|X, Z]|Z\} = E[e(X, Z)|Z]$ a.s.

By (CE10), $E[e(X, Z)|Z = v] = E[e(X, v)|Z = v] =$ $\int e(t, v) F_{X|Z} (dt|v)$ a.s.

By Exercise 14.2.26, $\int E[g(X, Y)|X = t, Z = v] F_{X|Z} (dt|v) =$ $\int E[g(t, Y)|X = t, Z = v] F_{X|Z} (dt|v)$ a.s.
$[P_Z]$

Exercise $28$

A shop which works past closing time to complete jobs on hand tends to speed up service on any job received during the last hour before closing. Suppose the arrival time of a job in hours before closing time is a random variable $T$ ~ uniform [0, 1]. Service time $Y$ for a unit received in that period is conditionally exponential $\beta (2 - u)$, given $T = u$. Determine the distribution function for $Y$.

Answer

$F_Y (v) = \int F_{Y|T} (v|u) f_T (u)\ du = \int_{0}^{1} (1 - e^{-\beta (2 - u)v})\ du =$

$1 - e^{-2\beta v} \dfrac{e^{\beta v} - 1}{\beta v} = 1 - e^{-\beta v} [\dfrac{1 - e^{-\beta v}}{\beta v}]$, $0 < v$

Exercise $29$

Time to failure $X$ of a manufactured unit has an exponential distribution. The parameter is dependent upon the manufacturing process. Suppose the parameter is the value of random variable $H$ ~ uniform on [0.005, 0.01], and $X$ is conditionally exponential $(u)$, given $H = u$. Determine $P(X > 150)$. Determine $E[X|H = u]$ and use this to determine $E[X]$.

Answer

$F_{X|H} (t|u) = 1 - e^{-ut}$  $f_{H} (u) = \dfrac{1}{0.005} = 200$, $0.005 \le u \le 0.01$

$F_X (t) = 1 - 200 \int_{0.005}^{0.01} e^{-ut}\ du = 1 - \dfrac{200}{t} [e^{-0.005t} - e^{-0.01t}]$

$P(X > 150) = \dfrac{200}{150}[e^{-0.75} - e^{-1.5}] \approx 0.3323$

$E[X|H = u] = 1/u$  $E[X] = 200 \int_{0.005}^{0.01} \dfrac{du}{u} = 200 \text{ln } 2$

Exercise $30$

A system has $n$ components. Time to failure of the $i$th component is $X_i$ and the class $\{X_i: 1 \le i \le n\}$ is iid exponential ($\lambda$). The system fails if any one or more of the components fails. Let $W$ be the time to system failure. What is the probability the failure is due to the $i$th component?

Suggestion. Note that $W = X_i$ iff $X_j > X_i$, for all $j \ne i$. Thus

$\{W = X_i\} = \{(X_1, X_2, \cdot\cdot\cdot, X_n) \in Q\}$, $Q = \{(t_1, t_2, \cdot\cdot\cdot t_n): t_k > t_i, \forall k \ne i\}$

$P(W = X_i) = E[I_Q (X_1, X_2, \cdot\cdot\cdot, X_n)] = E\{E[I_Q (X_1, X_2, \cdot\cdot\cdot, X_n)|X_i]\}$

Answer

Let $Q = \{(t_1, t_2, \cdot\cdot\cdot, t_n): t_k > t_i, k \ne i\}$. Then

$P(W = X_i) = E[I_Q (X_1, X_2, \cdot\cdot\cdot, X_n)] = E\{E[I_Q (X_1, X_2, \cdot\cdot\cdot, X_n)|X_i]\}$

$= \int E[I_Q(X_1, X_2, \cdot\cdot\cdot, t, \cdot\cdot\cdot X_n)] F_X (dt)$

where $t$ occupies the $i$th position, and

$E[I_Q (X_1, X_2, \cdot\cdot\cdot, t, \cdot\cdot\cdot, X_n)] = \prod_{k \ne i} P(X_k > t) = [1 - F_X (t)]^{n - 1}$

If $F_X$ is continuous, strictly increasing, zero for $t < 0$, put $u = F_X (t)$, $du = f_X (t)\ dt$, $t = 0$ ~ $u = 0, t = \infty$ ~ $u = 1$. Then

$P(W = X_i) = \int_{0}^{1} (1 - u)^{n - 1}\ du = \int_{0}^{1} u^{n - 1}\ du = 1/n$
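A quick numerical check of this result (an added illustration, not part of the original exercise): simulate, say, $n = 5$ iid exponential (2) lifetimes and estimate the relative frequency of the event $W = X_1$, which should be near $1/n = 0.2$. The values of $n$, $\lambda$, and the sample size are arbitrary choices for the illustration.

% Simulation check of P(W = X_i) = 1/n  (illustrative parameters only)
n = 5; lambda = 2; m = 100000;
X = -log(rand(m,n))/lambda;      % m rows of n iid exponential(lambda) lifetimes
[mn,imin] = min(X,[],2);         % column index of the component that fails first
phat = sum(imin == 1)/m          % relative frequency of W = X_1; should be near 0.2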
Introduction

The usual treatments deal with a single random variable or a fixed, finite number of random variables, considered jointly. However, there are many common applications in which we select at random a member of a class of random variables and observe its value, or select a random number of random variables and obtain some function of those selected. This is formulated with the aid of a counting or selecting random variable $N$, which is nonnegative, integer valued. It may be independent of the class selected, or may be related in some sequential way to members of the class. We consider only the independent case. Many important problems require optional random variables, sometimes called Markov times. These involve more theory than we develop in this treatment.

Some common examples:

Total demand of $N$ customers— $N$ independent of the individual demands.
Total service time for $N$ units— $N$ independent of the individual service times.
Net gain in $N$ plays of a game— $N$ independent of the individual gains.
Extreme values of $N$ random variables— $N$ independent of the individual values.
Random sample of size $N$— $N$ is usually determined by properties of the sample observed.
Decide when to play on the basis of past results— $N$ dependent on past

A useful model—random sums

As a basic model, we consider the sum of a random number of members of an iid class. In order to have a concrete interpretation to help visualize the formal patterns, we think of the demand of a random number of customers. We suppose the number of customers $N$ is independent of the individual demands. We formulate a model to be used for a variety of applications.

A basic sequence $\{X_n: 0 \le n\}$ [Demand of $n$ customers]
An incremental sequence $\{Y_n:0 \le n\}$ [Individual demands]

These are related as follows:

$X_n = \sum_{k = 0}^{n} Y_k$ for $n \ge 0$ and $X_n = 0$ for $n < 0$

$Y_n = X_n - X_{n - 1}$ for all $n$

A counting random variable $N$. If $N = n$ then $n$ of the $Y_k$ are added to give the compound demand $D$ (the random sum)

$D = \sum_{k = 0}^{N} Y_k = \sum_{k = 0}^{\infty} I_{[N = k]} X_k = \sum_{k = 0}^{\infty} I_{\{k\}} (N) X_k$

Note. In some applications the counting random variable may take on the idealized value $\infty$. For example, in a game that is played until some specified result occurs, this may never happen, so that no finite value can be assigned to $N$. In such a case, it is necessary to decide what value $X_{\infty}$ is to be assigned. For $N$ independent of the $Y_n$ (hence of the $X_n$), we rarely need to consider this possibility.

Independent selection from an iid incremental sequence

We assume throughout, unless specifically stated otherwise, that:

$X_0 = Y_0 = 0$
$\{Y_k: 1 \le k\}$ is iid
$\{N, Y_k: 0 \le k\}$ is an independent class

We utilize repeatedly two important propositions:

$E[h(D)|N = n] = E[h(X_n)]$, $n \ge 0$
$M_D (s) = g_N [M_Y (s)]$. If the $Y_n$ are nonnegative integer valued, then so is $D$ and $g_D (s) = g_N[g_Y (s)]$

DERIVATION

We utilize properties of generating functions, moment generating functions, and conditional expectation.

$E[I_{\{n\}} (N) h(D)] = E[h(D)|N = n] P(N = n)$ by definition of conditional expectation, given an event. Now, $I_{\{n\}} (N) h(D) = I_{\{n\}} (N) h(X_n)$ and $E[I_{\{n\}} (N) h(X_n)] = P(N = n) E[h(X_n)]$. Hence $E[h(D) |N = n] P(N = n) = P(N = n) E[h(X_n)]$. Division by $P(N = n)$ gives the desired result.

By the law of total probability (CE1b), $M_D(s)= E[e^{sD}] = E\{E[e^{sD} |N]\}$.
By proposition 1 and the product rule for moment generating functions,

$E[e^{sD}|N = n] = E[e^{sX_n}] = \prod_{k = 1}^{n} E[e^{sY_k}] = M_Y^n (s)$

Hence

$M_D(s) = \sum_{n = 0}^{\infty} M_Y^n (s) P(N = n) = g_N[M_Y (s)]$

A parallel argument holds for $g_D$ — □

Remark. The result on $M_D$ (and, in the integer-valued case, $g_D$) may be developed without use of conditional expectation.

$M_D(s) = E[e^{sD}] = \sum_{n = 0}^{\infty} E[I_{\{N = n\}} e^{sX_n}] = \sum_{n = 0}^{\infty} P(N = n) E[e^{sX_n}]$

$= \sum_{n = 0}^{\infty} P(N = n) M_Y^n (s) = g_N [M_Y (s)]$ — □

Example $1$ A service shop

Suppose the number $N$ of jobs brought to a service shop in a day is Poisson (8). One fourth of these are items under warranty for which no charge is made. Others fall in one of two categories. One half of the arriving jobs are charged for one hour of shop time; the remaining one fourth are charged for two hours of shop time. Thus, the individual shop hour charges $Y_k$ have the common distribution

$Y =$ [0 1 2] with probabilities $PY =$ [1/4 1/2 1/4]

Make the basic assumptions of our model. Determine $P(D \le 4)$.

Solution

$g_N(s) = e^{8(s - 1)}$, $g_Y (s) = \dfrac{1}{4} (1 + 2s + s^2)$

According to the formula developed above,

$g_D (s) = g_N [g_Y (s)] = \text{exp} ((8/4) (1 + 2s + s^2) - 8) = e^{4s} e^{2s^2} e^{-6}$

Expand the exponentials in power series about the origin, multiply out to get enough terms. The result of straightforward but somewhat tedious calculations is

$g_D (s) = e^{-6} ( 1 + 4s + 10s^2 + \dfrac{56}{3} s^3 + \dfrac{86}{3} s^4 + \cdot\cdot\cdot)$

Taking the coefficients of the generating function, we get

$P(D \le 4) \approx e^{-6} (1 + 4 + 10 + \dfrac{56}{3} + \dfrac{86}{3}) = e^{-6} \dfrac{187}{3} \approx 0.1545$

Example $2$ A result on Bernoulli trials

Suppose the counting random variable $N$ ~ binomial $(n, p)$ and $Y_i = I_{E_i}$, with $P(E_i) = p_0$. Then

$g_N (s) = (q + ps)^n$ and $g_Y (s) = q_0 + p_0 s$

By the basic result on random selection, we have

$g_D (s) = g_N [g_Y(s)] = [q + p(q_0 + p_0 s)]^n = [(1 - pp_0) + pp_0 s]^n$

so that $D$ ~ binomial $(n, pp_0)$.

In the next section we establish useful m-procedures for determining the generating function $g_D$ and the moment generating function $M_D$ for the compound demand for simple random variables, hence for determining the complete distribution. Obviously, these will not work for all problems. It may be helpful, if not entirely sufficient, in such cases to be able to determine the mean value $E[D]$ and variance $\text{Var} [D]$. To this end, we establish the following expressions for the mean and variance.

Example $3$ Mean and variance of the compound demand

$E[D] = E[N]E[Y]$ and $\text{Var} [D] = E[N] \text{Var} [Y] + \text{Var} [N] E^2 [Y]$

DERIVATION

$E[D] = E[\sum_{n = 0}^{\infty} I_{\{N = n\}} X_n] = \sum_{n = 0}^{\infty} P(N = n) E[X_n]$

$= E[Y] \sum_{n = 0}^{\infty} n P(N = n) = E[Y] E[N]$

$E[D^2] = \sum_{n = 0}^{\infty} P(N = n) E[X_n^2] = \sum_{n = 0}^{\infty} P(N = n) \{\text{Var} [X_n] + E^2 [X_n]\}$

$= \sum_{n = 0}^{\infty} P(N = n) \{n \text{Var} [Y] + n^2 E^2 [Y]\} = E[N] \text{Var} [Y] + E[N^2] E^2[Y]$

Hence

$\text{Var} [D] = E[N] \text{Var} [Y] + E[N^2] E^2 [Y] - E[N]^2 E^2[Y] = E[N] \text{Var} [Y] + \text{Var} [N] E^2[Y]$

Example $4$ Mean and variance for Example 15.1.1

$E[N] = \text{Var} [N] = 8$. By symmetry $E[Y] = 1$. $\text{Var} [Y] = 0.25(0 + 2 + 4) - 1 = 0.5$.
Hence, $E[D] = 8 \cdot 1 = 8$, $\text{Var} [D] = 8 \cdot 0.5 + 8 \cdot 1 = 12$ Calculations for the compound demand We have m-procedures for performing the calculations necessary to determine the distribution for a composite demand $D$ when the counting random variable $N$ and the individual demands $Y_k$ are simple random variables with not too many values. In some cases, such as for a Poisson counting random variable, we are able to approximate by a simple random variable. The procedure gend If the $Y_i$ are nonnegative, integer valued, then so is $D$, and there is a generating function. We examine a strategy for computation which is implemented in the m-procedure gend. Suppose $g_N (s) = p_0 + p_1 s + p_2 s^2 + \cdot\cdot\cdot p_n s^n$ $g_Y (s) = \pi_0 + \pi_1 s + \pi_2 s^2 + \cdot\cdot\cdot \pi_m s^m$ The coefficients of $g_N$ and $g_Y$ are the probabilities of the values of $N$ and $Y$, respectively. We enter these and calculate the coefficients for powers of $g_Y$: $\begin{array} {lcr} {gN = [p_0\ p_1\ \cdot\cdot\cdot\ p_n]} & {1 \times (n + 1)} & {\text{Coefficients of } g_N} \ {y = [\pi_0\ \pi_1\ \cdot\cdot\cdot\ \pi_n]} & {1 \times (m + 1)} & {\text{Coefficients of } g_Y} \ {\ \ \ \ \ \cdot\cdot\cdot} & { } & { } \ {y2 = \text{conv}(y,y)} & {1 \times (2m + 1)} & {\text{Coefficients of } g_Y^2} \ {y3 = \text{conv}(y,y2)} & {1 \times (3m + 1)} & {\text{Coefficients of } g_Y^3} \ {\ \ \ \ \ \cdot\cdot\cdot} & { } & { } \ {yn = \text{conv}(y,y(n - 1))} & {1 \times (nm + 1)} & {\text{Coefficients of } g_Y^n}\end{array}$ We wish to generate a matrix $P$ whose rows contain the joint probabilities. The probabilities in the $i$th row consist of the coefficients for the appropriate power of $g_Y$ multiplied by the probability $N$ has that value. To achieve this, we need a matrix, each of whose $n + 1$ rows has $nm + 1$ elements, the length of $yn$. We begin by “preallocating” zeros to the rows. That is, we set $P = \text{zeros}(n + 1, n\ ^*\ m + 1)$. We then replace the appropriate elements of the successive rows. The replacement probabilities for the $i$th row are obtained by the convolution of $g_Y$ and the power of $g_Y$ for the previous row. When the matrix $P$ is completed, we remove zero rows and columns, corresponding to missing values of $N$ and $D$ (i.e., values with zero probability). To orient the joint probabilities as on the plane, we rotate $P$ ninety degrees counterclockwise. With the joint distribution, we may then calculate any desired quantities. Example $5$ A compound demand The number of customers in a major appliance store is equally likely to be 1, 2, or 3. Each customer buys 0, 1, or 2 items with respective probabilities 0.5, 0.4, 0.1. Customers buy independently, regardless of the number of customers. First we determine the matrices representing $g_N$ and $g_Y$. The coefficients are the probabilities that each integer value is observed. Note that the zero coefficients for any missing powers must be included. 
gN = (1/3)*[0 1 1 1]; % Note zero coefficient for missing zero power gY = 0.1*[5 4 1]; % All powers 0 thru 2 have positive coefficients gend Do not forget zero coefficients for missing powers Enter the gen fn COEFFICIENTS for gN gN % Coefficient matrix named gN Enter the gen fn COEFFICIENTS for gY gY % Coefficient matrix named gY Results are in N, PN, Y, PY, D, PD, P May use jcalc or jcalcf on N, D, P To view distribution for D, call for gD disp(gD) % Optional display of complete distribution 0 0.2917 1.0000 0.3667 2.0000 0.2250 3.0000 0.0880 4.0000 0.0243 5.0000 0.0040 6.0000 0.0003 EN = N*PN' EN = 2 EY = Y*PY' EY = 0.6000 ED = D*PD' ED = 1.2000 % Agrees with theoretical EN*EY P3 = (D>=3)*PD' P3 = 0.1167 [N,D,t,u,PN,PD,PL] = jcalcf(N,D,P); EDn = sum(u.*P)./sum(P); disp([N;EDn]') 1.0000 0.6000 % Agrees with theoretical E[D|N=n] = n*EY 2.0000 1.2000 3.0000 1.8000 VD = (D.^2)*PD' - ED^2 VD = 1.1200 % Agrees with theoretical EN*VY + VN*EY^2 Example $6$ A numerical example $g_N (s) = \dfrac{1}{5} (1 + s + s^2 + s^3 + s^4)$ $g_Y (s) = 0.1 (5s + 3s^2 + 2s^3$ Note that the zero power is missing from $gY$. corresponding to the fact that $P(Y = 0) = 0$. gN = 0.2*[1 1 1 1 1]; gY = 0.1*[0 5 3 2]; % Note the zero coefficient in the zero position gend Do not forget zero coefficients for missing powers Enter the gen fn COEFFICIENTS for gN gN Enter the gen fn COEFFICIENTS for gY gY Results are in N, PN, Y, PY, D, PD, P May use jcalc or jcalcf on N, D, P To view distribution for D, call for gD disp(gD) % Optional display of complete distribution 0 0.2000 1.0000 0.1000 2.0000 0.1100 3.0000 0.1250 4.0000 0.1155 5.0000 0.1110 6.0000 0.0964 7.0000 0.0696 8.0000 0.0424 9.0000 0.0203 10.0000 0.0075 11.0000 0.0019 12.0000 0.0003 p3 = (D == 3)*PD' % P(D=3) P3 = 0.1250 P4_12 = ((D >= 4)&(D <= 12))*PD' P4_12 = 0.4650 % P(4 <= D <= 12) Example $7$ Number of successes for random number $N$ of trials. We are interested in the number of successes in $N$ trials for a general counting random variable. This is a generalization of the Bernoulli case in Example 15.1.2. Suppose, as in Example 15.1.2, the number of customers in a major appliance store is equally likely to be 1, 2, or 3, and each buys at least one item with probability $p = 0.6$. Determine the distribution for the number $D$ of buying customers. Solution We use $gN$, $gY$, and gend. gN = (1/3)*[0 1 1 1]; % Note zero coefficient for missing zero power gY = [0.4 0.6]; % Generating function for the indicator function gend Do not forget zero coefficients for missing powers Enter gen fn COEFFICIENTS for gN gN Enter gen fn COEFFICIENTS for gY gY Results are in N, PN, Y, PY, D, PD, P May use jcalc or jcalcf on N, D, P To view distribution for D, call for gD disp(gD) 0 0.2080 1.0000 0.4560 2.0000 0.2640 3.0000 0.0720 The procedure gend is limited to simple $N$ and $Y_k$, with nonnegative integer values. Sometimes, a random variable with unbounded range may be approximated by a simple random variable. The solution in the following example utilizes such an approximation procedure for the counting random variable $N$. Example $8$ Solution of the shop time Example 15.1.1 The number $N$ of jobs brought to a service shop in a day is Poisson (8). The individual shop hour charges $Y_k$ have the common distribution $Y =$ [0 1 2] with probabilities $PY =$ [1/4 1/2 1/4]. Under the basic assumptions of our model, determine $P(D \le 4)$. Solution Since Poisson $N$ is unbounded, we need to check for a sufficient number of terms in a simple approximation. 
Then we proceed as in the simple case. pa = cpoisson(8,10:5:30) % Check for sufficient number of terms pa = 0.2834 0.0173 0.0003 0.0000 0.0000 p25 = cpoisson(8,25) % Check on choice of n = 25 p25 = 1.1722e-06 gN = ipoisson(8,0:25); % Approximate gN gY = 0.25*[1 2 1]; gend Do not forget zero coefficients for missing powers Enter gen fn COEFFICIENTS for gN gN Enter gen fn COEFFICIENTS for gY gY Results are in N, PN, Y, PY, D, PD, P May use jcalc or jcalcf on N, D, P To view distribution for D, call for gD disp(gD(D<=20,:)) % Calculated values to D = 50 0 0.0025 % Display for D <= 20 1.0000 0.0099 2.0000 0.0248 3.0000 0.0463 4.0000 0.0711 5.0000 0.0939 6.0000 0.1099 7.0000 0.1165 8.0000 0.1132 9.0000 0.1021 10.0000 0.0861 11.0000 0.0684 12.0000 0.0515 13.0000 0.0369 14.0000 0.0253 15.0000 0.0166 16.0000 0.0105 17.0000 0.0064 18.0000 0.0037 19.0000 0.0021 20.0000 0.0012 sum(PD) % Check on sufficiency of approximation ans = 1.0000 P4 = (D<=4)*PD' P4 = 0.1545 % Theoretical value (4 places) = 0.1545 ED = D*PD' ED = 8.0000 % Theoretical = 8 (Example 15.1.4) VD = (D.^2)*PD' - ED^2 VD = 11.9999 % Theoretical = 12 (Example 15.1.4) The m-procedures mgd and jmgd The next example shows a fundamental limitation of the gend procedure. The values for the individual demands are not limited to integers, and there are considerable gaps between the values. In this case, we need to implement the moment generating function $M_D$ rather than the generating function $g_D$. In the generating function case, it is as easy to develop the joint distribution for $\{N, D\}$ as to develop the marginal distribution for $D$. For the moment generating function, the joint distribution requires considerably more computation. As a consequence, we find it convenient to have two m-procedures: mgd for the marginal distribution and jmgd for the joint distribution. Instead of the convolution procedure used in gend to determine the distribution for the sums of the individual demands, the m-procedure mgd utilizes the m-function mgsum to obtain these distributions. The distributions for the various sums are concatenated into two row vectors, to which csort is applied to obtain the distribution for the compound demand. The procedure requires as input the generating function for $N$ and the actual distribution, $Y$ and $PY$, for the individual demands. For $gN$, it is necessary to treat the coefficients as in gend. However, the actual values and probabilities in the distribution for Y are put into a pair of row matrices. If $Y$ is integer valued, there are no zeros in the probability matrix for missing values. Example $9$ Noninteger values A service shop has three standard charges for a certain class of warranty services it performs: $10,$12.50, and \$15. The number of jobs received in a normal work day can be considered a random variable $N$ which takes on values 0, 1, 2, 3, 4 with equal probabilities 0.2. The job types for arrivals may be represented by an iid class $\{Y_i: 1 \le i \le 4\}$, independent of the arrival process. The $Y_i$ take on values 10, 12.5, 15 with respective probabilities 0.5, 0.3, 0.2. Let $C$ be the total amount of services rendered in a day. Determine the distribution for $C$. Solution gN = 0.2*[1 1 1 1 1]; % Enter data Y = [10 12.5 15]; PY = 0.1*[5 3 2]; mgd % Call for procedure Enter gen fn COEFFICIENTS for gN gN Enter VALUES for Y Y Enter PROBABILITIES for Y PY Values are in row matrix D; probabilities are in PD. To view the distribution, call for mD. 
disp(mD) % Optional display of distribution 0 0.2000 10.0000 0.1000 12.5000 0.0600 15.0000 0.0400 20.0000 0.0500 22.5000 0.0600 25.0000 0.0580 27.5000 0.0240 30.0000 0.0330 32.5000 0.0450 35.0000 0.0570 37.5000 0.0414 40.0000 0.0353 42.5000 0.0372 45.0000 0.0486 47.5000 0.0468 50.0000 0.0352 52.5000 0.0187 55.0000 0.0075 57.5000 0.0019 60.0000 0.0003 We next recalculate Example 15.1.6, above, using mgd rather than gend. Example $10$ Recalculation of Example 15.1.6 In Example 15.1.6, we have $g_N (s) = \dfrac{1}{5} (1 + s + s^2 + s^3 + s^4)$ $g_Y (s) = 0.1 (5s + 3s^2 + 2s^3)$ The means that the distribution for $Y$ is $Y =$ [1 2 3] and $PY =$ 0.1 * [5 3 2]. We use the same expression for $gN$ as in Example 15.1.6. gN = 0.2*ones(1,5); Y = 1:3; PY = 0.1*[5 3 2]; mgd Enter gen fn COEFFICIENTS for gN gN Enter VALUES for Y Y Enter PROBABILITIES for Y PY Values are in row matrix D; probabilities are in PD. To view the distribution, call for mD. disp(mD) 0 0.2000 1.0000 0.1000 2.0000 0.1100 3.0000 0.1250 4.0000 0.1155 5.0000 0.1110 6.0000 0.0964 7.0000 0.0696 8.0000 0.0424 9.0000 0.0203 10.0000 0.0075 11.0000 0.0019 12.0000 0.0003 P3 = (D==3)*PD' P3 = 0.1250 ED = D*PD' ED = 3.4000 P_4_12 = ((D>=4)&(D<=12))*PD' P_4_12 = 0.4650 P7 = (D>=7)*PD' P7 = 0.1421 As expected, the results are the same as those obtained with gend. If it is desired to obtain the joint distribution for $\{N, D\}$, we use a modification of mgd called jmgd. The complications come in placing the probabilities in the $P$ matrix in the desired positions. This requires some calculations to determine the appropriate size of the matrices used as well as a procedure to put each probability in the position corresponding to its $D$ value. Actual operation is quite similar to the operation of mgd, and requires the same data format. A principle use of the joint distribution is to demonstrate features of the model, such as $E[D|N = n] = nE[Y]$, etc. This, of course, is utilized in obtaining the expressions for $M_D (s)$ in terms of $g_N (s)$ and $M_Y (s)$. This result guides the development of the computational procedures, but these do not depend upon this result. However, it is usually helpful to demonstrate the validity of the assumptions in typical examples. Remark. In general, if the use of gend is appropriate, it is faster and more efficient than mgd (or jmgd). And it will handle somewhat larger problems. But both m-procedures work quite well for problems of moderate size, and are convenient tools for solving various “compound demand” type problems.
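As a footnote to the remark above (an added sketch, not one of the m-procedures), the convolution strategy behind gend can be written out directly with the built-in function conv. The few lines below recompute the distribution of Example 15.1.5; they omit the joint probability matrix $P$ and the removal of zero rows and columns, so they illustrate the idea rather than replace gend.

% Bare-bones version of the gend strategy, using the data of Example 15.1.5
gN = (1/3)*[0 1 1 1];            % P(N = 0), ..., P(N = 3)
gY = [0.5 0.4 0.1];              % P(Y = 0), P(Y = 1), P(Y = 2)
n = length(gN) - 1;              % largest value of N
m = length(gY) - 1;              % largest value of Y
PD = zeros(1,n*m + 1);           % probabilities for D = 0, ..., n*m
PD(1) = gN(1);                   % N = 0 puts all of its mass at D = 0
yk = 1;                          % coefficients of gY^0
for k = 1:n
   yk = conv(yk,gY);             % coefficients of gY^k
   PD(1:length(yk)) = PD(1:length(yk)) + gN(k+1)*yk;
end
disp([(0:n*m)' PD'])             % agrees with the gD table of Example 15.1.5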
In the unit on Random Selection, we develop some general theoretical results and computational procedures using MATLAB. In this unit, we extend the treatment to a variety of problems. We establish some useful theoretical results and in some cases use MATLAB procedures, including those in the unit on random selection.

The Poisson Decomposition

In many problems, the individual demands may be categorized in one of $m$ types. If the random variable $T_i$ is the type of the $i$th arrival and the class $\{T_i: 1 \le i\}$ is iid, we have multinomial trials. For $m = 2$ we have the Bernoulli or binomial case, in which one type is called a success and the other a failure.

Multinomial trials

We analyze such a sequence of trials as follows. Suppose there are $m$ types, which we number 1 through $m$. Let $E_{ki}$ be the event that type $k$ occurs on the $i$th component trial. For each $i$, the class $\{E_{ki}: 1 \le k \le m\}$ is a partition, since on each component trial exactly one of the types will occur. The type on the $i$th trial may be represented by the type random variable

$T_i = \sum_{k = 1}^{m} kI_{E_{ki}}$

We assume

$\{T_i: 1 \le i\}$ is iid, with $P(T_i = k) = P(E_{ki}) = p_k$ invariant with $i$

In a sequence of $n$ trials, we let $N_{kn}$ be the number of occurrences of type $k$. Then

$N_{kn} = \sum_{i = 1}^{n} I_{E_{ki}}$ with $\sum_{k = 1}^{m} N_{kn} = n$

Now each $N_{kn}$ ~ binomial ($n, p_k$). The class $\{N_{kn}: 1 \le k \le m\}$ cannot be independent, since it sums to $n$. If the values of $m - 1$ of them are known, the value of the other is determined. If $n_1 + n_2 + \cdot\cdot\cdot + n_m = n$, the event

$\{N_{1n} = n_1, N_{2n} = n_2, \cdot\cdot\cdot, N_{mn} = n_m\}$

is one of the $C(n; n_1, n_2, \cdot\cdot\cdot, n_m) = n!/(n_1! n_2! \cdot\cdot\cdot n_m!)$ ways of arranging $n_1$ of the $E_{1i}$, $n_2$ of the $E_{2i}$, $\cdot\cdot\cdot$, $n_m$ of the $E_{mi}$. Each such arrangement has probability $p_{1}^{n_1} p_{2}^{n_2} \cdot\cdot\cdot p_{m}^{n_m}$, so that

$P(N_{1n} = n_1, N_{2n} = n_2, \cdot\cdot\cdot N_{mn} = n_m) = n! \prod_{k = 1}^{m} \dfrac{p_{k}^{n_k}}{n_k !}$

This set of joint probabilities constitutes the multinomial distribution. For $m = 2$, and type 1 a success, this is the binomial distribution with parameters $(n, p_1)$.

A random number of multinomial trials

We consider, in particular, the case of a random number $N$ of multinomial trials, where $N$ ~ Poisson $(\mu)$. Let $N_k$ be the number of results of type $k$ in a random number $N$ of multinomial trials.

$N_k = \sum_{i = 1}^{N} I_{E_{ki}} = \sum_{n = 1}^{\infty} I_{\{N = n\}} N_{kn}$ with $\sum_{k = 1}^{m} N_k = N$

Poisson decomposition

Suppose

$N$ ~ Poisson ($\mu$)
$\{T_i: 1 \le i\}$ is iid with $P(T_i = k) = p_k$, $1 \le k \le m$
$\{N, T_i : 1 \le i\}$ is independent

Then

Each $N_k$ ~ Poisson ($\mu p_k$)
$\{N_k: 1 \le k \le m\}$ is independent. — □

The usefulness of this remarkable result is enhanced by the fact that the sum of independent Poisson random variables is also Poisson, with the parameter for the sum equal to the sum of the $\mu_i$ for the variables added. This is readily established with the aid of the generating function. Before verifying the propositions above, we consider some examples.

Example $1$ A shipping problem

The number $N$ of orders per day received by a mail order house is Poisson (300). Orders are shipped by next day express, by second day priority, or by regular parcel mail. Suppose 4/10 of the customers want next day express, 5/10 want second day priority, and 1/10 require regular mail.
Make the usual assumptions on compound demand. What is the probability that fewer than 150 want next day express? What is the probability that fewer than 300 want one or the other of the two faster deliveries?

Solution

Model as a random number of multinomial trials, with three outcome types: Type 1 is next day express, Type 2 is second day priority, and Type 3 is regular mail, with respective probabilities $p_1 = 0.4$, $p_2 = 0.5$, and $p_3 = 0.1$. Then $N_1$ ~ Poisson $(0.4 \cdot 300 = 120)$, $N_2$ ~ Poisson $(0.5 \cdot 300 = 150)$, and $N_3$ ~ Poisson $(0.1 \cdot 300 = 30)$. Also $N_1 + N_2$ ~ Poisson (120 + 150 = 270).

P1 = 1 - cpoisson(120,150)
P1 = 0.9954
P12 = 1 - cpoisson(270,300)
P12 = 0.9620

Example $2$ Message routing

A junction point in a network has two incoming lines and two outgoing lines. The number of incoming messages $N_1$ on line one in one hour is Poisson (50); on line 2 the number is $N_2$ ~ Poisson (45). On incoming line 1 the messages have probability $p_{1a} = 0.33$ of leaving on outgoing line a and $1 - p_{1a}$ of leaving on line b. The messages coming in on line 2 have probability $p_{2a} = 0.47$ of leaving on line a. Under the usual independence assumptions, what is the distribution of outgoing messages on line a? What are the probabilities of at least 30, 35, 40 outgoing messages on line a?

Solution

By the Poisson decomposition, $N_a$ ~ Poisson $(50 \cdot 0.33 + 45 \cdot 0.47 = 37.65)$.

ma = 50*0.33 + 45*0.47
ma = 37.6500
Pa = cpoisson(ma,30:5:40)
Pa = 0.9119    0.6890    0.3722

VERIFICATION of the Poisson decomposition

$N_k = \sum_{i = 1}^{N} I_{E_{ki}}$. This is composite demand with $Y_k = I_{E_{ki}}$, so that $g_{Y_k} (s) = q_k + sp_k = 1 + p_k (s - 1)$. Therefore,

$g_{N_k} (s) = g_N [g_{Y_k} (s)] = e^{\mu [g_{Y_k} (s) - 1]} = e^{\mu p_k (s - 1)}$

which is the generating function for $N_k$ ~ Poisson $(\mu p_k)$.

For any $n_1$, $n_2$, $\cdot\cdot\cdot$, $n_m$, let $n = n_1 + n_2 + \cdot\cdot\cdot + n_m$, and consider

$A = \{N_1 = n_1, N_2 = n_2, \cdot\cdot\cdot, N_m = n_m\} = \{N = n\} \cap \{N_{1n} = n_1, N_{2n} = n_2, \cdot\cdot\cdot, N_{mn} = n_m\}$

Since $N$ is independent of the class of $I_{E_{ki}}$, the class

$\{\{N = n\}, \{N_{1n} = n_1, N_{2n} = n_2, \cdot\cdot\cdot, N_{mn} = n_m\}\}$

is independent. By the product rule and the multinomial distribution

$P(A) = e^{-\mu} \dfrac{\mu^n}{n!} \cdot n! \prod_{k = 1}^{m} \dfrac{p_{k}^{n_k}}{(n_k)!} = \prod_{k = 1}^{m} e^{-\mu p_k} \dfrac{(\mu p_k)^{n_k}}{n_k !} = \prod_{k = 1}^{m} P(N_k = n_k)$

The second product uses the fact that

$e^{-\mu} = e^{-\mu (p_1 + p_2 + \cdot\cdot\cdot + p_m)} = \prod_{k = 1}^{m} e^{-\mu p_k}$

Thus, the product rule holds for the class $\{N_k: 1 \le k \le m\}$, which establishes the independence.

Extreme values

Consider an iid class $\{Y_i: 1 \le i\}$ of nonnegative random variables. For any positive integer $n$ we let

$V_n = \text{min } \{Y_1, Y_2, \cdot\cdot\cdot, Y_n\}$ and $W_n = \text{max } \{Y_1, Y_2, \cdot\cdot\cdot, Y_n\}$

Then

$P(V_n > t) = P^n (Y > t)$ and $P(W_n \le t) = P^n (Y \le t)$

Now consider a random number $N$ of the $Y_i$. The minimum and maximum random variables are

$V_N = \sum_{n = 0}^{\infty} I_{\{N = n\}} V_n$ and $W_N = \sum_{n = 0}^{\infty} I_{\{N = n\}} W_n$ — □

Computational formulas

If we set $V_0 = W_0 = 0$, then

$F_V (t) = P(V \le t) = 1 + P(N = 0) - g_N [P(Y > t)]$

$F_W (t) = g_N [P(Y \le t)]$

These results are easily established as follows. $\{V_N > t\} = \bigvee_{n = 0}^{\infty} \{N = n\} \cap \{V_n > t\}$.
By additivity and independence of $\{N, V_n\}$ for each $n$

$P(V_N > t) = \sum_{n = 0}^{\infty} P(N = n) P(V_n > t) = \sum_{n = 1}^{\infty} P(N = n) P^n (Y > t)$, since $P(V_0 > t) = 0$

If we add into the last sum the term $P(N = 0) P^0 (Y > t) = P(N = 0)$ then subtract it, we have

$P(V_N > t) = \sum_{n = 0}^{\infty} P(N = n) P^n (Y > t) - P(N = 0) = g_N [P(Y > t)] - P(N = 0)$

A similar argument holds for proposition (b). In this case, we do not have the extra term for $\{N = 0\}$, since $P(W_0 \le t) = 1$.

Special case. In some cases, $N = 0$ does not correspond to an admissible outcome (see Example 15.2.4, below, on the lowest bidder, and Example 15.2.6). In that case

$F_V (t) = \sum_{n = 1}^{\infty} P(V_n \le t) P(N = n) = \sum_{n = 1}^{\infty} [1 - P^n (Y > t)] P(N = n) = \sum_{n = 1}^{\infty} P(N = n) - \sum_{n = 1}^{\infty} P^n (Y > t) P(N = n)$

Add $P(N = 0) = P^0 (Y > t) P(N = 0)$ to each of the sums to get

$F_V (t) = 1 - \sum_{n = 0}^{\infty} P^n (Y > t) P (N = n) = 1 - g_N [P(Y > t)]$ — □

Example $3$ Maximum service time

The number $N$ of jobs coming into a service center in a week is a random quantity having a Poisson (20) distribution. Suppose the service times (in hours) for individual units are iid, with common distribution exponential (1/3). What is the probability the maximum service time for the units is no greater than 6, 9, 12, 15, 18 hours?

Solution

$P(W_N \le t) = g_N [P(Y \le t)] = e^{20[F_Y (t) - 1]} = \text{exp} (-20e^{-t/3})$

t = 6:3:18;
PW = exp(-20*exp(-t/3));
disp([t;PW]')
    6.0000    0.0668
    9.0000    0.3694
   12.0000    0.6933
   15.0000    0.8739
   18.0000    0.9516

Example $4$ Lowest Bidder

A manufacturer seeks bids on a modification of one of his processing units. Twenty contractors are invited to bid. They bid with probability 0.3, so that the number of bids $N$ ~ binomial (20,0.3). Assume the bids $Y_i$ (in thousands of dollars) form an iid class. The market is such that the bids have a common distribution symmetric triangular on (150,250). What is the probability of at least one bid no greater than 170, 180, 190, 200, 210? Note that when no bid is made there is no low bid of zero, so we must use the special case.

Solution

$P(V \le t) = 1 - g_N [P(Y > t)] = 1 - (0.7 + 0.3p)^{20}$ where $p = P(Y > t)$

Solving graphically for $p = P(Y > t)$, we get

$p =$ [23/25 41/50 17/25 1/2 8/25] for $t =$ [170 180 190 200 210]

Now $g_N (s) = (0.7 + 0.3s)^{20}$. We use MATLAB to obtain

t = [170 180 190 200 210];
p = [23/25 41/50 17/25 1/2 8/25];
PV = 1 - (0.7 + 0.3*p).^20;
disp([t;p;PV]')
  170.0000    0.9200    0.3848
  180.0000    0.8200    0.6705
  190.0000    0.6800    0.8671
  200.0000    0.5000    0.9612
  210.0000    0.3200    0.9896

Example $5$ Example 15.2.4 with a general counting variable

Suppose the number of bids is 1, 2 or 3 with probabilities 0.3, 0.5, 0.2, respectively. Determine $P(V \le t)$ in each case.

Solution

The minimum of the selected $Y$'s is no greater than $t$ if and only if there is at least one $Y$ less than or equal to $t$. We determine in each case probabilities for the number of bids satisfying $Y \le t$. For each $t$, we are interested in the probability of one or more occurrences of the event $Y \le t$. This is essentially the problem in Example 7 from "Random Selection", with probability $p = P(Y \le t)$.
t = [170 180 190 200 210];
p = [23/25 41/50 17/25 1/2 8/25];    % Probabilities Y <= t are 1 - p
gN = [0 0.3 0.5 0.2];                % Zero for missing value
PV = zeros(1,length(t));
for i=1:length(t)
    gY = [p(i),1 - p(i)];
    [d,pd] = gendf(gN,gY);
    PV(i) = (d>0)*pd';               % Selects positions for d > 0 and
end                                  % adds corresponding probabilities
disp([t;PV]')
  170.0000    0.1451
  180.0000    0.3075
  190.0000    0.5019
  200.0000    0.7000
  210.0000    0.8462

Example 15.2.4 may be worked in this manner by using gN = ibinom(20,0.3,0:20). The results, of course, are the same as in the previous solution. The fact that the probabilities in this example are lower for each t than in Example 15.2.4 reflects the fact that there are probably fewer bids in each case.

Example $6$ Batch testing

Electrical units from a production line are first inspected for operability. However, experience indicates that a fraction $p$ of those passing the initial operability test are defective. All operable units are subsequently tested in a batch under continuous operation (a "burn in" test). Statistical data indicate the defective units have times to failure $Y_i$ iid, exponential $(\lambda)$, whereas good units have very long life (infinite from the point of view of the test). A batch of $n$ units is tested. Let $V$ be the time of the first failure and $N$ be the number of defective units in the batch. If the test goes $t$ units of time with no failure (i.e., $V > t$), what is the probability of no defective units?

Solution

Since no defective units implies no failures in any reasonable test time, we have

$\{N = 0\} \subset \{V > t \}$ so that $P(N = 0|V > t) = \dfrac{P(N = 0)}{P(V > t)}$

Since $N = 0$ does not yield a minimum value, we have $P(V > t) = g_N [P(Y > t)]$. Now under the condition above, the number of defective units $N$ ~ binomial ($n, p$), so that $g_N (s) = (q + ps)^n$. If $n$ is large and $p$ is reasonably small, $N$ is approximately Poisson $(np)$ with $g_N (s) = e^{np (s - 1)}$ and $P(N = 0) = e^{-np}$. Now $P(Y > t) = e^{-\lambda t}$; for large $n$

$P(N = 0|V > t) = \dfrac{e^{-np}}{e^{np[P(Y > t) - 1]}} = e^{-np P(Y >t)} = e^{-np e^{-\lambda t}}$

For $n = 5000$, $p = 0.001$, $\lambda = 2$, and $t = 1, 2, 3, 4, 5$, MATLAB calculations give

t = 1:5;
n = 5000;
p = 0.001;
lambda = 2;
P = exp(-n*p*exp(-lambda*t));
disp([t;P]')
    1.0000    0.5083
    2.0000    0.9125
    3.0000    0.9877
    4.0000    0.9983
    5.0000    0.9998

It appears that a test of three to five hours should give reliable results. In actually designing the test, one should probably make calculations with a number of different assumptions on the fraction of defective units and the life duration of defective units. These calculations are relatively easy to make with MATLAB.

Bernoulli trials with random execution times or costs

Consider a Bernoulli sequence with probability $p$ of success on any component trial. Let $N$ be the number of the trial on which the first success occurs. Let $Y_i$ be the time (or cost) to execute the $i$th trial. Then the total time (or cost) from the beginning to the completion of the first success is

$T = \sum_{i = 1}^{N} Y_i$ (composite "demand" with $N - 1$ ~ geometric ($p$))

We suppose the $Y_i$ form an iid class, independent of $N$. Now $N - 1$ ~ geometric ($p$) implies $g_N (s) = ps/(1 - qs)$, so that

$M_T (s) = g_N [M_Y (s)] = \dfrac{pM_Y (s)}{1 - qM_Y (s)}$

There are two useful special cases:

$Y_i$ ~ exponential $(\lambda)$, so that $M_Y (s) = \dfrac{\lambda}{\lambda - s}$, in which case

$M_T (s) = \dfrac{p\lambda/(\lambda - s)}{1 - q\lambda/(\lambda - s)} = \dfrac{p\lambda}{p\lambda - s}$

which implies $T$ ~ exponential ($p \lambda$).
$Y_i - 1$ ~ geometric $(p_0)$, so that $g_Y (s) = \dfrac{p_0 s}{1 - q_0 s}$, in which case

$g_T (s) = \dfrac{p p_0 s/(1 - q_0 s)}{1 - q p_0 s/(1 - q_0 s)} = \dfrac{p p_0 s}{1 - (1 - p p_0) s}$

so that $T - 1$ ~ geometric $(pp_0)$.

Example $7$ Job interviews

Suppose a prospective employer is interviewing candidates for a job from a pool in which twenty percent are qualified. Interview times (in hours) $Y_i$ are presumed to form an iid class, each exponential (3). Thus, the average interview time is 1/3 hour (twenty minutes). We take the probability for success on any interview to be $p = 0.2$. What is the probability a satisfactory candidate will be found in four hours or less? What is the probability the maximum interview time will be no greater than 0.5, 0.75, 1, 1.25, 1.5 hours?

Solution

$T$ ~ exponential ($0.2 \cdot 3 = 0.6$), so that $P(T \le 4) = 1 - e^{-0.6 \cdot 4} = 0.9093$.

$P(W \le t) = g_N [P(Y \le t)] = \dfrac{0.2 (1 - e^{-3t})}{1 - 0.8 (1 - e^{-3t})} = \dfrac{1 - e^{-3t}}{1 + 4e^{-3t}}$

MATLAB computations give

t = 0.5:0.25:1.5;
PWt = (1 - exp(-3*t))./(1 + 4*exp(-3*t));
disp([t;PWt]')
    0.5000    0.4105
    0.7500    0.6293
    1.0000    0.7924
    1.2500    0.8925
    1.5000    0.9468

The average interview time is 1/3 hour; with probability 0.63 the maximum is 3/4 hour or less; with probability 0.79 the maximum is one hour or less; etc.

In the general case, solving for the distribution of $T$ requires transform theory, and may be handled best by a program such as Maple or Mathematica. For the case of simple $Y_i$ we may use approximation procedures based on properties of the geometric series. Since $N - 1$ ~ geometric $(p)$,

$g_N (s) = \dfrac{ps}{1 - qs} = ps \sum_{k = 0}^{\infty} (qs)^k = ps [\sum_{k = 0}^{n} (qs)^k + \sum_{k = n + 1}^{\infty} (qs)^k] = ps[\sum_{k = 0}^{n} (qs)^k + (qs)^{n + 1} \sum_{k = 0}^{\infty} (qs)^k]$

$= ps[\sum_{k = 0}^{n} (qs)^k] + (qs)^{n + 1} g_N (s) = g_n (s) + (qs)^{n + 1} g_N (s)$

Note that $g_n (s)$ has the form of the generating function for a simple approximation $N_n$ which matches values and probabilities with $N$ up to $k = n$. Now

$g_T (s) = g_n[g_Y (s)] + [q g_Y (s)]^{n + 1} g_N [g_Y (s)]$

The evaluation involves convolution of coefficients which effectively sets $s = 1$. Since $g_N (1) = g_Y (1) = 1$, the term $[q g_Y (s)]^{n + 1} g_N [g_Y (s)]$ for $s = 1$ reduces to $q^{n + 1} = P(N > n)$, which is negligible if $n$ is large enough. Suitable $n$ may be determined in each case. With such an $n$, if the $Y_i$ are nonnegative, integer-valued, we may use the gend procedure on $g_n [g_Y (s)]$, where

$g_n (s) = ps + pqs^2 + pq^2s^3 + \cdot\cdot\cdot + pq^n s^{n + 1}$

For the integer-valued case, as in the general case of simple $Y_i$, we could use mgd. However, gend is usually faster and more efficient for the integer-valued case. Unless $q$ is small, the number of terms needed to approximate $g_n$ is likely to be too great.

Example $8$ Approximating the generating function

Let $p = 0.3$ and $Y$ be uniformly distributed on $\{1, 2, \cdot\cdot\cdot, 10\}$. Determine the distribution for

$T = \sum_{k = 1}^{N} Y_k$

Solution

p = 0.3;
q = 1 - p;
a = [30 35 40];        % Check for suitable n
b = q.^a
b = 1.0e-04 *          % Use n = 40
    0.2254    0.0379    0.0064
n = 40;
k = 1:n;
gY = 0.1*[0 ones(1,10)];
gN = p*[0 q.^(k-1)];   % Probabilities, 0 <= k <= 40
gend
Do not forget zero coefficients for missing powers
Enter gen fn COEFFICIENTS for gN  gN
Enter gen fn COEFFICIENTS for gY  gY
Values are in row matrix D; probabilities are in PD.
To view the distribution, call for gD.
sum(PD) % Check sum of probabilities ans = 1.0000 FD = cumsum(PD); % Distribution function for D plot(0:100,FD(1:101)) % See Figure 15.2.1 P50 = (D<=50)*PD' P50 = 0.9497 P30 = (D<=30)*PD' P30 = 0.8263 Figure 15.2.1. Execution Time Distribution Function $F_D$. The same results may be achieved with mgd, although at the cost of more computing time. In that case, use $gN$ as in Example 15.2.8, but use the actual distribution for $Y$. Arrival times and counting processes Suppose we have phenomena which take place at discrete instants of time, separated by random waiting or interarrival times. These may be arrivals of customers in a store, of noise pulses on a communications line, vehicles passing a position on a road, the failures of a system, etc. We refer to these occurrences as arrivals and designate the times of occurrence as arrival times. A stream of arrivals may be described in three equivalent ways. • Arrival times: $\{S_n: 0 \le n\}$, with $0 = S_0 < S_1 < \cdot\cdot\cdot$ a.s. (basic sequence) • Interarrival times: $\{W_i: 1 \le i\}$, with each $W_i > 0$ a.s. (incremental sequence) The strict inequalities imply that with probability one there are no simultaneous arrivals. The relations between the two sequences are simply $S_0 = 0$, $S_n = \sum_{i = 1}^{n} W_i$ and $W_n = S_n - S_{n - 1}$ for all $n \ge 1$ The formulation indicates the essential equivalence of the problem with that of the compound demand. The notation and terminology are changed to correspond to that customarily used in the treatment of arrival and counting processes. The stream of arrivals may be described in a third way. • Counting processes: $N_t = N(t)$ is the number of arrivals in time period $(0, t]$. It should be clear that this is a random quantity for each nonnegative $t$. For a given $t, \omega$ the value is $N (t, \omega)$. Such a family of random variables constitutes a random process. In this case the random process is a counting process. We thus have three equivalent descriptions for the stream of arrivals. $\{S_n: 0 \le n\}$ $\{W_n: 1 \le n\}$ $\{N_t: 0 \le t\}$ Several properties of the counting process $N$ should be noted: $N(t + h) - N(t)$ counts the arrivals in the interval $(t, t + h]$, $h > 0$, so that $N(t + h) \ge N(t)$ for $h > 0$. $N_0 = 0$ and for $t >0$ we have $N_t = \sum_{i = 1}^{\infty} I_{(0, t]} (S_i) = \text{max } \{n: S_n \le t\} = \text{min } \{n: S_{n + 1} > t\}$ For any given $\omega$, $N(\cdot, \omega)$ is a nondecreasing, right-continuous, integer-valued function defined on $[0, \infty)$, with $N(0, \omega) = 0$. The essential relationships between the three ways of describing the stream of arrivals is displayed in $W_n = S_n - S_{n - 1}$, $\{N_t \ge n\} = \{S_n \le t\}$, $\{N_t = n\} = \{S_n \le t < S_{n + 1}\}$ This imples $P(N_t = n) = P(S_n \le t) - P(S_{n + 1} \le t) = P(S_{n + 1} > t) - P(S_n > t)$ Although there are many possibilities for the interarrival time distributions, we assume $\{W_i: 1 \le i\}$ is iid, with $W_i > 0$ a.s. Under such assumptions, the counting process is often referred to as a renewal process and the interrarival times are called renewal times. In the literature on renewal processes, it is common for the random variable to count an arrival at $t = 0$. This requires an adjustment of the expressions relating $N_t$ and the $S_i$. We use the convention above. Exponential iid interarrival times The case of exponential interarrival times is natural in many applications and leads to important mathematical results. 
We utilize the following propositions about the arrival times $S_n$, the interarrival times $W_i$, and the counting process $N$.

If $\{W_i: 1 \le i\}$ is iid exponential ($\lambda$), then $S_n$ ~ gamma $(n, \lambda)$ for all $n \ge 1$. This is worked out in the unit on TRANSFORM METHODS, in the discussion of the connection between the gamma distribution and the exponential distribution.

$S_n$ ~ gamma $(n, \lambda)$ for all $n \ge 1$, and $S_0 = 0$, iff $N_t$ ~ Poisson $(\lambda t)$ for all $t > 0$. This follows the result in the unit DISTRIBUTION APPROXIMATIONS on the relationship between the Poisson and gamma distributions, along with the fact that $\{N_t \ge n\} = \{S_n \le t\}$.

Remark. The counting process is a Poisson process in the sense that $N_t$ ~ Poisson ($\lambda t$) for all $t > 0$. More advanced treatments show that the process has independent, stationary increments. That is

$N(t + h) - N(t)$ has the same distribution as $N(h)$ for all $t, h > 0$, and

For $t_1 < t_2 \le t_3 < t_4 \le \cdot\cdot\cdot \le t_{m - 1} < t_m$, the class $\{N(t_2) - N(t_1), N(t_4) - N(t_3), \cdot\cdot\cdot, N(t_m) - N(t_{m -1})\}$ is independent.

In words, the number of arrivals in any time interval depends upon the length of the interval and not its location in time, and the numbers of arrivals in nonoverlapping time intervals are independent.

Example $9$ Emergency calls

Emergency calls arrive at a police switchboard with interarrival times (in hours) exponential (15). Thus, the average interarrival time is 1/15 hour (four minutes). What is the probability the number of calls in an eight hour shift is no more than 100, 120, 140?

p = 1 - cpoisson(8*15,[101 121 141])
p = 0.0347    0.5243    0.9669

We develop next a simple computational result for arrival processes for which $S_n$ ~ gamma $(n, \lambda)$.

Example $10$ Gamma arrival times

Suppose the arrival times $S_n$ ~ gamma ($n, \lambda$) and $g$ is such that

$\int_{0}^{\infty} |g| < \infty$ and $E[\sum_{n = 1}^{\infty} |g(S_n)|] < \infty$

Then

$E[\sum_{n = 1}^{\infty} g(S_n)] = \lambda \int_{0}^{\infty} g$

VERIFICATION

We use the countable sums property (E8b) for expectation and the corresponding property for integrals to assert

$E[\sum_{n = 1}^{\infty} g(S_n)] = \sum_{n = 1}^{\infty} E[g(S_n)] = \sum_{n = 1}^{\infty} \int_{0}^{\infty} g(t) f_n (t)\ dt$ where $f_n (t) = \dfrac{\lambda e^{-\lambda t} (\lambda t)^{n - 1}}{(n - 1)!}$

We may apply (E8b) to assert

$\sum_{n = 1}^{\infty} \int_{0}^{\infty} gf_n = \int_{0}^{\infty} g \sum_{n = 1}^{\infty} f_n$

Since

$\sum_{n = 1}^{\infty} f_n (t) = \lambda e^{-\lambda t} \sum_{n = 1}^{\infty} \dfrac{(\lambda t)^{n - 1}}{(n - 1)!} = \lambda e^{-\lambda t} e^{\lambda t} = \lambda$

the proposition is established.

Example $11$ Discounted replacement costs

A critical unit in a production system has life duration exponential $(\lambda)$. Upon failure the unit is replaced immediately by a similar unit. Units fail independently. Cost of replacement of a unit is $c$ dollars. If money is discounted at a rate $\alpha$, then a dollar spent $t$ units of time in the future has a current value $e^{-\alpha t}$.
If $S_n$ is the time of replacement of the $n$th unit, then $S_n$ ~ gamma $(n, \lambda)$ and the present value of all future replacements is

$C = \sum_{n = 1}^{\infty} ce^{-\alpha S_n}$

The expected replacement cost is

$E[C] = E[\sum_{n =1}^{\infty} g(S_n)]$ where $g(t) = ce^{-\alpha t}$

Hence

$E[C] = \lambda \int_{0}^{\infty} ce^{-\alpha t} \ dt = \dfrac{\lambda c}{\alpha}$

Suppose unit replacement cost $c = 1200$, average time (in years) to failure $1/\lambda = 1/4$, and the discount rate per year $\alpha = 0.08$ (eight percent). Then

$E[C] = \dfrac{1200 \cdot 4}{0.08} = 60,000$

Example $12$ Random costs

Suppose the cost of the $n$th replacement in Example 15.2.11 is a random quantity $C_n$, with $\{C_n, S_n\}$ independent and $E[C_n] = c$, invariant with $n$. Then

$E[C] = E[\sum_{n = 1}^{\infty} C_n e^{-\alpha S_n}] = \sum_{n = 1}^{\infty} E[C_n] E[e^{-\alpha S_n}] = \sum_{n = 1}^{\infty} cE[e^{-\alpha S_n}] = \dfrac{\lambda c}{\alpha}$

The analysis to this point assumes the process will continue endlessly into the future. Often, it is desirable to plan for a specific, finite period. The result of Example 15.2.10 may be modified easily to account for a finite period, often referred to as a finite horizon.

Example $13$ Finite horizon

Under the conditions assumed in Example 15.2.10, above, let $N_t$ be the counting random variable for arrivals in the interval $(0, t]$. If

$Z_t = \sum_{n = 1}^{N_t} g(S_n)$, then $E[Z_t] = \lambda \int_{0}^{t} g(u)\ du$

VERIFICATION

Since $N_t \ge n$ iff $S_n \le t$, $\sum_{n = 1}^{N_t} g(S_n) = \sum_{n = 0}^{\infty} I_{(0, t]} (S_n) g(S_n)$. In the result of Example 15.2.10, replace $g$ by $I_{(0, t]} g$ and note that

$\int_{0}^{\infty} I_{(0, t]} (u) g(u)\ du = \int_{0}^{t} g(u)\ du$

Example $14$ Replacement costs, finite horizon

Under the condition of Example 15.2.11, consider the replacement costs over a two-year period.

Solution

$E[C] = \lambda c\int_{0}^{t} e^{-\alpha u} \ du = \dfrac{\lambda c}{\alpha} (1 - e^{-\alpha t})$

Thus, the expected cost for the infinite horizon $\lambda c/ \alpha$ is reduced by the factor $1 - e^{-\alpha t}$. For $t = 2$ and the numbers in Example 15.2.11, the reduction factor is $1 - e^{-0.16} = 0.1479$ to give $E[C] = 60000 \cdot 0.1479 = 8871.37$.

In the important special case that $g(u) = ce^{-\alpha u}$, the expression for $E[\sum_{n = 1}^{\infty} g(S_n)]$ may be put into a form which does not require the interarrival times to be exponential.

Example $15$ General interarrival, exponential g

Suppose $S_0 = 0$ and $S_n = \sum_{i = 1}^{n} W_i$, where $\{W_i: 1 \le i\}$ is iid. Let $\{V_n: 1 \le n\}$ be a class such that each $E[V_n] = c$ and each pair $\{V_n, S_n\}$ is independent. Then for $\alpha > 0$

$E[C] = E[\sum_{n = 1}^{\infty} V_n e^{-\alpha S_n}] = c \cdot \dfrac{M_W (-\alpha)}{1 - M_W (-\alpha)}$

where $M_W$ is the moment generating function for $W$.

DERIVATION

First we note that

$E[V_n e^{-\alpha S_n}] = cM_{S_n} (-\alpha) = cM_W^n (-\alpha)$

Hence, by properties of expectation and the geometric series

$E[C] = c \sum_{n =1}^{\infty} M_W^n (- \alpha) = c \cdot \dfrac{M_W (-\alpha)}{1 - M_W (-\alpha)}$, provided $|M_W (-\alpha)| < 1$

Since $\alpha > 0$ and $W > 0$, we have $0 < e^{-\alpha W} < 1$, so that $M_W (-\alpha) = E[e^{-\alpha W}] < 1$

Example $16$ Uniformly distributed interarrival times

Suppose each $W_i$ ~ uniform $(a, b)$.
Then (see Appendix C),

$M_W (-\alpha) = \dfrac{e^{-a \alpha} - e^{-b \alpha}}{\alpha (b - a)}$ so that $E[C] = c \cdot \dfrac{e^{-a \alpha} - e^{-b \alpha}}{\alpha (b - a) - [e^{-a \alpha} - e^{-b \alpha}]}$

Let $a = 1$, $b = 5$, $c = 100$ and $\alpha = 0.08$. Then,

a = 1;
b = 5;
c = 100;
A = 0.08;
MW = (exp(-a*A) - exp(-b*A))/(A*(b - a))
MW = 0.7900
EC = c*MW/(1 - MW)
EC = 376.1643
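As a cross-check (an added note, not in the original text), the formula of Example 15.2.15 recovers the infinite-horizon result of Example 15.2.11 when the interarrival times are exponential $(\lambda)$: in that case $M_W (-\alpha) = \lambda/(\lambda + \alpha)$, so that $c M_W(-\alpha)/(1 - M_W(-\alpha)) = \lambda c/\alpha$. With the numbers of Example 15.2.11:

c = 1200; lambda = 4; alpha = 0.08;   % data of Example 15.2.11
MW = lambda/(lambda + alpha);         % M_W(-alpha) for W ~ exponential(lambda)
EC = c*MW/(1 - MW)                    % = 60000, agreeing with lambda*c/alpha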
Exercise $1$ (See Exercise 3 from "Problems on Random Variables and Joint Distributions") A die is rolled. Let $X$ be the number of spots that turn up. A coin is flipped $X$ times. Let $Y$ be the number of heads that turn up. Determine the distribution for $Y$. Answer PX = [0 (1/6)*ones(1,6)]; PY = [0.5 0.5]; gend Do not forget zero coefficients for missing powers Enter gen fn COEFFICIENTS for gN PX Enter gen fn COEFFICIENTS for gY PY Results are in N, PN, Y, PY, D, PD, P May use jcalc or jcalcf on N, D, P To view the distribution, call for gD. disp(gD) % Compare with P8-3 0 0.1641 1.0000 0.3125 2.0000 0.2578 3.0000 0.1667 4.0000 0.0755 5.0000 0.0208 6.0000 0.0026 Exercise $2$ (See Exercise 4 from "Problems on Random Variables and Joint Distributions") As a variation of Exercise 15.3.1, suppose a pair of dice is rolled instead of a single die. Determine the distribution for $Y$. Answer PN = (1/36)*[0 0 1 2 3 4 5 6 5 4 3 2 1]; PY = [0.5 0.5]; gend Do not forget zero coefficients for missing powers Enter gen fn COEFFICIENTS for gN PN Enter gen fn COEFFICIENTS for gY PY Results are in N, PN, Y, PY, D, PD, P May use jcalc or jcalcf on N, D, P To view the distribution, call for gD. disp(gD) 0 0.0269 1.0000 0.1025 2.0000 0.1823 3.0000 0.2158 4.0000 0.1954 5.0000 0.1400 6.0000 0.0806 7.0000 0.0375 8.0000 0.0140 % (Continued next page) 9.0000 0.0040 10.0000 0.0008 11.0000 0.0001 12.0000 0.0000 Exercise $3$ (See Exercise 5 from "Problems on Random Variables and Joint Distributions") Suppose a pair of dice is rolled. Let $X$ be the total number of spots which turn up. Roll the pair an additional $X$ times. Let $Y$ be the number of sevens that are thrown on the $X$ rolls. Determine the distribution for $Y$. What is the probability of three or more sevens? Answer PX = (1/36)*[0 0 1 2 3 4 5 6 5 4 3 2 1]; PY = [5/6 1/6]; gend Do not forget zero coefficients for missing powers Enter gen fn COEFFICIENTS for gN PX Enter gen fn COEFFICIENTS for gY PY Results are in N, PN, Y, PY, D, PD, P May use jcalc or jcalcf on N, D, P To view the distribution, call for gD. disp(gD) 0 0.3072 1.0000 0.3660 2.0000 0.2152 3.0000 0.0828 4.0000 0.0230 5.0000 0.0048 6.0000 0.0008 7.0000 0.0001 8.0000 0.0000 9.0000 0.0000 10.0000 0.0000 11.0000 0.0000 12.0000 0.0000 P = (D>=3)*PD' P = 0.1116 Exercise $4$ (See Example 7 from "Conditional Expectation, Regression") A number $X$ is chosen by a random selection from the integers 1 through 20 (say by drawing a card from a box). A pair of dice is thrown $X$ times. Let $Y$ be the number of “matches” (i.e., both ones, both twos, etc.). Determine the distribution for $Y$. Answer gN = (1/20)*[0 ones(1,20)]; gY = [5/6 1/6]; gend Do not forget zero coefficients for missing powers Enter gen fn COEFFICIENTS for gN gN Enter gen fn COEFFICIENTS for gY gY Results are in N, PN, Y, PY, D, PD, P May use jcalc or jcalcf on N, D, P To view the distribution, call for gD. disp(gD) 0 0.2435 1.0000 0.2661 2.0000 0.2113 3.0000 0.1419 4.0000 0.0795 5.0000 0.0370 6.0000 0.0144 7.0000 0.0047 8.0000 0.0013 9.0000 0.0003 10.0000 0.0001 11.0000 0.0000 12.0000 0.0000 13.0000 0.0000 14.0000 0.0000 15.0000 0.0000 16.0000 0.0000 17.0000 0.0000 18.0000 0.0000 19.0000 0.0000 20.0000 0.0000 Exercise $5$ (See Exercise 20 from "Problems on Conditional Expectation, Regression") A number $X$ is selected randomly from the integers 1 through 100. A pair of dice is thrown $X$ times. Let $Y$ be the number of sevens thrown on the $X$ tosses. Determine the distribution for $Y$. Determine $E[Y]$ and $P(Y \le 20)$. 
Answer gN = 0.01*[0 ones(1,100)]; gY = [5/6 1/6]; gend Do not forget zero coefficients for missing powers Enter gen fn COEFFICIENTS for gN gN Enter gen fn COEFFICIENTS for gY gY Results are in N, PN, Y, PY, D, PD, P May use jcalc or jcalcf on N, D, P To view the distribution, call for gD. EY = dot(D,PD) EY = 8.4167 P20 = (D<=20)*PD' P20 = 0.9837 Exercise $6$ (See Exercise 21 from "Problems on Conditional Expectation, Regression") A number $X$ is selected randomly from the integers 1 through 100. Each of two people draw $X$ times independently and randomly a number from 1 to 10. Let $Y$ be the number of matches (i.e., both draw ones, both draw twos, etc.). Determine the distribution for $Y$. Determine $E[Y]$ and $P(Y \le 10)$. Answer gN = 0.01*[0 ones(1,100)]; gY = [0.9 0.1]; gend Do not forget zero coefficients for missing powers Enter gen fn COEFFICIENTS for gN gN Enter gen fn COEFFICIENTS for gY gY Results are in N, PN, Y, PY, D, PD, P May use jcalc or jcalcf on N, D, P To view the distribution, call for gD. EY = dot(D,PD) EY = 5.0500 P10 = (D<=10)*PD' P10 = 0.9188 Exercise $7$ Suppose the number of entries in a contest is $N$ ~ binomial (20, 0.4). There are four questions. Let $Y_i$ be the number of questions answered correctly by the $i$th contestant. Suppose the $Y_i$ are iid, with common distribution $Y =$ [1 2 3 4] $PY =$ [0.2 0.4 0.3 0.1] Let $D$ be the total number of correct answers. Determine $E[D]$, $\text{Var} [D]$, $P(15 \le D \le 25)$, and $P(10 \le D \le 30)$. Answer gN = ibinom(20,0.4,0:20); gY = 0.1*[0 2 4 3 1]; gend Do not forget zero coefficients for missing powers Enter gen fn COEFFICIENTS for gN gN Enter gen fn COEFFICIENTS for gY gY Results are in N, PN, Y, PY, D, PD, P May use jcalc or jcalcf on N, D, P To view the distribution, call for gD. ED = dot(D,PD) ED = 18.4000 VD = (D.^2)*PD' - ED^2 VD = 31.8720 P1 = ((15<=D)&(D<=25))*PD' P1 = 0.6386 P2 = ((10<=D)&(D<=30))*PD' P2 = 0.9290 Exercise $8$ Game wardens are making an aerial survey of the number of deer in a park. The number of herds to be sighted is assumed to be a random variable $N$ ~ binomial (20, 0.5). Each herd is assumed to be from 1 to 10 in size, with probabilities Value 1 2 3 4 5 6 7 8 9 10 Probability 0.05 0.10 0.15 0.20 0.15 0.10 0.10 0.05 0.05 0.05 Let $D$ be the number of deer sighted under this model. Determine $P(D \le t)$ for $t = 25, 50, 75, 100$ and $P(D \ge 90)$. Answer gN = ibinom(20,0.5,0:20); gY = 0.01*[0 5 10 15 20 15 10 10 5 5 5]; gend Do not forget zero coefficients for missing powers Enter gen fn COEFFICIENTS for gN gN Enter gen fn COEFFICIENTS for gY gY Results are in N, PN, Y, PY, D, PD, P May use jcalc or jcalcf on N, D, P To view the distribution, call for gD. k = [25 50 75 100]; P = zeros(1,4); for i = 1:4 P(i) = (D<=k(i))*PD'; end disp(P) 0.0310 0.5578 0.9725 0.9998 Exercise $9$ A supply house stocks seven popular items. The table below shows the values of the items and the probability of each being selected by a customer. Value 12.50 25.00 30.50 40.00 42.50 50.00 60.00 Probability 0.10 0.15 0.20 0.20 0.15 0.10 0.10 Suppose the purchases of customers are iid, and the number of customers in a day is binomial (10,0.5). Determine the distribution for the total demand $D$. 1. How many different possible values are there? What is the maximum possible total sales? 2. Determine $E[D]$ and $P(D \le t)$ for $t = 100, 150, 200, 250, 300$. Determine $P(100 < D \le 200)$. 
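A quick check on Exercise 7, independent of gend: for a random number of iid terms, $E[D] = E[N]E[Y]$ and $\text{Var}[D] = E[N]\text{Var}[Y] + \text{Var}[N]E[Y]^2$. The following base-MATLAB sketch (illustrative only) reproduces the values ED = 18.4 and VD = 31.872 computed above.
n = 20; p = 0.4;                    % N ~ binomial (20, 0.4)
EN = n*p; VN = n*p*(1-p);
Y  = 1:4; PY = [0.2 0.4 0.3 0.1];   % common distribution of the Y_i
EY = dot(Y,PY); VY = dot(Y.^2,PY) - EY^2;
ED = EN*EY                          % 18.4, as in the Exercise 7 answer
VD = EN*VY + VN*EY^2                % 31.872, as in the Exercise 7 answer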
Answer gN = ibinom(10,0.5,0:10); Y = [12.5 25 30.5 40 42.5 50 60]; PY = 0.01*[10 15 20 20 15 10 10]; mgd Enter gen fn COEFFICIENTS for gN gN Enter VALUES for Y Y Enter PROBABILITIES for Y PY Values are in row matrix D; probabilities are in PD. To view the distribution, call for mD. s = size(D) s = 1 839 M = max(D) M = 590 t = [100 150 200 250 300]; P = zeros(1,5); for i = 1:5 P(i) = (D<=t(i))*PD'; end disp(P) 0.1012 0.3184 0.6156 0.8497 0.9614 P1 = ((100<D)&(D<=200))*PD' P1 = 0.5144 Exercise $10$ A game is played as follows: 1. A wheel is spun, giving one of the integers 0 through 9 on an equally likely basis. 2. A single die is thrown the number of times indicated by the result of the spin of the wheel. The number of points made is the total of the numbers turned up on the sequence of throws of the die. 3. A player pays sixteen dollars to play; a dollar is returned for each point made. Let $Y$ represent the number of points made and $X = Y - 16$ be the net gain (possibly negative) of the player. Determine the maximum value of $X, E[X], \text{Var} [X], P(X > 0), P(X \ge 10), P(X \ge 16)$ Answer gn = 0.1*ones(1,10); gy = (1/6)*[0 ones(1,6)]; [Y,PY] = gendf(gn,gy); [X,PX] = csort(Y-16,PY); M = max(X) M = 38 EX = dot(X,PX) % Check EX = En*Ey - 16 = 4.5*3.5 EX = -0.2500 % 4.5*3.5 - 16 = -0.25 VX = dot(X.^2,PX) - EX^2 VX = 114.1875 Ppos = (X>0)*PX' Ppos = 0.4667 P10 = (X>=10)*PX' P10 = 0.2147 P16 = (X>=16)*PX' P16 = 0.0803 Exercise $11$ Marvin calls on four customers. With probability $p_1 = 0.6$ he makes a sale in each case. Geraldine calls on five customers, with probability $p_2 = 0.5$ of a sale in each case. Customers who buy do so on an iid basis, and order an amount $Y_i$ (in dollars) with common distribution: $Y =$ [200 220 240 260 280 300] $PY =$ [0.10 0.15 0.25 0.25 0.15 0.10] Let $D_1$ be the total sales for Marvin and $D_2$ the total sales for Geraldine. Let $D = D_1 + D_2$. Determine the distribution and mean and variance for $D_1$, $D_2$, and $D$. Determine $P(D_1 \ge D_2)$ and $P(D \ge 1500)$, $P(D \ge 1000)$, and $P(D \ge 750)$. Answer gnM = ibinom(4,0.6,0:4); gnG = ibinom(5,0.5,0:5); Y = 200:20:300; PY = 0.01*[10 15 25 25 15 10]; [D1,PD1] = mgdf(gnM,Y,PY); [D2,PD2] = mgdf(gnG,Y,PY); ED1 = dot(D1,PD1) ED1 = 600.0000 % Check: ED1 = EnM*EY = 2.4*250 VD1 = dot(D1.^2,PD1) - ED1^2 VD1 = 6.1968e+04 ED2 = dot(D2,PD2) ED2 = 625.0000 % Check: ED2 = EnG*EY = 2.5*250 VD2 = dot(D2.^2,PD2) - ED2^2 VD2 = 8.0175e+04 [D1,D2,t,u,PD1,PD2,P] = icalcf(D1,D2,PD1,PD2); Use array opertions on matrices X, Y, PX, PY, t, u, and P [D,PD] = csort(t+u,P); ED = dot(D,PD) ED = 1.2250e+03 eD = ED1 + ED2 % Check: ED = ED1 + ED2 eD = 1.2250e+03 % (Continued next page) VD = dot(D.^2,PD) - ED^2 VD = 1.4214e+05 vD = VD1 + VD2 % Check: VD = VD1 + VD2 vD = 1.4214e+05 P1g2 = total((t>u).*P) P1g2 = 0.4612 k = [1500 1000 750]; PDk = zeros(1,3); for i = 1:3 PDk(i) = (D>=k(i))*PD'; end disp(PDk) 0.2556 0.7326 0.8872 Exercise $12$ A questionnaire is sent to twenty persons. The number who reply is a random number $N$ ~ binomial (20, 0.7). If each respondent has probability $p = 0.8$ of favoring a certain proposition, what is the probability of ten or more favorable replies? Of fifteen or more? Answer gN = ibinom(20,0.7,0:20); gY = [0.2 0.8]; gend Do not forget zero coefficients for missing powers Enter gen fn COEFFICIENTS for gN gN Enter gen fn COEFFICIENTS for gY gY Results are in N, PN, Y, PY, D, PD, P May use jcalc or jcalcf on N, D, P To view the distribution, call for gD. 
P10 = (D>=10)*PD' P10 = 0.7788 P15 = (D>=15)*PD' P15 = 0.0660 pD = ibinom(20,0.7*0.8,0:20); % Alternate: use D binomial (pp0) D = 0:20; p10 = (D>=10)*pD' p10 = 0.7788 p15 = (D>=15)*pD' p15 = 0.0660 Exercise $13$ A random number $N$ of students take a qualifying exam. A grade of 70 or more earns a pass. Suppose $N$ ~ binomial (20, 0.3). If each student has probability $p = 0.7$ of making 70 or more, what is the probability all will pass? Ten or more will pass? Answer gN = ibinom(20,0.3,0:20); gY = [0.3 0.7]; gend Do not forget zero coefficients for missing powers Enter gen fn COEFFICIENTS for gN gN Enter gen fn COEFFICIENTS for gY gY Results are in N, PN, Y, PY, D, PD, P May use jcalc or jcalcf on N, D, P To view the distribution, call for gD. Pall = (D==20)*PD' Pall = 2.7822e-14 pall = (0.3*0.7)^20 % Alternate: use D binomial (pp0) pall = 2.7822e-14 P10 = (D >= 10)*PD' P10 = 0.0038 Exercise $14$ Five hundred questionnaires are sent out. The probability of a reply is 0.6. The probability that a reply will be favorable is 0.75. What is the probability of at least 200, 225, 250 favorable replies? Answer n = 500; p = 0.6; p0 = 0.75; D = 0:500; PD = ibinom(500,p*p0,D); k = [200 225 250]; P = zeros(1,3); for i = 1:3 P(i) = (D>=k(i))*PD'; end disp(P) 0.9893 0.5173 0.0140 Exercise $15$ Suppose the number of Japanese visitors to Florida in a week is $N1$ ~ Poisson (500) and the number of German visitors is $N2$ ~ Poisson (300). If 25 percent of the Japanese and 20 percent of the Germans visit Disney World, what is the distribution for the total number $D$ of German and Japanese visitors to the park? Determine $P(D \ge k)$ for $k = 150, 155, \cdot\cdot\cdot, 245, 250$. Answer $JD$ ~ Poisson (500*0.25 = 125); $GD$ ~ Poisson (300*0.20 = 60); $D$ ~ Poisson (185). k = 150:5:250; PD = cpoisson(185,k); disp([k;PD]') 150.0000 0.9964 155.0000 0.9892 160.0000 0.9718 165.0000 0.9362 170.0000 0.8736 175.0000 0.7785 180.0000 0.6532 185.0000 0.5098 190.0000 0.3663 195.0000 0.2405 200.0000 0.1435 205.0000 0.0776 210.0000 0.0379 215.0000 0.0167 220.0000 0.0067 225.0000 0.0024 230.0000 0.0008 235.0000 0.0002 240.0000 0.0001 245.0000 0.0000 250.0000 0.0000 Exercise $16$ A junction point in a network has two incoming lines and two outgoing lines. The number of incoming messages $N_1$ on line one in one hour is Poisson (50); on line 2 the number is $N_2$ ~ Poisson (45). On incoming line 1 the messages have probability $P_{1a} = 0.33$ of leaving on outgoing line a and $1 - p_{1a}$ of leaving on line b. The messages coming in on line 2 have probability $p_{2a} = 0.47$ of leaving on line a. Under the usual independence assumptions, what is the distribution of outgoing messages on line a? What are the probabilities of at least 30, 35, 40 outgoing messages on line a? Answer m1a = 50*0.33; m2a = 45*0.47; ma = m1a + m2a; PNa = cpoisson(ma,[30 35 40]) PNa = 0.9119 0.6890 0.3722 Exercise $17$ A computer store sells Macintosh, HP, and various other IBM compatible personal computers. It has two major sources of customers: 1. Students and faculty from a nearby university 2. General customers for home and business computing. Suppose the following assumptions are reasonable for monthly purchases. • The number of university buyers $N1$ ~ Poisson (30). The probabilities for Mac, HP, others are 0.4, 0.2, 0.4, respectively. • The number of non-university buyers $N2$ ~ Poisson (65). The respective probabilities for Mac, HP, others are 0.2, 0.3, 0.5. 
• For each group, the composite demand assumptions are reasonable, and the two groups buy independently. What is the distribution for the number of Mac sales? What is the distribution for the total number of Mac and HP sales?

Answer Mac sales Poisson (30*0.4 + 65*0.2 = 25); HP sales Poisson (30*0.2 + 65*0.3 = 25.5); total Mac plus HP sales Poisson (50.5).

Exercise $18$ The number $N$ of “hits” in a day on a Web site on the internet is Poisson (80). Suppose the probability is 0.10 that any hit results in a sale, is 0.30 that the result is a request for information, and is 0.60 that the inquirer just browses but does not identify an interest. What is the probability of 10 or more sales? What is the probability that the number of sales is at least half the number of information requests (use suitable simple approximations)?

Answer X = 0:30; Y = 0:80; PX = ipoisson(80*0.1,X); PY = ipoisson(80*0.3,Y); icalc: X Y PX PY - - - - - - - - - - - - PX10 = (X>=10)*PX' % Approximate calculation PX10 = 0.2834 pX10 = cpoisson(8,10) % Direct calculation pX10 = 0.2834 M = t>=0.5*u; PM = total(M.*P) PM = 0.1572

Exercise $19$ The number $N$ of orders sent to the shipping department of a mail order house is Poisson (700). Orders require one of seven kinds of boxes, which with packing costs have distribution Cost (dollars) 0.75 1.25 2.00 2.50 3.00 3.50 4.00 Probability 0.10 0.15 0.15 0.25 0.20 0.10 0.05 What is the probability the total cost of the $2.50 boxes is no greater than $475? What is the probability the cost of the $2.50 boxes is greater than the cost of the $3.00 boxes? What is the probability the cost of the $2.50 boxes is not more than $50.00 greater than the cost of the $3.00 boxes? Suggestion. Truncate the Poisson distributions at about twice the mean value.

Answer X = 0:400; Y = 0:300; PX = ipoisson(700*0.25,X); PY = ipoisson(700*0.20,Y); icalc Enter row matrix of X-values X Enter row matrix of Y-values Y Enter X probabilities PX Enter Y probabilities PY Use array operations on matrices X, Y, PX, PY, t, u, and P P1 = (2.5*X<=475)*PX' P1 = 0.8785 M = 2.5*t<=(3*u + 50); PM = total(M.*P) PM = 0.7500

Exercise $20$ One car in 5 in a certain community is a Volvo. If the number of cars passing a traffic check point in an hour is Poisson (130), what is the expected number of Volvos? What is the probability of at least 30 Volvos? What is the probability the number of Volvos is between 16 and 40 (inclusive)?

Answer P1 = cpoisson(130*0.2,30) = 0.2407 P2 = cpoisson(26,16) - cpoisson(26,41) = 0.9819

Exercise $21$ A service center on an interstate highway experiences customers in a one-hour period as follows: • Northbound: Total vehicles: Poisson (200). Twenty percent are trucks. • Southbound: Total vehicles: Poisson (180). Twenty-five percent are trucks. • Each truck has one or two persons, with respective probabilities 0.7 and 0.3. • Each car has 1, 2, 3, 4, or 5 persons, with probabilities 0.3, 0.3, 0.2, 0.1, 0.1, respectively. Under the usual independence assumptions, let $D$ be the number of persons to be served. Determine $E[D]$, $\text{Var} [D]$, and the generating function $g_D (s)$.

Answer $T$ ~ Poisson (200*0.2 + 180*0.25 = 85), $P$ ~ Poisson (200*0.8 + 180*0.75 = 295).
a = 85 b = 200*0.8 + 180*0.75 b = 295 YT = [1 2]; PYT = [0.7 0.3]; EYT = dot(YT,PYT) EYT = 1.3000 VYT = dot(YT.^2,PYT) - EYT^2 VYT = 0.2100 YP = 1:5; PYP = 0.1*[3 3 2 1 1]; EYP = dot(YP,PYP) EYP = 2.4000 VYP = dot(YP.^2,PYP) - EYP^2 VYP = 1.6400 EDT = 85*EYT EDT = 110.5000 EDP = 295*EYP EDP = 708.0000 ED = EDT + EDP ED = 818.5000 VT = 85*(VYT + EYT^2) VT = 161.5000 VP = 295*(VYP + EYP^2) VP = 2183 VD = VT + VP VD = 2.2705e+03 NT = 0:180; % Possible alternative gNT = ipoisson(85,NT); gYT = 0.1*[0 7 3]; [DT,PDT] = gendf(gNT,gYT); EDT = dot(DT,PDT) EDT = 110.5000 VDT = dot(DT.^2,PDT) - EDT^2 VDT = 161.5000 NP = 0:500; gNP = ipoisson(295,NP); gYP = 0.1*[0 3 2 2 1 1]; [DP,PDP] = gendf(gNP,gYP); % Requires too much memory $g_{DT} (s) = \text{exp} (85(0.7s + 0.3s^2 - 1))$ $g_{DP} (s) = \text{exp} (295(0.1(3s + 3s^2 2s^3 + s^4 + s^5) - 1))$ $g_D (s) = g_{DT} (s) g_{DP} (s)$ Exercise $22$ The number $N$ of customers in a shop in a given day is Poisson (120). Customers pay with cash or by MasterCard or Visa charge cards, with respective probabilties 0.25, 0.40, 0.35. Make the usual independence assumptions. Let $N_1, N_2, N_3$ be the numbers of cash sales, MasterCard charges, Visa card charges, respectively. Determine $P(N_1 \ge 30)$, $P(N_2 \ge 60)$, $P(N_3 \ge 50$, and $P(N_2 > N_3)$. Answer X = 0:120; PX = ipoisson(120*0.4,X); Y = 0:120; PY = ipoisson(120*0.35,Y); icalc Enter row matrix of X values X Enter row matrix of Y values Y Enter X probabilities PX Enter Y probabilities PY Use array opertions on matrices X, Y, PX, PY, t, u, and P M = t > u; PM = total(M.*P) PM = 0.7190 Exercise $23$ A discount retail store has two outlets in Houston, with a common warehouse. Customer requests are phoned to the warehouse for pickup. Two items, a and b, are featured in a special sale. The number of orders in a day from store A is $N_A$ ~ Poisson (30); from store B, the nember of orders is $N_B$ ~ Poisson (40). For store A, the probability an order for a is 0.3, and for b is 0.7. For store B, the probability an order for a is 0.4, and for b is 0.6. What is the probability the total order for item b in a day is 50 or more? Answer P = cpoisson(30*0.7+40*0.6,50) = 0.2468 Exercise $24$ The number of bids on a job is a random variable $N$ ~ binomial (7, 0.6). Bids (in thousands of dollars) are iid with $Y$ uniform on [3, 5]. What is the probability of at least one bid of$3,500 or less? Note that “no bid” is not a bid of 0. Answer % First solution --- FY(t) = 1 - gN[P(Y>t)] P = 1-(0.4 + 0.6*0.75)^7 P = 0.6794 % Second solution --- Positive number of satisfactory bids, % i.e. the outcome is indicator for event E, with P(E) = 0.25 pN = ibinom(7,0.6,0:7); gY = [3/4 1/4]; % Generator function for indicator [D,PD] = gendf(pN,gY); % D is number of successes Pa = (D>0)*PD' % D>0 means at least one successful bid Pa = 0.6794 Exercise $25$ The number of customers during the noon hour at a bank teller's station is a random number $N$ with distribution $N =$ 1 : 10, $PN =$ 0.01 * [5 7 10 11 12 13 12 11 10 9] The amounts they want to withdraw can be represented by an iid class having the common distribution $Y$ ~ exponential (0.01). Determine the probabilities that the maximum withdrawal is less than or equal to $t$ for $t = 100, 200, 300, 400, 500$. 
Answer Use $F_W (t) = g_N[P(Y \le T)]$ gN = 0.01*[0 5 7 10 11 12 13 12 11 10 9]; t = 100:100:500; PY = 1 - exp(-0.01*t); FW = polyval(fliplr(gN),PY) % fliplr puts coeficients in % descending order of powers FW = 0.1330 0.4598 0.7490 0.8989 0.9615 Exercise $26$ A job is put out for bids. Experience indicates the number $N$ of bids is a random variable having values 0 through 8, with respective probabilities Value 0 1 2 3 4 5 6 7 8 Probability 0.05 0.10 0.15 0.20 0.20 0.10 0.10 0.07 0.03 The market is such that bids (in thousands of dollars) are iid, uniform [100, 200]. Determine the probability of at least one bid of $125,000 or less. Answer Probability of a successful bid $PY = (125 - 100)/100 = 0.25$ PY =0.25; gN = 0.01*[5 10 15 20 20 10 10 7 3]; P = 1 - polyval(fliplr(gN),PY) P = 0.9116 Exercise $27$ A property is offered for sale. Experience indicates the number $N$ of bids is a random variable having values 0 through 10, with respective probabilities Value 0 1 2 3 4 5 6 7 8 9 10 Probability 0.05 0.15 0.15 0.20 0.10 0.10 0.05 0.05 0.05 0.05 0.05 The market is such that bids (in thousands of dollars) are iid, uniform [150, 200] Determine the probability of at least one bid of$180,000 or more. Answer Consider a sequence of $N$ trials with probabilty $p = (180 - 150)/50 = 0.6$. gN = 0.01*[5 15 15 20 10 10 5 5 5 5 5]; gY = [0.4 0.6]; [D,PD] = gendf(gN,gY); P = (D>0)*PD' P = 0.8493 Exercise $28$ A property is offered for sale. Experience indicates the number $N$ of bids is a random variable having values 0 through 8, with respective probabilities Number 0 1 2 3 4 5 6 7 8 Probability 0.05 0.15 0.15 0.20 0.15 0.10 0.10 0.05 0.05 The market is such that bids (in thousands of dollars) are iid symmetric triangular on [150 250]. Determine the probability of at least one bid of $210,000 or more. Answer gN = 0.01*[5 15 15 20 15 10 10 5 5]; PY = 0.5 + 0.5*(1 - (4/5)^2) PY = 0.6800 >> PW = 1 - polyval(fliplr(gN),PY) PW = 0.6536 %alternate gY = [0.68 0.32]; [D,PD] = gendf(gN,gY); P = (D>0)*PD' P = 0.6536 Exercise $29$ Suppose $N$ ~ binomial (10, 0.3) and the $Y_i$ are iid, uniform on [10, 20]. Let $V$ be the minimum of the $N$ values of the $Y_i$. Determine $P(V > t)$ for integer values from 10 to 20. Answer gN = ibinom(10,0.3,0:10); t = 10:20; p = 0.1*(20 - t); P = polyval(fliplr(gN),p) - 0.7^10 P = Columns 1 through 7 0.9718 0.7092 0.5104 0.3612 0.2503 0.1686 0.1092 Columns 8 through 11 0.0664 0.0360 0.0147 0 Pa = (0.7 + 0.3*p).^10 - 0.7^10 % Alternate form of gN Pa = Columns 1 through 7 0.9718 0.7092 0.5104 0.3612 0.2503 0.1686 0.1092 Columns 8 through 11 0.0664 0.0360 0.0147 0 Exercise $30$ Suppose a teacher is equally likely to have 0, 1, 2, 3 or 4 students come in during office hours on a given day. If the lengths of the individual visits, in minutes, are iid exponential (0.1), what is the probability that no visit will last more than 20 minutes. Answer gN = 0.2*ones(1,5); p = 1 - exp(-2); FW = polyval(fliplr(gN),p) FW = 0.7635 gY = [p 1-p]; % Alternate [D,PD] = gendf(gN,gY); PW = (D==0)*PD' PW = 0.7635 Exercise $31$ Twelve solid-state modules are installed in a control system. If the modules are not defective, they have practically unlimited life. However, with probability $p = 0.05$ any unit could have a defect which results in a lifetime (in hours) exponential (0.0025). Under the usual independence assumptions, what is the probability the unit does not fail because of a defective module in the first 500 hours after installation? 
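The polyval(fliplr(gN),PY) step in the Exercise 25 answer evaluates the generating function $g_N$ at $P(Y \le t)$. A base-MATLAB sketch of the same computation, written as an explicit sum over the possible values of $N$ (illustrative only, not part of the text's toolbox):
PN = 0.01*[5 7 10 11 12 13 12 11 10 9];     % P(N = 1), ..., P(N = 10)
t  = 100:100:500;
FY = 1 - exp(-0.01*t);                      % exponential (0.01) cdf at t
FW = zeros(size(t));
for n = 1:10
    FW = FW + PN(n)*FY.^n;                  % P(N = n) * P(all N withdrawals <= t)
end
disp(FW)                                    % should agree with the polyval result above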
Answer p = 1 - exp(-0.0025*500); FW = (0.95 + 0.05*p)^12 FW = 0.8410 gN = ibinom(12,0.05,0:12); gY = [p 1-p]; [D,PD] = gendf(gN,gY); PW = (D==0)*PD' PW = 0.8410 Exercise $32$ The number $N$ of bids on a painting is binomial (10, 0.3). The bid amounts (in thousands of dollars) $Y_i$ form an iid class, with common density function $f_Y (t) =0.005 (37 - 2t), 2 \le t \le 10$. What is the probability that the maximum amount bid is greater than$5,000? Answer $P(Y \le 5) = 0.005 \int_{2}^{5} (37 - 2t)\ dt = 0.45$ p = 0.45; P = 1 - (0.7 + 0.3*p)^10 P = 0.8352 gN = ibinom(10,0.3,0:10); gY = [p 1-p]; [D,PD] = gendf(gN,gY); % D is number of "successes" Pa = (D>0)*PD' Pa = 0.8352 Exercise $33$ A computer store offers each customer who makes a purchase of \$500 or more a free chance at a drawing for a prize. The probability of winning on a draw is 0.05. Suppose the times, in hours, between sales qualifying for a drawing is exponential (4). Under the usual independence assumptions, what is the expected time between a winning draw? What is the probability of three or more winners in a ten hour day? Of five or more? Answer $N_t$ ~ Poisson ($\lambda t$), $N_{Dt}$ ~ Poisson ($\lambda pt$), $W_{Dt}$ exponential ($\lambda p$). p = 0.05; t = 10; lambda = 4; EW = 1/(lambda*p) EW = 5 PND10 = cpoisson(lambda*p*t,[3 5]) PND10 = 0.3233 0.0527 Exercise $34$ Noise pulses arrrive on a data phone line according to an arrival process such that for each $t > 0$ the number $N_t$ of arrivals in time interval $(0, t]$, in hours, is Poisson $(7t)$. The $i$th pulse has an “intensity” $Y_i$ such that the class $\{Y_i: 1 \le i\}$ is iid, with the common distribution function $F_Y (u) = 1 - e^{-2u^2}$ for $u \ge 0$. Determine the probability that in an eight-hour day the intensity will not exceed two. Answer $N_8$ is Poisson (7*8 = 56) $g_N (s) = e^{56(s - 1)}$. t = 2; FW2 = exp(56*(1 - exp(-t^2) - 1)) FW2 = 0.3586 Exercise $35$ The number $N$ of noise bursts on a data transmission line in a period $(0, t]$ is Poisson ($\mu$). The number of digit errors caused by the $i$th burst is $Y_i$, with the class $\{Y_i: 1 \le i\}$ iid, $Y_i - 1$ ~ geometric $(p)$. An error correcting system is capable or correcting five or fewer errors in any burst. Suppose $\mu = 12$ and $p = 0.35$. What is the probability of no uncorrected error in two hours of operation? Answer $F_W (k) = g_N [P(Y \le k)]P(Y \le k) - 1 - q^{k - 1}\ \ N_t$ ~ Poisson (12$t$) q = 1 - 0.35; k = 5; t = 2; mu = 12; FW = exp(mu*t*(1 - q^(k-1) - 1)) FW = 0.0138
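For Exercise 33, the winning draws form a thinned Poisson stream, so the number of winners in ten hours is Poisson $(\lambda p t)$ = Poisson (2). A base-MATLAB sketch (illustrative only; cpoisson is the text's own procedure) that reproduces the tail probabilities quoted in that answer:
lambda = 4; p = 0.05; t = 10;
m = lambda*p*t;                        % mean number of winners in ten hours
v = 0:50;                              % truncation far beyond the mean
pmf = exp(-m + v*log(m) - gammaln(v+1));   % Poisson (m) pmf, computed in logs
P3 = sum(pmf(v >= 3))                  % 0.3233, as above
P5 = sum(pmf(v >= 5))                  % 0.0527, as above
EW = 1/(lambda*p)                      % mean time between winners, 5 hours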
In the unit on Conditional Independence , the concept of conditional independence of events is examined and used to model a variety of common situations. In this unit, we investigate a more general concept of conditional independence, based on the theory of conditional expectation. This concept lies at the foundations of Bayesian statistics, of many topics in decision theory, and of the theory of Markov systems. We examine in this unit, very briefly, the first of these. In the unit on Markov Sequences, we provide an introduction to the third. The concept The definition of conditional independence of events is based on a product rule which may be expressed in terms of conditional expectation, given an event. The pair $\{A, B\}$ is conditionally independent, given $C$, iff $E[I_A I_B|C] = P(AB|C) = P(A|C) P(B|C) = E[I_A|C] E[I_B|C]$ If we let $A = X^{-1} (M)$ and $B = Y^{-1} (N)$, then $I_A = I_M (X)$ and $I_B = I_N (Y)$. It would be reasonable to consider the pair $\{X, Y\}$ conditionally independent, given event $C$, iff the product rule $E[I_M(X) I_N (Y)|C] = E[I_M (X)|C] E[I_N (Y) |C]$ holds for all reasonable $M$ and $N$ (technically, all Borel $M$ and $N$). This suggests a possible extension to conditional expectation, given a random vector. We examine the following concept. Definition The pair $\{X, Y\}$ is conditionally independent, givenZ, designated $\{X, Y\}$ ci $|Z$, iff $E[I_M (X) I_N (Y)|Z] = E[I_M(X)|Z] E[I_N (Y)|Z]$ for all Borel $M$. $N$ Remark. Since it is not necessary that $X$, $Y$, or $Z$ be real valued, we understand that the sets $M$ and $N$ are on the codomains for $X$ and $Y$, respectively. For example, if $X$ is a three dimensional random vector, then $M$ is a subset of $R^3$. As in the case of other concepts, it is useful to identify some key properties, which we refer to by the numbers used in the table in Appendix G. We note two kinds of equivalences. For example, the following are equivalent. (CI1) $E[I_M(X) I_N (Y)|Z] = E[I_M(X)|Z][E[I_N (Y)|Z]$ a.s. for all Borel sets $M, N$ (CI5) $E[g(X, Z) h(Y, Z)|Z] = E[g(X, Z)|Z] E[h(Y,Z)|Z]$ a.s. for all Borel functions $g, h$ Because the indicator functions are special Borel functions, (CI1) is a special case of (CI5). To show that (CI1) implies (CI5), we need to use linearity, monotonicity, and monotone convergence in a manner similar to that used in extending properties (CE1) to (CE6) for conditional expectation. A second kind of equivalence involves various patterns. The properties (CI1), (CI2), (CI3), and (CI4) are equivalent, with (CI1) being the defining condition for $\{X, Y\}$ ci $|Z$. (CI1) $E[I_M(X) I_N (Y)|Z] = E[I_M(X)|Z][E[I_N (Y)|Z]$ a.s. for all Borel sets $M, N$ (CI2) $E[I_M (X)|Z, Y] = E[I_M(X)|Z]$ a.s. for all Borel sets $M$ (CI3) $E[I_M (X) I_Q (Z)|Z, Y] = E[I_M (X) I_Q (Z)|Z]$ a.s. for all Borel sets $M, Q$ (CI4) $E[I_M(X) I_Q (Z)|Y] = E\{E[I_M(X) I_Q (Z)|Z]|Y\}$ a.s. for all Borel sets $M, Q$ As an example of the kinds of argument needed to verify these equivalences, we show the equivalence of (CI1) and (CI2). • (CI1) implies (CI2). Set $e_1 (Y, Z) = E[I_M (X) |Z, Y]$ and $e_2 (Y, Z) = E[I_M (X)|Z]$. If we show $E[I_N (Y) I_Q (Z) e_1 (Y, Z) = E[I_N (Y) I_Q (Z) e_2 (Y,Z)]$ for all Borel $N, Q$ then by the uniqueness property (E5b) for expectation we may assert $e_1 (Y, Z) = e_2 (Y, Z)$ a.s. 
Using the defining property (CE1) for conditional expectation, we have $E\{I_N (Y) I_Q (Z) E[I_M (X) |Z, Y]\} = E[I_N (Y) I_Q (Z) I_M (X)]$ On the other hand, use of (CE1), (CE8), (CI1), and (CE1) yields $E\{I_N (Y) I_Q (Z) E[I_M (X)|Z]\} = E\{I_Q (Z) E[I_N(Y) E[I_M (X)|Z]|Z]\}$ $= E\{I_Q (Z) E[I_M (X)|Z] E[I_N (Y)|Z]\} = E\{I_Q (Z0 E[I_M (X) I_N (Y)|Z]\}$ $= E[I_N (Y) I_Q (Z0 I_M (X)$ which establishes the desired equality. • (CI2) implies (CI1). Using (CE9), (CE8), (CI2), and (CE8), we have $E[I_M (X) I_N (Y)|Z] = E\{E[I_M (X) I_N (Y)|Z, Y]|Z\}$ $= E[I_N(Y) E[I_M(X) |Z, Y]|Z\} = E\{I_N (Y) E[I_M (X)|Z]|Z\}$ $= E[I_M(X)|Z] E[I_N(Y)|Z]$ Use of property (CE8) shows that (CI2) and (CI3) are equivalent. Now just as (CI1) extends to (CI5), so also (CI3) is equivalent to (CI6) $E[g(X, Z)|Z, Y] = E[g(X, Z)|Z]$ a.s. for all Borel functions $g$ Property (CI6) provides an important interpretation of conditional independence: $E[g(X, Z)|Z]$ is the best mean-square estimator for $g(X, Z)$, given knowledge of $Z$. The conditon $\{X, Y\}$ ci $|Z$ implies that additional knowledge about Y does not modify that best estimate. This interpretation is often the most useful as a modeling assumption. Similarly, property (CI4) is equivalent to (CI8) $E[g(X, Z)|Y] = E\{E[g(X, Z)|Z]|Y\}$ a.s. for all Borel functions $g$ The additional properties in Appendix G are useful in a variety of contexts, particularly in establishing properties of Markov systems. We refer to them as needed. The Bayesian approach to statistics In the classical approach to statistics, a fundamental problem is to obtain information about the population distribution from the distribution in a simple random sample. There is an inherent difficulty with this approach. Suppose it is desired to determine the population mean $\mu$. Now $\mu$ is an unknown quantity about which there is uncertainty. However, since it is a constant, we cannot assign a probability such as $P(a < \mu \le b)$. This has no meaning. The Bayesian approach makes a fundamental change of viewpoint. Since the population mean is a quantity about which there is uncertainty, it is modeled as a random variable whose value is to be determined by experiment. In this view, the population distribution is conceived as randomly selected from a class of such distributions. One way of expressing this idea is to refer to a state of nature. The population distribution has been “selected by nature” from a class of distributions. The mean value is thus a random variable whose value is determined by this selection. To implement this point of view, we assume The value of the parameter (say $\mu$ in the discussion above) is a “realization” of a parameter random variable $H$. If two or more parameters are sought (say the mean and variance), they may be considered components of a parameter random vector. The population distribution is a conditional distribution, given the value of $H$. The Bayesian model If $X$ is a random variable whose distribution is the population distribution and $H$ is the parameter random variable, then $\{X, H\}$ have a joint distribution. For each $u$ in the range of $H$, we have a conditional distribution for $X$, given $H = u$. We assume a prior distribution for $H$. This is based on previous experience. We have a random sampling process, given $H$: i.e., $\{X_i: 1 \le i \le n\}$ is conditionally iid, given $H$. 
Let $W = (X_1, X_2, \cdot\cdot\cdot, X_n)$ and consider the joint conditional distribution function $F_{W|H} (t_1, t_2, \cdot\cdot\cdot, t_n|u) = P(X_1 \le t_1, X_2 \le t_2, \cdot\cdot\cdot, X_n \le t_n|H = u)$ $= E[\prod_{i = 1}^{n} I_{(-\infty, t_i]} (X_i)|H = u]] = \prod_{i = 1}^{n} E[I_{(-\infty, t_i]} (X_i)|H = u] = \prod_{i = 1}^{n} F_{X|H} (t_i|u)$ If $X$ has conditional density, given H, then a similar product rule holds. Population proportion We illustrate these ideas with one of the simplest, but most important, statistical problems: that of determining the proportion of a population which has a particular characteristic. Examples abound. We mention only a few to indicate the importance. The proportion of a population of voters who plan to vote for a certain candidate. The proportion of a given population which has a certain disease. The fraction of items from a production line which meet specifications. The fraction of women between the ages eighteen and fifty five who hold full time jobs. The parameter in this case is the proportion $p$ who meet the criterion. If sampling is at random, then the sampling process is equivalent to a sequence of Bernoulli trials. If $H$ is the parameter random variable and $S_n$ is the number of “successes” in a sample of size $n$, then the conditional distribution for $S_n$, given $H = u$, is binomial $(n, u)$. To see this, consider $X_i = I_{E_i}$, with $P(E_i|H = u) = E[X_i|H = u] = e(u) = u$ Anaysis is carried out for each fixed $u$ as in the ordinary Bernoulli case. If $S_n = \sum_{i = 1}^{n} X_i = \sum_{i = 1}^{n} I_{E_i}$ We have the result $E[I_{\{k\}} (S_i) |H = u] = P(S_n = k|H = u) = C(n, k) u^k (1 - u)^{n - k}$ and $E[S_n|H = u] = nu$ The objective We seek to determine the best mean-square estimate of $H$, given $S_n = k$. If $H = u$, we know $E[S_n|H] = nu$. Sampling gives $S_n = k$. We make a Bayesian reversal to get an exression for $E[H|S_n = k]$. To complete the task, we must assume a prior distribution for $H$ on the basis of prior knowledge, if any. The Bayesian reversal Since $\{S_n = k\}$ is an event with positive probability, we use the definition of the conditional expectation, given an event, and the law of total probability (CE1b) to obtain $E[H|S_n = k] = \dfrac{E[HI_{\{k\}} (S_n)]}{E[I_{\{k\}} (S_n)]} = \dfrac{E\{HE[I_{\{k\}} (S_n)|H]\}}{E\{E[I_{\{k\}} (S_n)|H]\}} = \dfrac{\int uE[I_{\{k\}} (S_n)|H = u] f_H (u)\ du}{\int E[I_{\{k\}} (S_n)|H = u] f_H (u)\ du}$ $= \dfrac{C(n, k) \int u^{k + 1} (1 - u)^{n - k} f_{H} (u)\ du}{C(n, k) \int u^{k} (1 - u)^{n - k} f_{H} (u)\ du}$ A prior distribution for $H$ The beta $(r, s)$ distribution (see Appendix G), proves to be a “natural” choice for this purpose. Its range is the unit interval, and by proper choice of parameters $r, s$, the density function can be given a variety of forms (see Figures 16.1.1 and 16.2.2). Figure 16.1.1. The Beta(r,s) density for $r = 2$, $s = 1, 2, 10$. Figure 16.1.2. The Beta(r,s) density for $r = 5$, $s = 2, 5, 10$. Its analysis is based on the integrals $\int_{0}^{1} u^{r - 1} (1 - u)^{s - 1}\ du = \dfrac{\Gamma (r) \Gamma (s)}{\Gamma (r + s)}$ with $\Gamma (a + 1) = a \Gamma (a)$ For $H$ ~ beta ($r, s$), the density is given by $f_H (t) = \dfrac{\Gamma (r + s)}{\Gamma (r) \Gamma (s)} t^{r - 1} (1 - t)^{s - 1} = A(r, s) t^{r - 1} (1 - t)^{s - 1}$ $0 < t < 1$ For $r \ge 2$, $s \ge 2$, $f_{H}$ has a maximum at $(r - 1)/(r + s - 2)$. For $r, s$ positive integers, $f_H$ is a polynomial on [0, 1], so that determination of the distribution function is easy. 
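Before specializing to the beta prior, the ratio-of-integrals expression for $E[H|S_n = k]$ can be checked numerically. The sketch below (base MATLAB, illustrative values) uses a uniform prior with $n = 20$, $k = 14$, anticipating Example 1; the closed form obtained in the next paragraphs gives $(k + r)/(n + r + s) = 15/22 \approx 0.6818$.
n = 20; k = 14;
u = 0:0.0001:1;
fH = ones(size(u));                          % uniform prior density on (0, 1)
num = trapz(u, u.^(k+1).*(1-u).^(n-k).*fH);  % numerator integral (C(n,k) cancels)
den = trapz(u, u.^k   .*(1-u).^(n-k).*fH);   % denominator integral
EHk = num/den                                % approx 15/22 = 0.6818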
In any case, straightforward integration, using the integral formula above, shows

$E[H] = \dfrac{r}{r + s}$ and $\text{Var} [H] = \dfrac{rs}{(r + s)^2 (r + s + 1)}$

If the prior distribution for $H$ is beta $(r, s)$, we may complete the determination of $E[H|S_n = k]$ as follows.

$E[H|S_n = k] = \dfrac{A(r, s) \int_{0}^{1} u^{k + 1} (1 - u)^{n - k} u^{r - 1} (1 - u)^{s - 1}\ du}{A(r, s) \int_{0}^{1} u^{k} (1 - u)^{n - k} u^{r - 1} (1 - u)^{s - 1}\ du} = \dfrac{\int_{0}^{1} u^{k + r} (1 - u)^{n + s - k - 1}\ du}{\int_{0}^{1} u^{k + r - 1} (1 - u)^{n + s - k - 1}\ du}$

$= \dfrac{\Gamma (r + k + 1) \Gamma (n + s - k)}{\Gamma(r + s + n + 1)} \cdot \dfrac{\Gamma (r + s + n)}{\Gamma (r + k) \Gamma (n + s - k)} = \dfrac{k + r}{n + r + s}$

We may adapt the analysis above to show that $H$ is conditionally beta $(r + k, s + n - k)$, given $S_n = k$.

$F_{H|S} (t|k) = \dfrac{E[I_t (H) I_{\{k\}} (S_n)]}{E[I_{\{k\}} (S_n)]}$ where $I_t (H) = I_{[0, t]} (H)$

The analysis goes through exactly as for $E[H|S_n = k]$, except that $H$ is replaced by $I_t (H)$. In the integral expression for the numerator, one factor $u$ is replaced by $I_t (u)$. For $H$ ~ beta $(r, s)$, we get

$F_{H|S} (t|k) = \dfrac{\Gamma (r + s + n)}{\Gamma (r + k) \Gamma (n + s - k)} \int_{0}^{t} u^{k + r - 1} (1 - u)^{n + s - k - 1} \ du = \int_{0}^{t} f_{H|S} (u|k)\ du$

The integrand is the density for beta $(r + k, n + s - k)$. Any prior information on the distribution for $H$ can be utilized to select suitable $r, s$. If there is no prior information, we simply take $r = 1$, $s = 1$, which corresponds to $H$ ~ uniform on (0, 1). The value is as likely to be in any subinterval of a given length as in any other of the same length. The information in the sample serves to modify the distribution for $H$, conditional upon that information.

Example $1$ Population proportion with a beta prior

It is desired to estimate the portion of the student body which favors a proposed increase in the student blanket tax to fund the campus radio station. A sample of size $n = 20$ is taken. Fourteen respond in favor of the increase. Assuming prior ignorance (i.e., that $H$ ~ beta (1, 1)), what is the conditional distribution given $S_{20} = 14$? After the first sample is taken, a second sample of size $n = 20$ is taken, with thirteen favorable responses. Analysis is made using the conditional distribution for the first sample as the prior for the second. Make a new estimate of $H$.

Figure 16.1.3. Conditional densities for repeated sampling, Example 16.1.1.

Solution

For the first sample the parameters are $r = s = 1$. According to the treatment above, $H$ is conditionally beta $(k + r, n + s - k) = (15, 7)$. The density has a maximum at $(r + k - 1)/(r + k + n + s - k - 2) = k/n$. The conditional expectation, however, is $(r + k)/(r + s + n) = 15/22 \approx 0.6818$. For the second sample, with the conditional distribution as the new prior, we should expect more sharpening of the density about the new mean-square estimate. For the new sample, $n = 20$, $k = 13$, and the prior $H$ ~ beta (15, 7). The new conditional distribution is beta $(r^*, s^*) = (28, 14)$, with density maximum at $(r^* - 1)/(r^* + s^* - 2) = 27/40 = 0.6750$. The best estimate of $H$ is $28/(28 + 14) = 2/3$. The conditional densities in the two cases may be plotted with MATLAB (see Figure 16.1.3).

t = 0:0.01:1; plot(t,beta(15,7,t),'k-',t,beta(28,14,t),'k--')

As expected, the maximum for the second is somewhat larger and occurs at a slightly smaller $t$, reflecting the smaller $k$.
And the density in the second case shows less spread, resulting from the fact that prior information from the first sample is incorporated into the analysis of the second sample. The same result is obtained if the two samples are combined into one sample of size 40. It may be well to compare the result of Bayesian analysis with that for classical statistics. Since, in the latter, case prior information is not utilized, we make the comparison with the case of no prior knowledge $(r = s = 1)$. For the classical case, the estimator for $\mu$ is the sample average; for the Bayesian case with beta prior, the estimate is the conditional expectation of $H$, given $S_n$. If $S_n = k$: Classical estimate = $k/n$ Bayesian estimate = $(k + 1)/(n + 2)$ For large sample size $n$, these do not differ significantly. For small samples, the difference may be quite important. The Bayesian estimate is often referred to as the small sample estimate, although there is nothing in the Bayesian procedure which calls for small samples. In any event, the Bayesian estimate seems preferable for small samples, and it has the advantage that prior information may be utilized. The sampling procedure upgrades the prior distribution. The essential idea of the Bayesian approach is the view that an unknown parameter about which there is uncertainty is modeled as the value of a random variable. The name Bayesian comes from the role of Bayesian reversal in the analysis. The application of Bayesian analysis to the population proportion required Bayesian reversal in the case of discrete $S_n$. We consider, next, this reversal process when all random variables are absolutely continuous. The Bayesian reversal for a joint absolutely continuous pair In the treatment above, we utilize the fact that the conditioning random variable $S_n$ is discrete. Suppose the pair $\{W, H\}$ is jointly absolutely continuous, and $f_{W|H} (t|u)$ and $f_H (u)$ are specified. To determine $E[H|W = t] = \int u f_{H|W} (u|t)\ du$ we need $f_{H|W} (u|t)$. This requires a Bayesian reversal of the conditional densities. Now by definition $f_{H|W} (u|t) = \dfrac{f_{WH} (t, u)}{f_W (t)}$ and $f_{WH} (t, u) = f_{W|H} (t|u) f_H (u)$ Since by the rule for determining the marginal density $f_W (t) = \int f_{WH} (t, u)\ du = \int f_{W|H} (t|u) f_H (u)\ du$ we have $f_{H|W} (u|t) = \dfrac{f_{W|H} (t|u) f_H(u)}{\int f_{W|H} (t|u) f_H(u) \ du}$ and $E[H|W = t] = \dfrac{\int u f_{W|H} (t|u) f_H(u)\ du}{\int f_{W|H} (t|u) f_H(u)\ du}$ Example $2$ A Bayesian reversal Suppose $H$ ~ exponential $(\lambda)$ and the $X_i$ are conditionally iid, exponential ($u$), given $H = u$. A sample of size $n$ is taken. Put $W = (X_1, X_2, \cdot\cdot\cdot, X_n)$, and $t^* = t_1 + t_2 + \cdot\cdot\cdot + t_n$. Determine the best mean-square estimate of $H$, given $W = t$. Solution $f_{X|H} (t_i|u) = ue^{-ut_i}$ so that $f_{W|H} (t|u) = \prod_{i = 1}^{n} ue^{-ut_i} = u^n e^{-ut^*}$ Hence $E[H|W = t] = \int uf_{H|W} (u|t)\ du = \dfrac{\int_{0}^{\infty} u^{n + 1} e^{-ut^*} \lambda e^{-\lambda u}\ du}{\int_{0}^{\infty} u^{n} e^{-ut^*} \lambda e^{-\lambda u}\ du}$ $= \dfrac{\int_{0}^{\infty} u^{n + 1} e^{-(\lambda + t^*)u}\ du}{\int_{0}^{\infty} u^{n} e^{-(\lambda + t^*)u}\ du} = \dfrac{(n + 1)!}{(\lambda + t^*)^{n + 2}} \cdot \dfrac{(\lambda + t^*)^{n + 1}}{n!} = \dfrac{n + 1}{(\lambda + t^*)}$ where $t^* = \sum_{i = 1}^{n} t_i$
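A numerical check of Example 2 by quadrature; the values $\lambda = 2$, $n = 5$, $t^* = 10$ below are chosen only for illustration and are not part of the example.
lambda = 2; n = 5; tstar = 10;
u = 0:0.001:20;                                     % effective support of the integrand
num = trapz(u, u.^(n+1).*exp(-(lambda+tstar)*u));   % the lambda*exp(-lambda*u) factor cancels
den = trapz(u, u.^n    .*exp(-(lambda+tstar)*u));
EHW = num/den                                       % should be close to the closed form
closed = (n+1)/(lambda + tstar)                     % (n+1)/(lambda + t*) = 0.5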
Elements of Markov Sequences Markov sequences (Markov chains) are often studied at a very elementary level, utilizing algebraic tools such as matrix analysis. In this section, we show that the fundamental Markov property is an expression of conditional independence of “past” and “future," given the “present.” The essential Chapman-Kolmogorov equation is seen as a consequence of this conditional independence. In the usual time-homogeneous case with finite state space, the Chapman-Kolmogorov equation leads to the algebraic formulation that is widely studied at a variety of levels of mathematical sophistication. With the background laid, we only sketch some of the more common results. This should provide a probabilistic perspective for a more complete study of the algebraic analysis. Markov sequences We wish to model a system characterized by a sequence of states taken on at discrete instants which we call transition times. At each transition time, there is either a change to a new state or a renewal of the state immediately before the transition. Each state is maintained unchanged during the period or stage between transitions. At any transition time, the move to the next state is characterized by a conditional transition probability distribution. We suppose that the system is memoryless, in the sense that the transition probabilities are dependent upon the current state (and perhaps the period number), but not upon the manner in which that state was reached. The past influences the future only through the present. This is the Markov property, which we model in terms of conditional independence. For period $i$, the state is represented by a value of a random variable $X_i$, whose value is one of the members of a set E, known as the state space. We consider only a finite state space and identify the states by integers from 1 to $M$. We thus have a sequence $X_N = \{X_n: n \in N\}$, where $N = \{0, 1, 2, \cdot\cdot\cdot\}$ We view an observation of the system as a composite trial. Each $\omega$ yields a sequence of states $\{X_0 (\omega), X_1 (\omega), \cdot\cdot\cdot\}$ which is referred to as a realization of the sequence, or a trajectory. We suppose the system is evolving in time. At discrete instants of time $t_1, t_2, \cdot\cdot\cdot$ the system makes a transition from one state to the succeeding one (which may be the same). Initial period: $n = 0$, $t \in [0, t_1)$, state is $X_0 (\omega)$; at $t_1$ the transition is to $X_1 (\omega)$ Period one: $n = 1$, $t \in [t_1, t_2)$, state is $X_1 (\omega)$; at $t_2$ the transition is to $X_2 (\omega)$ ...... Period $k$: $n = k$, $t \in [t_k, t_{k = 1})$, state is $X_k (\omega)$; at $t_{k +1}$ move to $X_{k +1} (\omega)$ ...... The parameter $n$ indicates the period $t \in [t_n, t_{n + 1})$. If the periods are of unit length, then $t_n = n$. At $t_{n + 1}$, there is a transition from the state $X_n (\omega)$ to the state $X_{n + 1} (\omega)$ for the next period. To simplify writing, we adopt the following convention: $U_n = (X_0, X_1, \cdot\cdot\cdot, X_n) \in E_n$ $U_{m,n} = (X_m, \cdot\cdot\cdot, X_n)$ and $U^{n} = (X_n, X_{n + 1}, \cdot\cdot\cdot) \in E^n$ The random vector $U_n$ is called the past at $n$ of the sequence $X_N$ and $U^n$ is the future at $n$. 
In order to capture the notion that the system is without memory, so that the future is affected by the present, but not by how the present is reached, we utilize the notion of conditional independence, given a random vector, in the following

Definition

The sequence $X_N$ is Markov iff

(M) $\{X_{n + 1}, U_n\}$ ci $|X_n$ for all $n \ge 0$

Several conditions equivalent to the Markov condition (M) may be obtained with the aid of properties of conditional independence. We note first that (M) is equivalent to

$P(X_{n + 1} = k|X_n = j, U_{n - 1} \in Q) = P(X_{n + 1} = k|X_n = j)$ for each $n \ge 0$, $j, k \in E$, and $Q \subset E^{n - 1}$

The state in the next period is conditioned by the past only through the present state, and not by the manner in which the present state is reached. The statistics of the process are determined by the initial state probabilities and the transition probabilities

$P(X_{n + 1} = k|X_n = j)$ $\forall j, k \in E$, $n \ge 0$

The following examples exhibit a pattern which implies the Markov condition and which can be exploited to obtain the transition probabilities.

Example $1$ One-dimensional random walk

An object starts at a given initial position. At discrete instants $t_1, t_2, \cdot\cdot\cdot$ the object moves a random distance along a line. The various moves are independent of each other. Let

• $Y_0$ be the initial position
• $Y_k$ be the amount the object moves at time $t = t_k$, with $\{Y_k: 1 \le k\}$ iid
• $X_n = \sum_{k = 0}^{n} Y_k$ be the position after $n$ moves

We note that $X_{n + 1} = g(X_n, Y_{n + 1})$. Since the position after the transition at $t_{n + 1}$ is affected by the past only by the value of the position $X_n$ and not by the sequence of positions which led to this position, it is reasonable to suppose that the process $X_N$ is Markov. We verify this below.

Example $2$ A class of branching processes

Each member of a population is able to reproduce. For simplicity, we suppose that at certain discrete instants the entire next generation is produced. Some mechanism limits each generation to a maximum population of $M$ members. Let $Z_{in}$ be the number propagated by the $i$th member of the $n$th generation. $Z_{in} = 0$ indicates death and no offspring, $Z_{in} = k$ indicates a net of $k$ propagated by the $i$th member (either death and $k$ offspring or survival and $k - 1$ offspring). The population in generation $n + 1$ is given by

$X_{n + 1} = \text{min } \{M, \sum_{i = 1}^{X_n} Z_{in}\}$

We suppose the class $\{Z_{in}: 1 \le i \le M, 0 \le n\}$ is iid. Let $Y_{n + 1} = (Z_{1n}, Z_{2n}, \cdot\cdot\cdot, Z_{Mn})$. Then $\{Y_{n + 1}, U_n\}$ is independent. It seems reasonable to suppose the sequence $X_N$ is Markov.

Example $3$ An inventory problem

A certain item is stocked according to an $(m, M)$ inventory policy, as follows:

• If stock at the end of a period is less than $m$, order up to $M$.
• If stock at the end of a period is $m$ or greater, do not order.

Let $X_0$ be the initial stock, and $X_n$ be the stock at the end of the $n$th period (before restocking), and let $D_n$ be the demand during the $n$th period. Then for $n \ge 0$,

$X_{n + 1} = \begin{cases} \text{max } \{M - D_{n + 1}, 0\} & \text{if } 0 \le X_n < m \\ \text{max } \{X_n - D_{n + 1}, 0\} & \text{if } m \le X_n \end{cases} = g(X_n, D_{n + 1})$

If we suppose $\{D_n: 1 \le n\}$ is independent, then $\{D_{n + 1}, U_n\}$ is independent for each $n \ge 0$, and the Markov condition seems to be indicated.

Remark. In this case, the actual transition takes place throughout the period.
However, for purposes of analysis, we examine the state only at the end of the period (before restocking). Thus, the transitions are dispersed in time, but the observations are at discrete instants. Example $4$ Remaining lifetime A piece of equipment has a lifetime which is an integral number of units of time. When a unit fails, it is replaced immediately with another unit of the same type. Suppose • $X_n$ is the remaining lifetime of the unit in service at time $n$ • $Y_{n + 1}$ is the lifetime of the unit installed at time $n$, with $\{Y_n: 1 \le n\}$ iid Then $X_{n + 1} = \begin{cases} X_n - 1 & \text{if } X_n \ge 1 \ Y_{n + 1} - 1 & \text{if } X_n = 0 \end{cases} = g(X_n, Y_{n + 1})$ Remark. Each of these four examples exhibits the pattern $\{X_0, Y_n: 1 \le n\}$ is independent $X_{n + 1} = g_{n + 1} (X_n, Y_{n + 1})$, $n \ge 0$ We now verify the Markov condition and obtain a method for determining the transition probabilities. A pattern yielding Markov sequences Suppose $\{Y_n : 0 \le n\}$ is independent (call these the driving random variables). Set $X_0 = g_0 (Y_0)$ and $X_{n + 1} = g_{n + 1} (X_n, Y_{n + 1})$ $\forall n \ge 0$ Then $X_N$ is Markov $P(X_{n+1} \in Q|X_n = u) = P[g_{n + 1} (u, Y_{n + 1}) \in Q]$ for all $n, u$, and any Borel set $Q$. VERIFICATION It is apparent that if $Y_0, Y_1, \cdot\cdot\cdot, Y_n$ are known, then $U_n$ is known. Thus $U_n = h_n (Y_0, Y_1, \cdot\cdot\cdot, Y_n)$, which ensures each pair $\{Y_{n + 1}, U_n\}$ is independent. By property (CI13), with $X = Y_{n +1}$, $Y = X_n$, and $Z = U_{n - 1}$, we have $\{Y_{n + 1}, U_{n - 1}\}$ ci$|X_n$ Since $X_{n + 1} = g_{n + 1} (Y_{n + 1}, X_n)$ and $U_n = h_n (X_n, U_{n - 1})$, property (CI9) ensures $\{X_{n + 1}, U_n\}$ ci$|X_n$ $\forall n \ge 0$ which is the Markov property. $P(X_{n + 1} \in Q|X_n = u) = E\{I_Q [g_{n + 1} (X_n,Y_{n + 1})]|X_n = u\}$ a.s. $= E\{I_Q [g_{n + 1} (u, Y_{n + 1})]\}$ a.s. $[P_X]$ by (CE10b) $= P[g_{n + 1} (u, Y_{n + 1}) \in Q]$ by (E1a) — □ The application of this proposition, below, to the previous examples shows that the transition probabilities are invariant with $n$. This case is important enough to warrant separate classification. Definition If $P(X_{n + 1} \in Q|X_n = u)$ is invariant with $n$, for all Borel sets $Q$, all $u \in E$, the Markov process $X_N$ is said to be homogeneous. As a matter of fact, this is the only case usually treated in elementary texts. In this regard, we note the following special case of the proposition above. Homogenous Markov sequences If $\{Y_n: 1 \le n\}$ is iid and $g_{n + 1} = g$ for all $n$, then the process is a homogeneous Markov process, and $P(X_{n + 1} \in Q|X_n = u) = P[g(u, Y_{n + 1}) \in Q]$, invariant with $n$ — □ Remark. In the homogeneous case, the transition probabilities are invariant with $n$. In this case, we write $P(X_{n + 1} = j|X_n = i) = p(i, j)$ or $p_{ij}$ (invariant with $n$) These are called the (one-step) transition probabilities. The transition probabilities may be arranged in a matrix P called the transition probability matrix, usually referred to as the transition matrix, P = $[p(i, j)]$ The element $p(i, j)$ on row $i$ and column $j$ is the probability $P(X_{n + 1} = j|X_n = i)$. Thus, the elements on the $i$th row constitute the conditional distribution for $X_{n + 1}$, given $X_n = i$. The transition matrix thus has the property that each row sums to one. Such a matrix is called a stochastic matrix. We return to the examples. 
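The formula $P(X_{n+1} \in Q|X_n = u) = P[g(u, Y_{n + 1}) \in Q]$ can also be checked by simulation. The sketch below simulates the remaining-lifetime chain of Example 4 with an assumed lifetime distribution (the pmf PY is illustrative, not from the text) and compares the estimated transition probabilities from state 0 with $P(Y = k + 1)$.
PY = [0.2 0.3 0.3 0.2];              % P(Y = 1), ..., P(Y = 4), assumed for illustration
cY = cumsum(PY);
ns = 100000;                         % number of simulated transitions
X = 0; visits0 = 0; cnt = zeros(1,4);    % cnt(k+1) counts transitions 0 -> k
for i = 1:ns
    if X == 0
        Y = 1 + sum(rand > cY);      % lifetime of the replacement unit, drawn from PY
        visits0 = visits0 + 1;
        X = Y - 1;
        cnt(X+1) = cnt(X+1) + 1;
    else
        X = X - 1;                   % unit ages by one time unit
    end
end
disp([cnt/visits0; PY])              % estimated p(0,k), k = 0:3, versus P(Y = k+1)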
From the propositions on transition probabilities, it is apparent that each is Markov. Since the function $g$ is the same for all $n$ and the driving random variables corresponding to the $Y_i$ form an iid class, the sequences must be homogeneous. We may utilize part (b) of the propositions to obtain the one-step transition probabilities. Example $5$ Random walk continued $g_n (u, Y_{n + 1}) = u + Y_{n + 1}$. so that $g_n$ is invariant with $n$. Since $\{Y_n: 1 \le n\}$ is iid, $P(X_{n+1} = k|X_n = j) = P(j + Y = k) = P(Y = k - j) = p_{k - j}$ where $p_k = P(Y = k)$ Example $6$ Branching process continued $g(j, Y_{n + 1}) = \text{min } \{M, \sum_{i = 1}^{j} Z_{in}\}$ and E = $\{0, 1, \cdot\cdot\cdot, M\}$. If $\{Z_{in}: 1 \le i \le M\}$ is iid, then $W_{jn} = \sum_{i = 1}^{j} Z_{in}$ ensures $\{W_{jn}: 1 \le n\}$ is iid for each $j \in$ E We thus have $P(X_{n + 1} = k|X_n = j) = \begin{cases} P(W_{jn} = k) & \text{for } 0 \le k < M \ P(W_{jn} \ge M) & \text{for } k \ge M \end{cases} 0 \le j \le M$ With the aid of moment generating functions, one may determine distributions for $W_1 = Z_1, W_2 = Z_1 + Z_2, \cdot\cdot\cdot, W_{M} = Z_1 + \cdot\cdot\cdot + Z_M$ These calculations are implemented in an m-procedure called branchp. We simply need the distribution for the iid $Z_{in}$. % file branchp.m % Calculates transition matrix for a simple branching % process with specified maximum population. disp('Do not forget zero probabilities for missing values of Z') PZ = input('Enter PROBABILITIES for individuals '); M = input('Enter maximum allowable population '); mz = length(PZ) - 1; EZ = dot(0:mz,PZ); disp(['The average individual propagation is ',num2str(EZ),]) P = zeros(M+1,M+1); Z = zeros(M,M*mz+1); k = 0:M*mz; a = min(M,k); z = 1; P(1,1) = 1; for i = 1:M % Operation similar to genD z = conv(PZ,z); Z(i,1:i*mz+1) = z; [t,p] = csort(a,Z(i,:)); P(i+1,:) = p; end disp('The transition matrix is P') disp('To study the evolution of the process, call for branchdbn') PZ = 0.01*[15 45 25 10 5]; % Probability distribution for individuals branchp % Call for procedure Do not forget zero probabilities for missing values of Z Enter PROBABILITIES for individuals PZ Enter maximum allowable population 10 The average individual propagation is 1.45 The transition matrix is P To study the evolution of the process, call for branchdbn disp(P) % Optional display of generated P Columns 1 through 7 1.0000 0 0 0 0 0 0 0.1500 0.4500 0.2500 0.1000 0.0500 0 0 0.0225 0.1350 0.2775 0.2550 0.1675 0.0950 0.0350 0.0034 0.0304 0.1080 0.1991 0.2239 0.1879 0.1293 0.0005 0.0061 0.0307 0.0864 0.1534 0.1910 0.1852 0.0001 0.0011 0.0075 0.0284 0.0702 0.1227 0.1623 0.0000 0.0002 0.0017 0.0079 0.0253 0.0579 0.1003 0.0000 0.0000 0.0003 0.0020 0.0078 0.0222 0.0483 0.0000 0.0000 0.0001 0.0005 0.0021 0.0074 0.0194 0.0000 0.0000 0.0000 0.0001 0.0005 0.0022 0.0068 0.0000 0.0000 0.0000 0.0000 0.0001 0.0006 0.0022 Columns 8 through 11 0 0 0 0 0 0 0 0 0.0100 0.0025 0 0 0.0705 0.0315 0.0119 0.0043 0.1481 0.0987 0.0559 0.0440 0.1730 0.1545 0.1179 0.1625 0.1381 0.1574 0.1528 0.3585 0.0832 0.1179 0.1412 0.5771 0.0406 0.0698 0.1010 0.7591 0.0169 0.0345 0.0590 0.8799 0.0062 0.0147 0.0294 0.9468 Note that $p(0, 0) = 1$. If the population ever reaches zero, it is extinct and no more births can occur. Also, if the maximum population (10 in this case) is reached, there is a high probability of returning to that value and very small probability of becoming extinct (reaching zero state). 
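A spot check on the branchp output above: the row for state $j$ is the distribution of $W_j = Z_1 + \cdot\cdot\cdot + Z_j$ (censored at $M$), so the row for state 2 should be the two-fold convolution of PZ. A base-MATLAB sketch (illustrative only):
PZ = 0.01*[15 45 25 10 5];     % individual propagation distribution, as above
row2 = conv(PZ,PZ);            % pmf of Z_1 + Z_2, on values 0, ..., 8
disp(row2)                     % compare with the third row of P displayed above
                               % (no censoring needed here, since 8 < M = 10)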
Example $7$ Inventory problem (continued) In this case, $g(j, D_{n + 1}) = \begin{cases} \text{max } \{M - D_{n + 1}, 0\} & \text{for } 0 \le j < m \ \text{max } \{j - D_{n + 1}, 0\} & \text{for } m \le j \le M \end{cases}$ Numerical example $m = 1$ $M = 3$ $D_n$ is Poisson (1) To simplify writing, use $D$ for $D_n$. Because of the invariance with $n$, set $P(X_{n + 1} = k|X_n = j) = p(j, k) = P(g(j, D_{n + 1}) = k]$ The various cases yield $g(0, D) = \text{max } \{3 - D, 0\}$ $g(0, D) = 0$ iff $D \ge 3$ imples $p(0, 0) = P(D \ge 3)$ $g(0, D) = 1$ iff $D = 2$ imples $p(0, 1) = P(D = 2)$ $g(0, D) = 2$ iff $D = 1$ imples $p(0, 2) = P(D = 1)$ $g(0, D) = 3$ iff $D = 0$ imples $p(0, 3) = P(D = 0)$ $g(1, D) = \text{max } \{1 - D, 0\}$ $g(1, D) = 0$ iff $D \ge 1$ imples $p(1, 0) = P(D \ge 1)$ $g(1, D) = 1$ iff $D = 0$ imples $p(1, 1) = P(D = 0)$ $g(1, D) = 2, 3$ is impossible $g(2, D) = \text{max } \{2 - D, 0\}$ $g(2, D) = 0$ iff $D \ge 2$ imples $p(2, 0) = P(D \ge 2)$ $g(2, D) = 1$ iff $D = 1$ imples $p(2, 1) = P(D = 1)$ $g(2, D) = 2$ iff $D = 0$ imples $p(2, 2) = P(D = 0)$ $g(2, D) = 3$ is impossible $g(3, D) = \text{max } \{3 - D, 0\} = g(0, D)$ so that $p(3, k) = p(0, k)$ The various probabilities for $D$ may be obtained from a table (or may be calculated easily with cpoisson) to give the transition probability matrix P = $\begin{bmatrix} 0.0803 & 0.1839 & 0.3679 & 0.3679 \ 0.6321 & 0.3679 & 0 & 0 \ 0.2642 & 0.3679 & 0.3679 & 0 \ 0.0803 & 0.1839 & 0.3679 & 0.3679 \end{bmatrix}$ The calculations are carried out “by hand” in this case, to exhibit the nature of the calculations. This is a standard problem in inventory theory, involving costs and rewards. An m-procedure inventory1 has been written to implement the function $g$. % file inventory1.m % Version of 1/27/97 % Data for transition probability calculations % for (m,M) inventory policy M = input('Enter value M of maximum stock '); m = input('Enter value m of reorder point '); Y = input('Enter row vector of demand values '); PY = input('Enter demand probabilities '); states = 0:M; ms = length(states); my = length(Y); % Calculations for determining P [y,s] = meshgrid(Y,states); T = max(0,M-y).*(s < m) + max(0,s-y).*(s >= m); P = zeros(ms,ms); for i = 1:ms [a,b] = meshgrid(T(i,:),states); P(i,:) = PY*(a==b)'; end P We consider the case $M = 5$, the reorder point $m = 3$. and demand is Poisson (3). We approximate the Poisson distribution with values up to 20. inventory1 Enter value M of maximum stock 5 % Maximum stock Enter value m of reorder point 3 % Reorder point Enter row vector of demand values 0:20 % Truncated set of demand values Enter demand probabilities ipoisson(3,0:20) % Demand probabilities P = 0.1847 0.1680 0.2240 0.2240 0.1494 0.0498 0.1847 0.1680 0.2240 0.2240 0.1494 0.0498 0.1847 0.1680 0.2240 0.2240 0.1494 0.0498 0.5768 0.2240 0.1494 0.0498 0 0 0.3528 0.2240 0.2240 0.1494 0.0498 0 0.1847 0.1680 0.2240 0.2240 0.1494 0.0498 Example $8$ Remaining lifetime (continued) $g(0, Y) = Y - 1$, so that $p(0, k) = P(Y - 1 = k) = P(Y = k + 1)$ $g(j, Y) = j - 1$ for $j \ge 1$, so that $p(j, k) = \delta_{j - 1, k}$ for $j \ge 1$ The resulting transition probability matrix is P = $\begin{bmatrix} p_1 & p_2 & p_3 & \cdot\cdot\cdot \ 1 & 0 & 0 & \cdot\cdot\cdot \ 0 & 1 & 0 & \cdot\cdot\cdot \ \cdot\cdot\cdot & & & \cdot\cdot\cdot \ \cdot\cdot\cdot \ \cdot\cdot\cdot & & & \cdot\cdot\cdot \end{bmatrix}$ The matrix is an infinite matrix, unless $Y$ is simple. 
Various properties of conditional independence, particularly (CI9), (CI10), and (CI12), may be used to establish the following. The immediate future $X_{n + 1}$ may be replaced by any finite future $U_{n, n+p}$ and the present $X_n$ may be replaced by any extended present $U_{m,n}$. Some results of abstract measure theory show that the finite future $U_{n, n+p}$ may be replaced by the entire future $U^n$. Thus, we may assert

Extended Markov property

$X_N$ is Markov iff

(M*) $\{U^n, U_m\}$ ci $|U_{m,n}$ $\forall 0 \le m \le n$

— □

The Chapman-Kolmogorov equation and the transition matrix

As a special case of the extended Markov property, we have

$\{U^{n + k}, U_n\}$ ci $|X_{n + k}$ for all $n \ge 0$, $k \ge 1$

Setting $g(U^{n + k}, X_{n + k}) = X_{n + k + m}$ and $h(U_n, X_{n + k}) = X_n$ in (CI9), we get

$\{X_{n + k + m}, X_n\}$ ci $|X_{n + k}$ for all $n \ge 0$, $k, m \ge 1$

This is the Chapman-Kolmogorov equation, which plays a central role in the study of Markov sequences. For a discrete state space E, with

$P(X_n = j|X_m = i) = p_{m, n} (i, j)$

this equation takes the form

($CK'$) $p_{m, q} (i, k) = \sum_{j \in E} p_{m,n} (i, j) p_{n,q} (j, k)$  $0 \le m < n < q$

To see that this is so, consider

$P(X_q = k|X_m = i) = E[I_{\{k\}} (X_q)|X_m = i] = E\{E[I_{\{k\}} (X_q)|X_n] |X_m = i\}$

$= \sum_{j} E[I_{\{k\}} (X_q)|X_n = j] p_{m, n} (i, j) = \sum_{j} p_{n, q} (j, k) p_{m, n} (i, j)$

Homogeneous case

For this case, we may put ($CK'$) in a useful matrix form. The conditional probabilities $p^m$ of the form

$p^m (i, k) = P(X_{n + m} = k|X_n = i)$, invariant in $n$,

are known as the m-step transition probabilities. The Chapman-Kolmogorov equation in this case becomes

($CK''$) $p^{m + n} (i, k) = \sum_{j \in E} p^m (i, j) p^n (j, k)$  $\forall i, k \in$ E

In terms of the m-step transition matrix P$^{(m)} = [p^m (i, k)]$, this set of sums is equivalent to the matrix product

($CK''$) P$^{(m + n)}$ = P$^{(m)}$P$^{(n)}$

Now P$^{(2)}$ = P$^{(1)}$P$^{(1)}$ = PP = P$^{2}$, P$^{(3)}$ = P$^{(2)}$P$^{(1)}$ = P$^{3}$, etc. A simple inductive argument based on ($CK''$) establishes

The product rule for transition matrices

The m-step probability matrix P$^{(m)}$ = P$^{m}$, the $m$th power of the transition matrix P

— □

Example $9$ The inventory problem (continued)

For the inventory problem in Example 16.2.7, the three-step transition probability matrix P$^{(3)}$ is obtained by raising P to the third power to get

P$^{(3)}$ = P$^{3}$ = $\begin{bmatrix} 0.2930 & 0.2917 & 0.2629 & 0.1524 \\ 0.2619 & 0.2730 & 0.2753 & 0.1898 \\ 0.2993 & 0.2854 & 0.2504 & 0.1649 \\ 0.2930 & 0.2917 & 0.2629 & 0.1524 \end{bmatrix}$

— □

We consider next the state probabilities for the various stages. That is, we examine the distributions for the various $X_n$, letting $p_k (n) = P(X_n = k)$ for each $k \in$ E. To simplify writing, we consider a finite state space E = $\{1, \cdot\cdot\cdot, M\}$. We use $\pi(n)$ for the row matrix

$\pi(n) = [p_1 (n)\ p_2 (n)\ \cdot\cdot\cdot\ p_M (n)]$

As a consequence of the product rule, we have

Probability distributions for any period

For a homogeneous Markov sequence, the distribution for any $X_n$ is determined by the initial distribution (i.e., for $X_0$) and the transition probability matrix P

VERIFICATION

Suppose the homogeneous sequence $X_N$ has finite state-space E = $\{1, 2, \cdot\cdot\cdot, M\}$. For any $n \ge 0$, let $p_j (n) = P(X_n = j)$ for each $j \in$ E.
Put $\pi(n) = [p_1 (n) p_2 (n) \cdot\cdot\cdot p_M (n)]$ Then $\pi(0) =$ the initial probability distribution $\pi(1) = \pi(0)$P ...... $\pi(n) = \pi(n - 1)$P = $\pi(0)$ P$^{(n)} = \pi (0)$ P$^{n}$ = the $n$th-period distribution The last expression is an immediate consequence of the product rule. Example $10$ Inventory problem (continued) In the inventory system for Examples 3, 7 and 9, suppose the initial stock is $M = 3$. This means that $\pi (0) =$ [0 0 0 1] The product of $\pi (0)$ and $P^3$ is the fourth row of $P^3$, so that the distribution for $X_3$ is $\pi(3) = [p_0 (3)\ \ p_1 (3)\ \ p_2 (3)\ \ p_3 (3)] = [0.2930\ \ 0.2917\ \ 0.2629\ \ 0.1524]$ Thus, given a stock of $M = 3$ at startup, the probability is 0.2917 that $X_3 = 1$. This is the probability of one unit in stock at the end of period number three. Remarks • A similar treatment shows that for the nonhomogeneous case the distribution at any stage is determined by the initial distribution and the class of one-step transition matrices. In the nonhomogeneous case, transition probabilities $p_{n, n+1} (i, j)$ depend on the stage $n$. • A discrete-parameter Markov process, or Markov sequence, is characterized by the fact that each member $X_{n + 1}$ of the sequence is conditioned by the value of the previous member of the sequence. This one-step stochastic linkage has made it customary to refer to a Markov sequence as a Markov chain. In the discrete-parameter Markov case, we use the terms process, sequence, or chain interchangeably. The transition diagram and the transition matrix The previous examples suggest that a Markov chain is a dynamic system, evolving in time. On the other hand, the stochastic behavior of a homogeneous chain is determined completely by the probability distribution for the initial state and the one-step transition probabilities $p(i, j)$ as presented in the transition matrix P. The time-invariant transition matrix may convey a static impression of the system. However, a simple geometric representation, known as the transition diagram, makes it possible to link the unchanging structure, represented by the transition matrix, with the dynamics of the evolving system behavior. Definition A transition diagram for a homogeneous Markov chain is a linear graph with one node for each state and one directed edge for each possible one-step transition between states (nodes). We ignore, as essentially impossible, any transition which has zero transition probability. Thus, the edges on the diagram correspond to positive one-step transition probabilities between the nodes connected. Since for some pair $(i, j)$ of states, we may have $p(i, j) > 0$ but $p(j, i) = 0$ we may have a connecting edge between two nodes in one direction, but none in the other. The system can be viewed as an object jumping from state to state (node to node) at the successive transition times. As we follow the trajectory of this object, we achieve a sense of the evolution of the system. Example $11$ Transition diagram for inventory example Consider, again, the transition matrix P for the inventory problem (rounded to three decimals). P = $\begin{bmatrix} 0.080 & 0.184 & 0.368 & 0.368 \ 0.632 & 0.368 & 0 & 0 \ 0.264 & 0.368 & 0.368 & 0 \ 0.080 & 0.184 & 0.368 & 0.368 \end{bmatrix}$ Figure 16.2.1 shows the transition diagram for this system. At each node corresponding to one of the possible states, the state value is shown. In this example, the state value is one less than the state number. For convenience, we refer to the node for state $k + 1$. 
which has state value $k$, as node $k$. If the state value is zero, there are four possibilities: remain in that condition with probability 0.080; move to node 1 with probability 0.184; move to node 2 with probability 0.368; or move to node 3 with probability 0.368. These are represented by the “self loop” and a directed edge to each of the other nodes. Each of these directed edges is marked with the (conditional) transition probability. On the other hand, the probabilities of reaching state value 0 from each of the others are represented by directed edges into the node for state value 0. A similar situation holds for each other node. Note that the probabilities on edges leaving a node (including a self loop) must total to one, since these correspond to the transition probability distribution from that node. There is no directed edge from node 2 to node 3, since the probability of a transition from value 2 to value 3 is zero. Similarly, there is no directed edge from node 1 to either node 2 or node 3.

Figure 16.2.1. Transition diagram for the inventory system of Example 16.2.11

There is a one-one relation between the transition diagram and the transition matrix P. The transition diagram not only aids in visualizing the dynamic evolution of a chain, but also displays certain structural properties. Often a chain may be decomposed usefully into subchains. Questions of communication and recurrence may be answered in terms of the transition diagram. Some subsets of states are essentially closed, in the sense that if the system arrives at any one state in the subset it can never reach a state outside the subset. Periodicities can sometimes be seen, although it is usually easier to use the diagram to show that periodicities cannot occur.

Classification of states

Many important characteristics of a Markov chain can be studied by considering the number of visits to an arbitrarily chosen, but fixed, state.

Definition

For a fixed state $j$, let

$T_1$ = the time (stage number) of the first visit to state $j$ (after the initial period).

$F_{k} (i, j) = P(T_1 = k|X_0 = i)$, the probability of reaching state $j$ for the first time from state $i$ in $k$ steps.

$F(i, j) = P(T_1 < \infty|X_0 = i) = \sum_{k = 1}^{\infty} F_k (i, j)$, the probability of ever reaching state $j$ from state $i$.

A number of important theorems may be developed for $F_k$ and $F$, although we do not develop them in this treatment. We simply quote them as needed. An important classification of states is made in terms of $F$.

Definition

State $j$ is said to be transient iff $F(j, j) < 1$, and is said to be recurrent iff $F(j, j) = 1$.

Remark. If the state space E is infinite, recurrent states fall into one of two subclasses: positive or null. Only the positive case is common in the infinite case, and that is the only possible case for systems with finite state space.

Sometimes there is a regularity in the structure of a Markov sequence that results in periodicities.

Definition

For state $j$, let

$\delta = \text{greatest common divisor of } \{n: p^n (j, j) > 0\}$

If $\delta > 1$, then state $j$ is periodic with period $\delta$; otherwise, state $j$ is aperiodic. Usually if there are any self loops in the transition diagram (positive probabilities on the diagonal of the transition matrix P) the system is aperiodic. Unless stated otherwise, we limit consideration to the aperiodic case.

Definition

A state $j$ is called ergodic iff it is positive, recurrent, and aperiodic. It is called absorbing iff $p(j, j) = 1$.

A recurrent state is one to which the system eventually returns, hence is visited infinitely often. If it is absorbing, then once it is reached the system remains there at every subsequent step (i.e., it never leaves).
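The period of a state can be checked directly from powers of the transition matrix, by taking the greatest common divisor of the step numbers $n$ for which $p^n(j, j) > 0$. The sketch below uses a deterministic 3-cycle as an arbitrary example (it is not one of the systems treated above).

P = [0 1 0; 0 0 1; 1 0 0];    % deterministic cycle 1 -> 2 -> 3 -> 1
j = 1;  ns = [];
for n = 1:12
  Pn = P^n;
  if Pn(j,j) > 0
    ns = [ns n];              % step numbers at which a return to j is possible
  end
end
d = ns(1);
for i = 2:length(ns)
  d = gcd(d,ns(i));
end
disp(d)                       % the period of state j; here d = 3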
An arrow notation is used to indicate important relations between states.

Definition

We say

State $i$ reaches $j$, denoted $i \to j$, iff $p^n (i, j) > 0$ for some $n > 0$.

States $i$ and $j$ communicate, denoted $i \leftrightarrow j$, iff both $i$ reaches $j$ and $j$ reaches $i$.

By including $j$ reaches $j$ in all cases, the relation $\leftrightarrow$ is an equivalence relation (i.e., is reflexive, symmetric, and transitive). With this relationship, we can define important classes.

Definition

A class of states is communicating iff every state in the class may be reached from every other state in the class (i.e., every pair communicates). A class is closed if no state outside the class can be reached from within the class.

The following important conditions are intuitive and may be established rigorously:

$i \leftrightarrow j$ implies $i$ is recurrent iff $j$ is recurrent

$i \to j$ and $i$ recurrent implies $i \leftrightarrow j$

$i \to j$ and $i$ recurrent implies $j$ recurrent

Limit theorems for finite state space sequences

The following propositions may be established for Markov sequences with finite state space:

• There are no null states, and not all states are transient.

• If a class of states is irreducible (i.e., has no proper closed subsets), then

• All states are recurrent

• All states are aperiodic or all are periodic with the same period.

• If a class C is closed, irreducible, and $i$ is a transient state (necessarily not in $C$), then $F(i, j) = F(i, k)$ for all $j, k \in C$

A limit theorem

If the states in a Markov chain are ergodic (i.e., positive, recurrent, aperiodic), then

$\text{lim}_{n} p^n (i, j) = \pi_j > 0$  $\sum_{j = 1}^{M} \pi_j = 1$  $\pi_j = \sum_{i = 1}^{M} \pi_i p(i, j)$

If, as above, we let

$\pi (n) = [p_1 (n)\ p_2(n) \cdot\cdot\cdot p_M (n)]$ so that $\pi (n) = \pi (0)$ P$^{n}$

the result above may be written

$\pi (n) = \pi (0)$ P$^{n}$ $\to$ $\pi (0)$ P$_{0}$

where

P$_{0}$ = $\begin{bmatrix} \pi_1 & \pi_2 & \cdot\cdot\cdot & \pi_M \\ \pi_1 & \pi_2 & \cdot\cdot\cdot & \pi_M \\ \cdot\cdot\cdot & \cdot\cdot\cdot & \cdot\cdot\cdot & \cdot\cdot\cdot \\ \pi_1 & \pi_2 & \cdot\cdot\cdot & \pi_M \end{bmatrix}$

Each row of P$_0 = \text{lim}_n$ P$^n$ is the long run distribution $\pi = \text{lim}_n \pi (n)$.

Definition

A distribution is stationary iff $\pi = \pi$P

The result above may be stated by saying that the long-run distribution is the stationary distribution. A generating function analysis shows the convergence is exponential in the following sense:

|P$^n$ - P$_0$| $\le$ $\alpha |\lambda|^n$

where $|\lambda|$ is the largest absolute value of the eigenvalues for P other than $\lambda = 1$.

Example $12$ The long run distribution for the inventory example

We use MATLAB to check the eigenvalues for the transition matrix P and to obtain increasing powers of P. The convergence process is readily evident.
P = 0.0803 0.1839 0.3679 0.3679 0.6321 0.3679 0 0 0.2642 0.3679 0.3679 0 0.0803 0.1839 0.3679 0.3679 E = abs(eig(P)) E = 1.0000 0.2602 0.2602 0.0000 format long N = E(2).^[4 8 12] N = 0.00458242348096 0.00002099860496 0.00000009622450 >> P4 = P^4 P4 = 0.28958568915950 0.28593792666752 0.26059678211310 0.16387960205989 0.28156644866011 0.28479107531968 0.26746979455342 0.16617268146679 0.28385952806702 0.28250048636032 0.26288737107246 0.17075261450021 0.28958568915950 0.28593792666752 0.26059678211310 0.16387960205989 >> P8 = P^8 P8 = 0.28580046500309 0.28471421248816 0.26315895715219 0.16632636535655 0.28577030590344 0.28469190218618 0.26316681807503 0.16637097383535 0.28581491438224 0.28471028095839 0.26314057837998 0.16633422627939 0.28580046500309 0.28471421248816 0.26315895715219 0.16632636535655 >> P12 = P^12 P12 = 0.28579560683438 0.28470680858266 0.26315641543927 0.16634116914369 0.28579574073314 0.28470680714781 0.26315628010643 0.16634117201261 0.28579574360207 0.28470687626748 0.26315634631961 0.16634103381085 0.28579560683438 0.28470680858266 0.26315641543927 0.16634116914369 >> error4 = max(max(abs(P^16 - P4))) % Use P^16 for P_0 error4 = 0.00441148012334 % Compare with 0.0045824... >> error8 = max(max(abs(P^16 - P8))) error8 = 2.984007206519035e-05 % Compare with 0.00002099 >> error12 = max(max(abs(P^16 - P12))) error12 = 1.005660185959822e-07 % Compare with 0.00000009622450 The convergence process is clear, and the agreement with the error is close to the predicted. We have not determined the factor $a$, and we have approximated the long run matrix $P_0$ with $P^{16}$. This exhibits a practical criterion for sufficient convergence. If the rows of $P^n$ agree within acceptable precision, then $n$ is sufficiently large. For example, if we consider agreement to four decimal places sufficient, then P10 = P^10 P10 = 0.2858 0.2847 0.2632 0.1663 0.2858 0.2847 0.2632 0.1663 0.2858 0.2847 0.2632 0.1663 0.2858 0.2847 0.2632 0.1663 shows that $n = 10$ is quite sufficient. Simulation of finite homogeneous Markov sequences In the section, "The Quantile Function", the quantile function is used with a random number generator to obtain a simple random sample from a given population distribution. In this section, we adapt that procedure to the problem of simulating a trajectory for a homogeneous Markov sequences with finite state space. Elements and terminology 1. States and state numbers. We suppose there are m states, usually carrying a numerical value. For purposes of analysis and simulation, we number the states 1 through m. Computation is carried out with state numbers; if desired, these can be translated into the actual state values after computation is completed. 2. Stages, transitions, period numbers, trajectories and time. We use the term stage and period interchangeably. It is customary to number the periods or stages beginning with zero for the initial stage. The period number is the number of transitions to reach that stage from the initial one. Zero transitions are required to reach the original stage (period zero), one transition to reach the next (period one), two transitions to reach period two, etc. We call the sequence of states encountered as the system evolves a trajectory or a chain. The terms “sample path” or “realization of the process” are also used in the literature. Now if the periods are of equal time length, the number of transitions is a measure of the elapsed time since the chain originated. We find it convenient to refer to time in this fashion. 
At time $k$ the chain has reached the period numbered $k$. The trajectory is $k + 1$ stages long, so time or period number is one less than the number of stages. 3. The transition matrix and the transition distributions. For each state, there is a conditional transition probability distribution for the next state. These are arranged in a transition matrix. The $i$th row consists of the transition distribution for selecting the next-period state when the current state number is $i$. The transition matrix $P$ thus has nonnegative elements, with each row summing to one. Such a matrix is known as a stochastic matrix. The fundamental simulation strategy 1. A fundamental strategy for sampling from a given population distribution is developed in the unit on the Quantile Function. If $Q$ is the quantile function for the population distribution and $U$ is a random variable distributed uniformly on the interval [0, 1], then $X = Q(U)$ has the desired distribution. To obtain a sample from the uniform distribution use a random number generator. This sample is “transformed” by the quantile function into a sample from the desired distribution. 2. For a homogeneous chain, if we are in state $k$, we have a distribution for selecting the next state. If we use the quantile function for that distribution and a number produced by a random number generator, we make a selection of the next state based on that distribution. A succession of these choices, with the selection of the next state made in each case from the distribution for the current state, constitutes a valid simulation of a trajectory. Arrival times and recurrence times The basic simulation produces one or more trajectories of a specified length. Sometimes we are interested in continuing until first arrival at (or visit to) a specific target state or any one of a set of target states. The time (in transitions) to reach a target state is one less than the number of stages in the trajectory which begins with the initial state and ends with the target state reached. • If the initial state is not in the target set, we speak of the arrival time. • If the initial state is in the target set, the arrival time would be zero. In this case, we do not stop at zero but continue until the next visit to a target state (possibly the same as the initial state). We call the number of transitions in this case the recurrence time. • In some instances, it may be desirable to know the time to complete visits to a prescribed number of the target states. Again there is a choice of treatment in the case the initial set is in the target set. Data files For use of MATLAB in simulation, we find it convenient to organize the appropriate data in an m-file. • In every case, we need the transition matrix $P$. Its size indicates the number of states (say by the length of any row or column). • If the states are to have values other than the state numbers, these may be included in the data file, although they may be added later, in response to a prompt. • If long trajectories are to be produced, it may be desirable to determine the fraction of times each state is realized. A comparison with the long-run probabilities for the chain may be of interest. In this case, the data file may contain the long-run probability distribution. Usually, this is obtained by taking one row of a sufficiently large power of the transition matrix. This operation may be performed after the data file is called for but before the simulation procedure begins. 
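Before turning to the m-procedures themselves, the quantile-function strategy can be stated in a few lines of code. The sketch below is not chainset or mchain; it forms the cumulative row sums of P once and then, at each step, selects the next state by comparing a uniform random number with the cumulative transition distribution for the current state. The small matrix, the initial state, and the trajectory length are arbitrary choices for illustration.

P  = [0.5 0.3 0.2; 0.1 0.6 0.3; 0.2 0.2 0.6];   % an arbitrary stochastic matrix
F  = cumsum(P,2);              % cumulative transition distributions, row by row
n  = 1000;                     % number of transitions
X  = zeros(1,n+1);
X(1) = 1;                      % initial state number
for k = 1:n
  u = rand;
  X(k+1) = 1 + sum(u > F(X(k),:));   % quantile function for the current row
end
disp(sum(X == 2)/(n+1))        % fraction of the stages spent in state 2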
An example data file used to illustrate the various procedures is shown below. These data were generated artificially and have no obvious interpretations in terms of a specific systems to be modeled. However, they are sufficiently complex to provide nontrivial illustrations of the simulation procedures. % file markovp1.m % Artificial data for a Markov chain, used to % illustrate the operation of the simulation procedures. P = [0.050 0.011 0.155 0.155 0.213 0.087 0.119 0.190 0.008 0.012 0.103 0.131 0.002 0.075 0.013 0.081 0.134 0.115 0.181 0.165 0.103 0.018 0.128 0.081 0.137 0.180 0.149 0.051 0.009 0.144 0.051 0.098 0.118 0.154 0.057 0.039 0.153 0.112 0.117 0.101 0.016 0.143 0.200 0.062 0.099 0.175 0.108 0.054 0.062 0.081 0.029 0.085 0.156 0.158 0.011 0.156 0.088 0.090 0.055 0.172 0.110 0.059 0.020 0.212 0.016 0.113 0.086 0.062 0.204 0.118 0.084 0.171 0.009 0.138 0.140 0.150 0.023 0.003 0.125 0.157 0.105 0.123 0.121 0.167 0.149 0.040 0.051 0.059 0.086 0.099 0.192 0.093 0.191 0.061 0.094 0.123 0.106 0.065 0.040 0.035]; states = 10:3:37; PI = [0.0849 0.0905 0.1125 0.1268 0.0883 0.1141 ... 0.1049 0.0806 0.0881 0.1093]; % Long-run distribution The largest absolute value of the eigenvalues (other than one) is 0.1716. Since $0.1716^{16} \approx 5.6 \cdot 10^{-13}$, we take any row of $P^{16}$ as the long-run probabilities. These are included in the matrix PI in the m-file, above. The examples for the various procedures below use this set of artificial data, since the purpose is to illustrate the operation of the procedures. The setup and the generating m-procedures The m-procedure chainset sets up for simulation of Markov chains. It prompts for input of the transition matrix P, the states (if different from the state numbers), the long-run distribution (if available), and the set of target states if it is desired to obtain arrival or recurrence times. The procedure determines the number of states from the size of P and calculates the information needed for the quantile function. It then prompts for a call for one of the generating procedures. The m-procedure mchain, as do the other generating procedures below, assumes chainset has been run, so that commonly used data are available in appropriate form. The procedure prompts for the number of stages (length of the trajectory to be formed) and for the initial state. When the trajectory is produced, the various states in the trajectory and the fraction or relative frequency of each is displayed. If the long-run distribution has been supplied by chainset, this distribution is included for comparison. In the examples below, we reset the random number generator (set the “seed” to zero) for purposes of comparison. 
However, in practice, it may be desirable to make several runs without resetting the seed, to allow greater effective “randomness.” Example $13$ markovp1 % Call for data chainset % Call for setup procedure Enter the transition matrix P Enter the states if not 1:ms states % Enter the states States are 1 10 2 13 3 16 4 19 5 22 6 25 7 28 8 31 9 34 10 37 Enter the long-run probabilities PI % Enter the long-run distribution Enter the set of target states [16 22 25] % Not used with mchain Call for for appropriate chain generating procedure rand('seed',0) mchain % Call for generating procedure Enter the number n of stages 10000 % Note the trajectory length Enter the initial state 16 State Frac P0 % Statistics on the trajectory 10.0000 0.0812 0.0849 13.0000 0.0952 0.0905 16.0000 0.1106 0.1125 19.0000 0.1226 0.1268 22.0000 0.0880 0.0883 25.0000 0.1180 0.1141 28.0000 0.1034 0.1049 31.0000 0.0814 0.0806 34.0000 0.0849 0.0881 37.0000 0.1147 0.1093 To view the first part of the trajectory of states, call for TR disp(TR') 0 1 2 3 4 5 6 7 8 9 10 16 16 10 28 34 37 16 25 37 10 13 The fact that the fractions or relative frequencies approximate the long-run probabilities is an expression of a fundamental limit property of probability theory. This limit property, which requires somewhat sophisticated technique to establish, justifies a relative frequency interpretation of probability. The procedure arrival assumes the setup provided by chainset, including a set $E$ of target states. The procedure prompts for the number r of repetitions and the initial state. Then it produces $r$ succesive trajectories, each starting with the prescribed initial state and ending on one of the target states. The arrival times vary from one run to the next. Various statistics are computed and displayed or made available. In the single-run case ($r = 1$), the trajectory may be displayed. An auxiliary procedure plotdbn may be used in the multirun case to plot the distribution of arrival times. Example $14$ Arrival time to a target set of states rand('seed',0) arrival % Assumes chainset has been run, as above Enter the number of repetitions 1 % Single run case The target state set is: 16 22 25 Enter the initial state 34 % Specified initial state The arrival time is 6 % Data on trajectory The state reached is 16 To view the trajectory of states, call for TR disp(TR') % Optional call to view trajectory 0 1 2 3 4 5 6 34 13 10 28 34 37 16 rand('seed',0) arrival Enter the number of repetitions 1000 % Call for 1000 repetitions The target state set is: 16 22 25 Enter the initial state 34 % Specified initial state The result of 1000 repetitions is: % Run data (see optional calls below) Term state Rel Freq Av time 16.0000 0.3310 3.3021 22.0000 0.3840 3.2448 25.0000 0.2850 4.3895 The average arrival time is 3.59 The standard deviation is 3.207 The minimum arrival time is 1 The maximum arrival time is 23 To view the distribution of arrival times, call for dbn To plot the arrival time distribution, call for plotdbn plotdbn % See Figure 16.2.2 Figure 16.2.2. Time distribution for Example 16.2.14 It would be difficult to establish analytically estimates of arrival times. The simulation procedure gives a reasonable “feel” for these times and how they vary. The procedure recurrence is similar to the procedure arrival. If the initial state is not in the target set, it behaves as does the procedure arrival and stops on the first visit to the target set. However, if the initial state is in the target set, the procedures are different. 
The procedure arrival stops with zero transitions, since it senses that it has “arrived.” We are usually interested in having at least one transition– back to the same state or to another state in the target set. We call these times recurrence times. Example $15$ rand('seed',0) recurrence Enter the number of repititions 1 The target state set is: 16 22 25 Enter the initial state 22 Figure 16.2.3. Transition time distribution for Example 16.2.15 The recurrence time is 1 The state reached is 16 To view the trajectory of state numbers, call for TR disp(TR') 0 1 22 16 recurrence Enter the number of repititions 1000 The target state set is: 16 22 25 Enter the initial state 25 The result of 1000 repetitions is: Term state Rel Freq Av time 16.0000 0.3680 2.8723 22.0000 0.2120 4.6745 25.0000 0.4200 3.1690 The average recurrence time is 3.379 The standard deviation is 3.0902 The minimum recurrence time is 1 The maximum recurrence time is 20 To view the distribution of recurrence times, call for dbn To plot the recurrence time distribution, call for plotdbn % See Figure 16.2.3 The procedure kvis stops when a designated number $k$ of states are visited. If $k$ is greater than the number of target states, or if no $k$ is designated, the procedure stops when all have been visited. For $k = 1$, the behavior is the same as arrival. However, that case is better handled by the procedure arrival, which provides more statistics on the results. Example $16$ rand('seed',0) kvis % Assumes chainset has been run Enter the number of repetitions 1 The target state set is: 16 22 25 Enter the number of target states to visit 2 Enter the initial state 34 The time for completion is 7 To view the trajectory of states, call for TR disp(TR') 0 1 2 3 4 5 6 7 34 13 10 28 34 37 16 25 rand('seed',0) kvis Enter the number of repetitions 100 The target state set is: 16 22 25 Enter the number of target states to visit % Default-- visit all three Enter the initial state 31 The average completion time is 17.57 The standard deviation is 8.783 The minimum completion time is 5 The maximum completion time is 42 To view a detailed count, call for D. The first column shows the various completion times; the second column shows the numbers of trials yielding those times The first goal of this somewhat sketchy introduction to Markov processes is to provide a general setting which gives insight into the essential character and structure of such systems. The important case of homogenous chains is introduced in such a way that their algebraic structure appears as a logical consequence of the Markov propertiy. The general theory is used to obtain some tools for formulating homogeneous chains in practical cases. Some MATLAB tools for studying their behavior are applied to an artificial example, which demonstrates their general usefulness in studying many practical, applied problems.
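As a final illustration, the arrival-time idea used by the procedure arrival can be sketched in a self-contained way. The code below repeatedly simulates from a fixed starting state until a state in a target set is reached and reports the average number of transitions. The matrix, the target set, and the repetition count are arbitrary choices, and the next-state selection is the same quantile-function step used in the trajectory sketch above.

P = [0.5 0.3 0.2; 0.1 0.6 0.3; 0.2 0.2 0.6];   % arbitrary stochastic matrix
F = cumsum(P,2);
target = [3];                   % target state set
r = 1000;                       % number of repetitions
T = zeros(1,r);
for i = 1:r
  s = 1;  t = 0;                % initial state and transition count
  while ~any(s == target)
    s = 1 + sum(rand > F(s,:)); % select the next state
    t = t + 1;
  end
  T(i) = t;
end
disp(mean(T))                   % average arrival time (in transitions)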
Exercise $1$

The pair $\{X, Y\}$ ci $|H$. $X$ ~ exponential ($u/3$), given $H = u$; $Y$ ~ exponential $(u/5)$, given $H = u$; and $H$ ~ uniform [1, 2]. Determine a general formula for $P(X > r, Y > s)$, then evaluate for $r = 3$, $s = 10$.

Answer

$P(X > r, Y > s|H = u) = e^{-ur/3} e^{-us/5} = e^{-au}$, $a = \dfrac{r}{3} + \dfrac{s}{5}$

$P(X > r, Y > s) = \int e^{-au} f_H (u)\ du = \int_{1}^{2} e^{-au}\ du = \dfrac{1}{a} [e^{-a} - e^{-2a}]$

For $r = 3$, $s= 10$, $a = 3$, $P(X > 3, Y > 10) = \dfrac{1}{3} (e^{-3} - e^{-6}) = 0.0158$ (a short numerical check appears after Exercise 5 below).

Exercise $2$

A small random sample of size $n = 12$ is taken to determine the proportion of the student body which favors a proposal to expand the student Honor Council by adding two additional members “at large.” Prior information indicates that this proportion is about 0.6 = 3/5. From a Bayesian point of view, the population proportion is taken to be the value of a random variable $H$. It seems reasonable to assume a prior distribution $H$ ~ beta (4,3), giving a maximum of the density at (4 - 1)/(4 + 3 - 2) = 3/5. Seven of the twelve interviewed favor the proposition. What is the best mean-square estimate of the proportion, given this result? What is the conditional distribution of $H$, given this result?

Answer

$H$ ~ Beta ($r, s$), $r = 4$, $s = 3$, $n = 12$, $k = 7$

$E[H|S = k] = \dfrac{k + r}{n + r + s} = \dfrac{7 + 4}{12 + 4 + 3} = \dfrac{11}{19}$

Exercise $3$

Let $\{X_i: 1 \le i \le n\}$ be a random sample, given $H$. Set $W = (X_1, X_2, \cdot\cdot\cdot, X_n)$. Suppose $X$ is conditionally geometric $(u)$, given $H = u$; i.e., suppose $P(X = k|H = u) = u(1 - u)^k$ for all $k \ge 0$. If $H$ ~ uniform on [0, 1], determine the best mean square estimator for $H$, given $W$.

Answer

$E[H|W = k] = \dfrac{E[HI_{\{k\}} (W)]}{E[I_{\{k\}} (W)]} = \dfrac{E\{HE[I_{\{k\}} (W)|H]\}}{E\{E[I_{\{k\}} (W)|H]\}}$

$= \dfrac{\int u P(W = k|H = u) f_H (u)\ du}{\int P(W = k|H = u) f_H (u)\ du}$, $k = (k_1, k_2, \cdot\cdot\cdot, k_n)$

$P(W = k|H = u) = \prod_{i = 1}^{n} u (1 - u)^{k_i} = u^n (1 - u)^{k^*}$, $k^* = \sum_{i = 1}^{n} k_i$

$E[H|W = k] = \dfrac{\int_{0}^{1} u^{n + 1} (1 - u)^{k^*}\ du}{\int_{0}^{1} u^{n} (1 - u)^{k^*}\ du} = \dfrac{\Gamma (n + 2) \Gamma (k^* + 1)}{\Gamma (n + k^* + 3)} \cdot \dfrac{\Gamma (n + k^* + 2)}{\Gamma (n + 1) \Gamma (k^* + 1)} = \dfrac{n + 1}{n + k^* + 2}$

Exercise $4$

Let $\{X_i: 1 \le i \le n\}$ be a random sample, given $H$. Set $W = (X_1, X_2, \cdot\cdot\cdot, X_n)$. Suppose $X$ is conditionally Poisson $(u)$, given $H = u$; i.e., suppose $P(X = k|H = u) = e^{-u} u^k/k!$. If $H$ ~ gamma $(m, \lambda)$, determine the best mean square estimator for $H$, given $W$.

Answer

$E[H|W = k] = \dfrac{\int u P(W = k|H = u) f_H (u)\ du}{\int P(W = k|H = u) f_H (u)\ du}$

$P(W = k|H = u) = \prod_{i = 1}^{n} e^{-u} \dfrac{u^{k_i}}{k_i !} = e^{-nu} \dfrac{u^{k^*}}{A}$, where $A = \prod_{i = 1}^{n} k_i!$ and $k^* = \sum_{i = 1}^{n} k_i$

$f_H(u) = \dfrac{\lambda^m u^{m - 1} e^{-\lambda u}}{\Gamma (m)}$

$E[H|W = k] = \dfrac{\int_{0}^{\infty} u^{k^* + m} e^{-(\lambda + n)u}\ du}{\int_{0}^{\infty} u^{k^* + m - 1} e^{-(\lambda + n)u}\ du} = \dfrac{\Gamma (m + k^* + 1)}{(\lambda + n)^{k^* + m + 1}} \cdot \dfrac{(\lambda + n)^{k^* + m}}{\Gamma (m + k^*)} = \dfrac{m + k^*}{\lambda + n}$

Exercise $5$

Suppose $\{N, H\}$ is independent and $\{N, Y\}$ ci $|H$. Use properties of conditional expectation and conditional independence to show that

$E[g(N) h(Y)|H] = E[g(N)] E[h(Y)|H]$ a.s.

Answer

$E[g(N)h(Y)|H] = E[g(N)|H] E[h(Y)|H]$ a.s. by (CI6) and $E[g(N)|H] = E[g(N)]$ a.s. by (CE5).
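A quick numerical check of the closed form in Exercise 1 (the grid step in the numerical integration is an arbitrary choice):

r = 3;  s = 10;  a = r/3 + s/5;
u = 1:0.001:2;
pnum = trapz(u,exp(-a*u));        % numerical integral of e^{-au} over [1,2]
pcf  = (exp(-a) - exp(-2*a))/a;   % closed form from the answer above
disp([pnum pcf])                  % both approximately 0.0158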
Exercise $6$ Consider the composite demand $D$ introduced in the section on Random Sums in "Random Selecton" $D = \sum_{n = 0}^{\infty} I_{\{k\}} (N) X_n$ where $X_n = \sum_{k = 0}^{n} Y_k$, $Y_0 = 0$ Suppose $\{N, H\}$ is independent, $\{N, Y_i\}$ ci $|H$ for all $i$, and $E[Y_i|H] = e(H)$, invariant with $i$. Show that $E[D|H] = E[N]E[Y|H]$ a.s.. Answer $E[D|H] = \sum_{n = 1}^{\infty} E[I_{\{n\}} (N) X_n|H]$ a.s. $E[I_{\{n\}} (N) X_n |H] = \sum_{k = 1}^{n} E[I_{\{n\}} (N) Y_k|H] = \sum_{k = 1}^{n} P(N = n) E[Y|H] = P(N = n) nE[Y|H]$ a.s. $E[D|H] = \sum_{n = 1}^{\infty} n P(N = n) E[Y|H] = E[N] E[Y|H]$ a.s. Exercise $7$ The transition matrix $P$ for a homogeneous Markov chain is as follows (in m-file npr16_07.m): $P = \begin{bmatrix} 0.23 & 0.32 & 0.02 & 0.22 & 0.21 \ 0.29 & 0.41 & 0.10 & 0.08 & 0.12 \ 0.22 & 0.07 & 0.31 & 0.14 & 0.26 \ 0.32 & 0.15 & 0.05 & 0.33 & 0.15 \ 0.08 & 0.23 & 0.31 & 0.09 & 0.29 \end{bmatrix}$ 1. Obtain the absolute values of the eigenvalues, then consider increasing powers of $P$ to observe the convergence to the long run distribution. 2. Take an arbitrary initial distribution $p0$ (as a row matrix). The product $p0 * p^k$ is the distribution for stage $k$. Note what happens as $k$ becomes large enough to give convergence to the long run transition matrix. Does the end result change with change of initial distribution $p0$? Answer ev = abs(eig(P))' ev = 1.0000 0.0814 0.0814 0.3572 0.2429 a = ev(4).^[2 4 8 16 24] a = 0.1276 0.0163 0.0003 0.0000 0.0000 % By P^16 the rows agree to four places p0 = [0.5 0 0 0.3 0.2]; % An arbitrarily chosen p0 p4 = p0*P^4 p4 = 0.2297 0.2622 0.1444 0.1644 0.1992 p8 = p0*P^8 p8 = 0.2290 0.2611 0.1462 0.1638 0.2000 p16 = p0*P^16 p16 = 0.2289 0.2611 0.1462 0.1638 0.2000 p0a = [0 0 0 0 1]; % A second choice of p0 p16a = p0a*P^16 p16a = 0.2289 0.2611 0.1462 0.1638 0.2000 Exercise $8$ The transition matrix $P$ for a homogeneous Markov chain is as follows (in m-file npr16_08.m): $P = \begin{bmatrix} 0.2 & 0.5 & 0.3 & 0 & 0 & 0 & 0 \ 0.6 & 0.1 & 0.3 & 0 & 0 & 0 & 0 \ 0.2 & 0.7 & 0.1 & 0 & 0 & 0 & 0 \ 0 & 0 & 0 & 0.6 & 0.4 & 0 & 0 \ 0 & 0 & 0 & 0.5 & 0.5 & 0 & 0 \ 0.1 & 0.3 & 0 & 0.2 & 0.1 & 0.1 & 0.2 \ 0.1 & 0.2 & 0.1 & 0.2 & 0.2 & 0.2 & 0 \end{bmatrix}$ 1. Note that the chain has two subchains, with states {1, 2, 3} and {4, 5}. Draw a transition diagram to display the two separate chains. Can any state in one subchain be reached from any state in the other? 2. Check the convergence as in part (a) of Exercise 16.3.7. What happens to the state probabilities for states 6 and 7 in the long run? What does that signify for these states? Can these states be reached from any state in either of the subchains? How would you classify these states? Answer Increasing power $p^n$ show the probability of being in states 6, 7 go to zero. These states cannot be reached from any of the other states. Exercise $9$ The transition matrix $P$ for a homogeneous Markov chain is as follows (in m-file npr16_09.m): $P = \begin{bmatrix} 0.1 & 0.2 & 0.1 & 0.3 & 0.2 & 0 & 0.1 \ 0 & 0.6 & 0 & 0 & 0 & 0 & 0.4 \ 0 & 0 & 0.2 & 0.5 & 0 & 0.3 & 0 \ 0 & 0 & 0.6 & 0.1 & 0 & 0.3 & 0 \ 0.2 & 0.2 & 0.1 & 0.2 & 0 & 0.1 & 0.2 \ 0 & 0 & 0.2 & 0.7 & 0 & 0.1 & 0 \ 0 & 0.5 & 0 & 0 & 0 & 0 & 0.5 \end{bmatrix}$ 1. Check the transition matrix $P$ for convergence, as in part (a) of Exercise 16.3.7. How many steps does it take to reach convergence to four or more decimal places? Does this agree with the theoretical result? 2. Examine the long run transition matrix. Identify transient states. 
3. The convergence does not make all rows the same. Note, however, that there are two subgroups of similar rows. Rearrange rows and columns in the long run Matrix so that identical rows are grouped. This suggests subchains. Rearrange the rows and columns in the transition matrix $P$ and see that this gives a pattern similar to that for the matrix in Exercise 16.7.8. Raise the rearranged transition matrix to the power for convergence. Answer Examination of $p^{16}$ suggests set {2, 7} and {3, 4, 6} of states form subchains. Rearrangement of $P$ may be done as follows: PA = P([2 7 3 4 6 1 5], [2 7 3 4 6 1 5]) PA = 0.6000 0.4000 0 0 0 0 0 0.5000 0.5000 0 0 0 0 0 0 0 0.2000 0.5000 0.3000 0 0 0 0 0.6000 0.1000 0.3000 0 0 0 0 0.2000 0.7000 0.1000 0 0 0.2000 0.1000 0.1000 0.3000 0 0.1000 0.2000 0.2000 0.2000 0.1000 0.2000 0.1000 0.2000 0 PA16 = PA^16 PA16 = 0.5556 0.4444 0 0 0 0 0 0.5556 0.4444 0 0 0 0 0 0 0 0.3571 0.3929 0.2500 0 0 0 0 0.3571 0.3929 0.2500 0 0 0 0 0.3571 0.3929 0.2500 0 0 0.2455 0.1964 0.1993 0.2193 0.1395 0.0000 0.0000 0.2713 0.2171 0.1827 0.2010 0.1279 0.0000 0.0000 It is clear that original states 1 and 5 are transient. Exercise $10$ Use the m-procedure inventory1 (in m-file inventory1.m) to obtain the transition matrix for maximum stock $M = 8$, reorder point $m = 3$, and demand $D$ ~ Poisson(4). a. Suppose initial stock is six. What will the distribution for $X_n$, $n = 1, 3, 5$ (i.e., the stock at the end of periods 1, 3, 5, before restocking)? b. What will the long run distribution be? Answer inventory1 Enter value M of maximum stock 8 Enter value m of reorder point 3 Enter row vector of demand values 0:20 Enter demand probabilities ipoisson(4,0:20) Result is in matrix P p0 = [0 0 0 0 0 0 1 0 0]; p1 = p0*P p1 = Columns 1 through 7 0.2149 0.1563 0.1954 0.1954 0.1465 0.0733 0.0183 Columns 8 through 9 0 0 p3 = p0*P^3 p3 = Columns 1 through 7 0.2494 0.1115 0.1258 0.1338 0.1331 0.1165 0.0812 Columns 8 through 9 0.0391 0.0096 p5 = p0*P^5 p5 = Columns 1 through 7 0.2598 0.1124 0.1246 0.1311 0.1300 0.1142 0.0799 Columns 8 through 9 0.0386 0.0095 a = abs(eig(P))' a = Columns 1 through 7 1.0000 0.4427 0.1979 0.0284 0.0058 0.0005 0.0000 Columns 8 through 9 0.0000 0.0000 a(2)^16 ans = 2.1759e-06 % Convergence to at least five decimals for P^16 pinf = p0*P^16 % Use arbitrary p0, pinf approx p0*P^16 pinf = Columns 1 through 7 0.2622 0.1132 0.1251 0.1310 0.1292 0.1130 0.0789 Columns 8 through 9 0.0380 0.0093
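The long-run distributions obtained in Exercises 7 and 10 by raising $P$ to a high power can also be obtained as the left eigenvector of $P$ associated with eigenvalue one. This is only a sketch of the eigenvector route, using the matrix of Exercise 7; the power method used in the answers is equally valid.

P = [0.23 0.32 0.02 0.22 0.21
     0.29 0.41 0.10 0.08 0.12
     0.22 0.07 0.31 0.14 0.26
     0.32 0.15 0.05 0.33 0.15
     0.08 0.23 0.31 0.09 0.29];   % the matrix of Exercise 7
[V,D] = eig(P');                  % left eigenvectors of P
[mn,i] = min(abs(diag(D) - 1));   % locate the eigenvalue 1
pi0 = real(V(:,i))';
pi0 = pi0/sum(pi0)                % agrees with the long-run row of P^16 above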
We use the term m-function to designate a user-defined function as distinct from the basic MATLAB functions which are part of the MATLAB package. For example, the m-function minterm produces the specified minterm vector. An m-procedure (or sometimes a procedure) is an m-file containing a set of MATLAB commands which carry out a prescribed set of operations. Generally, these will prompt for (or assume) certain data upon which the procedure is carried out. We use the term m-program to refer to either an m-function or an m-procedure. In addition to the m-programs there is a collection of m-files with properly formatted data which can be entered into the workspace by calling the file. Although the m-programs were written for MATLAB version 4.2, they work for versions 5.1, 5.2, and 7.04. The latter versions offer some new features which may make more efficient implementation of some of the m-programs, and which make possible some new ones. With one exception (so noted), these are not explored in this collection. MATLAB features Utilization of MATLAB resources is made possible by a systematic analysis of some features of the basic probability model. In particular, the minterm analysis of logical (or Boolean) combinations of events and the analysis of the structure of simple random variables with the aid of indicator functions and minterm analysis are exploited. A number of standard features of MATLAB are utilized extensively. In addition to standard matrix algebra, we use: Array arithmetic. This involves element by element calculations. For example, if a, b are matrices of the same size, then a.*b is the matrix obtained by multiplying corresponding elements in the two matrices to obtain a new matrix of the same size. Relational operations, such as less than, equal, etc. to obtain zero-one matrices with ones at element positions where the conditions are met. Logical operations on zero-one matrices utilizing logical operators and, or, and not, as well as certain related functions such as any, all, not, find, etc. Note. Relational operations and logical operations produce zero-one arrays, called logical arrays, which MATLAB treats differently from zero-one numeric arrays. A rectangular array in which some rows are logical arrays but others are not is treated as a numeric array. Any zero-one rectangular array can be converted to a numeric array (matrix) by the command A = ones(size(A)).*A, Certain MATLAB functions, such as meshgrid, sum, cumsum, prod, cumprod are used repeatedly. The function dot for dot product does not work if either array is a logical array. If one of the pair is numeric, the command C = A*B' will work. Auxiliary user-defined building blocks csort.m Description of Code: One of the most useful is a special sorting and consolidation operation implemented in the m-function csort. A standard problem arises when each of a non distinct set of values has an associated probability. To obtain the distribution, it is necessary to sort the values and add the probabilities associated with each distinct value. The following m-function achieves these operations: function [t,p] = csort(T,P). T and P are matrices with the same number of elements. Values of T are sorted and identical values are consolidated; values of P corresponding to identical values of T are added. A number of derivative functions and procedures utilize csort. The following two are useful. 
Answer function [t,p] = csort(T,P) % CSORT [t,p] = csort(T,P) Sorts T, consolidates P % Version of 4/6/97 % Modified to work with Versions 4.2 and 5.1, 5.2 % T and P matrices with the same number of elements % The vector T(:)' is sorted: % * Identical values in T are consolidated; % * Corresponding values in P are added. T = T(:)'; n = length(T); [TS,I] = sort(T); d = find([1,TS(2:n) - TS(1:n-1) >1e-13]); % Determines distinct values t = TS(d); % Selects the distinct values m = length(t) + 1; P = P(I); % Arranges elements of P F = [0 cumsum(P(:)')]; Fd = F([d length(F)]); % Cumulative sums for distinct values p = Fd(2:m) - Fd(1:m-1); % Separates the sums for these values distinct.m Description of Code: distinct.m function y = distinct(T) determines and sorts the distinct members of matrix $T$. Answer function y = distinct(T) % DISTINCT y = distinct(T) Disinct* members of T % Version of 5/7/96 Rev 4/20/97 for version 4 & 5.1, 5.2 % Determines distinct members of matrix T. % Members which differ by no more than 10^{-13} % are considered identical. y is a row % vector of the distinct members. TS = sort(T(:)'); n = length(TS); d = [1 abs(TS(2:n) - TS(1:n-1)) >1e-13]; y = TS(find(d)); freq.m Description of Code: freq.m sorts the distinct members of a matrix, counts the number of occurrences of each value, and calculates the cumulative relative frequencies. Answer % FREQ file freq.m Frequencies of members of matrix % Version of 5/7/96 % Sorts the distinct members of a matrix, counts % the number of occurrences of each value, and % calculates the cumulative relative frequencies. T = input('Enter matrix to be counted '); [m,n] = size(T); [t,f] = csort(T,ones(m,n)); p = cumsum(f)/(m*n); disp(['The number of entries is ',num2str(m*n),]) disp(['The number of distinct entries is ',num2str(length(t)),] ) disp(' ') dis = [t;f;p]'; disp(' Values Count Cum Frac') disp(dis) dsum.m Description of Code: dsum.mfunction y = dsum(v,w) determines and sorts the distinct elements among the sums of pairs of elements of row vectors v and w. Answer function y = dsum(v,w) % DSUM y = dsum(v,w) Distinct pair sums of elements % Version of 5/15/97 % y is a row vector of distinct % values among pair sums of elements % of matrices v, w. % Uses m-function distinct [a,b] = meshgrid(v,w); t = a+b; y = distinct(t(:)'); rep.m Description of Code: rep.mfunction y = rep(A,m,n) replicates matrix A, m times vertically and n times horizontally. Essentially the same as the function repmat in MATLAB version 5, released December, 1996. Answer function y = rep(A,m,n) % REP y = rep(A,m,n) Replicates matrix A % Version of 4/21/96 % Replicates A, % m times vertically, % n times horizontally % Essentially the same as repmat in version 5.1, 5.2 [r,c] = size(A); R = [1:r]'; C = [1:c]'; v = R(:,ones(1,m)); w = C(:,ones(1,n)); y = A(v,w); elrep.m Description of Code: elrep.mfunction y = elrep(A,m,n) replicates each element of A, $m$ times vertically and $n$ times horizontally. Answer function y = elrep(A,m,n) % ELREP y = elrep(A,m,n) Replicates elements of A % Version of 4/21/96 % Replicates each element, % m times vertically, % n times horizontally [r,c] = size(A); R = 1:r; C = 1:c; v = R(ones(1,m),:); w = C(ones(1,n),:); y = A(v,w); kronf.m Description of Code: kronf.mfunction y = kronf(A,B) determines the Kronecker product of matrices A,B Achieves the same result for full matrices as the MATLAB function kron. 
Answer function y = kronf(A,B) % KRONF y = kronf(A,B) Kronecker product % Version of 4/21/96 % Calculates Kronecker product of full matrices. % Uses m-functions elrep and rep % Same result for full matrices as kron for version 5.1, 5.2 [r,c] = size(B); [m,n] = size(A); y = elrep(A,r,c).*rep(B,m,n); colcopy.m Description of Code: colcopy.mfunction y = colcopy(v,n) treats row or column vector v as a column vector and makes a matrix with $n$ columns of v. Answer function y = colcopy(v,n) % COLCOPY y = colcopy(v,n) n columns of v % Version of 6/8/95 (Arguments reversed 5/7/96) % v a row or column vector % Treats v as column vector % and makes n copies % Procedure based on "Tony's trick" [r,c] = size(v); if r == 1 v = v'; end y = v(:,ones(1,n)); colcopyi.m Description of Code: colcopyi.mfunction y = colcopyi(v,n) treats row or column vector v as a column vector, reverses the order of the elements, and makes a matrix with n columns of the reversed vector. Answer function y = colcopyi(v,n) % COLCOPYI y = colcopyi(v,n) n columns in reverse order % Version of 8/22/96 % v a row or column vector. % Treats v as column vector, % reverses the order of the % elements, and makes n copies. % Procedure based on "Tony's trick" N = ones(1,n); [r,c] = size(v); if r == 1 v = v(c:-1:1)'; else v = v(r:-1:1); end y = v(:,N); rowcopy.m Description of Code: rowcopy.mfunction y = rowcopy(v,n) treats row or column vector v as a row vector and makes a matrix with $n$ rows of v. Answer function y = rowcopy(v,n) % ROWCOPY y = rowcopy(v,n) n rows of v % Version of 5/7/96 % v a row or column vector % Treats v as row vector % and makes n copies % Procedure based on "Tony's trick" [r,c] = size(v); if c == 1 v = v'; end y = v(ones(1,n),:); repseq.m Description of Code: repseq.mfunction y = repseq(V,n) replicates vector $V$ $n$ times—horizontally if $V$ is a row vector and vertically if $V$ is a column vector. Answer function y = repseq(V,n); % REPSEQ y = repseq(V,n) Replicates vector V n times % Version of 3/27/97 % n replications of vector V % Horizontally if V a row vector % Vertically if V a column vector m = length(V); s = rem(0:n*m-1,m)+1; y = V(s); total.m Description of Code: total.m Total of all elements in a matrix, calculated by: total(x) = sum(sum(x)). Answer function y = total(x) % TOTAL y = total(x) % Version of 8/1/93 % Total of all elements in matrix x. y = sum(sum(x)); dispv.m Description of Code: dispv.m Matrices $A, B$ are transposed and displayed side by side. Answer function y = dispv(A,B) % DISPV y = dispv(A,B) Transpose of A, B side by side % Version of 5/3/96 % A, B are matrices of the same size % They are transposed and displayed % side by side. y = [A;B]'; roundn.m Description of Code: roundn.mfunction y = roundn(A,n) rounds matrix A to $n$ decimal places. Answer function y = roundn(A,n); % ROUNDN y = roundn(A,n) % Version of 7/28/97 % Rounds matrix A to n decimals y = round(A*10^n)/10^n; arrep.m Description of Code: arrep.mfunction y = arrep(n,k) forms all arrangements, with repetition, of $k$ elements from the sequence $1: n$. Answer function y = arrep(n,k); % ARREP y = arrep(n,k); % Version of 7/28/97 % Computes all arrangements of k elements of 1:n, % with repetition allowed. k may be greater than n. % If only one input argument n, then k = n. % To get arrangements of column vector V, use % V(arrep(length(V),k)). 
N = 1:n; if nargin == 1 k = n; end y = zeros(k,n^k); for i = 1:k y(i,:) = rep(elrep(N,1,n^(k-i)),1,n^(i-1)); end Minterm vectors and probabilities The analysis of logical combinations of events (as sets) is systematized by the use of the minterm expansion. This leads naturally to the notion of minterm vectors. These are zero-one vectors which can be combined by logical operations. Production of the basic minterm patterns is essential to a number of operations. The following m-programs are key elements of various other programs. minterm.m Description of Code: minterm.mfunction y = minterm(n,k) generates the $k$th minterm vector in a class of $n$. Answer function y = minterm(n,k) % MINTERM y = minterm(n,k) kth minterm of class of n % Version of 5/5/96 % Generates the kth minterm vector in a class of n % Uses m-function rep y = rep([zeros(1,2^(n-k)) ones(1,2^(n-k))],1,2^(k-1)); mintable.m Description of Code: mintable.mfunction y = mintable(n) generates a table of minterm vectors by repeated use of the m-function minterm. Answer function y = mintable(n) % MINTABLE y = mintable(n) Table of minterms vectors % Version of 3/2/93 % Generates a table of minterm vectors % Uses the m-function minterm y = zeros(n,2^n); for i = 1:n y(i,:) = minterm(n,i); end minvec3.m Description of Code: minvec3.m sets basic minterm vectors A, B, C, A$^c$, B$^c$, C$^c$, for the class $\{A, B, C\}$. (Similarly for minvec4.m, minvec5.m, etc.) Answer % MINVEC3 file minvec3.m Basic minterm vectors % Version of 1/31/95 A = minterm(3,1); B = minterm(3,2); C = minterm(3,3); Ac = ~A; Bc = ~B; Cc = ~C; disp('Variables are A, B, C, Ac, Bc, Cc') disp('They may be renamed, if desired.') minmap Description of Code: minmapfunction y = minmap(pm) reshapes a row or column vector pm of minterm probabilities into minterm map format. Answer function y = minmap(pm) % MINMAP y = minmap(pm) Reshapes vector of minterm probabilities % Version of 12/9/93 % Reshapes a row or column vector pm of minterm % probabilities into minterm map format m = length(pm); n = round(log(m)/log(2)); a = fix(n/2); if m ~= 2^n disp('The number of minterms is incorrect') else y = reshape(pm,2^a,2^(n-a)); end binary.m Description of Code: binary.mfunction y = binary(d,n) converts a matrix d of floating point nonnegative integers to a matrix of binary equivalents, one on each row. Adapted from m-functions written by Hans Olsson and by Simon Cooke. Each matrix row may be converted to an unspaced string of zeros and ones by the device ys = setstr(y + '0'). Answer function y = binary(d,n) % BINARY y = binary(d,n) Integers to binary equivalents % Version of 7/14/95 % Converts a matrix d of floating point, nonnegative % integers to a matrix of binary equivalents. Each row % is the binary equivalent (n places) of one number. % Adapted from the programs dec2bin.m, which shared % first prize in an April 95 Mathworks contest. % Winning authors: Hans Olsson from Lund, Sweden, % and Simon Cooke from Glasgow, UK. % Each matrix row may be converted to an unspaced string % of zeros and ones by the device: ys = setstr(y + '0'). if nargin < 2, n = 1; end % Allows omission of argument n [f,e] = log2(d); n = max(max(max(e)),n); y = rem(floor(d(:)*pow2(1-n:0)),2); mincalc.m Description of Code: mincalc.m The m-procedure mincalc determines minterm probabilities from suitable data. For a discussion of the data formatting and certain problems, see 2.6. 
Answer % MINCALC file mincalc.m Determines minterm probabilities % Version of 1/22/94 Updated for version 5.1 on 6/6/97 % Assumes a data file which includes % 1. Call for minvecq to set q basic minterm vectors, each (1 x 2^q) % 2. Data vectors DV = matrix of md data Boolean combinations of basic sets-- % Matlab produces md minterm vectors-- one on each row. % The first combination is always A|Ac (the whole space) % 3. DP = row matrix of md data probabilities. % The first probability is always 1. % 4. Target vectors TV = matrix of mt target Boolean combinations. % Matlab produces a row minterm vector for each target combination. % If there are no target combinations, set TV = []; [md,nd] = size(DV); ND = 0:nd-1; ID = eye(nd); % Row i is minterm vector i-1 [mt,nt] = size(TV); MT = 1:mt; rd = rank(DV); if rd < md disp('Data vectors are NOT linearly independent') else disp('Data vectors are linearly independent') end % Identification of which minterm probabilities can be determined from the data % (i.e., which minterm vectors are not linearly independent of data vectors) AM = zeros(1,nd); for i = 1:nd AM(i) = rd == rank([DV;ID(i,:)]); % Checks for linear dependence of each end am = find(AM); % minterm vector CAM = ID(am,:)/DV; % Determination of coefficients for the available minterms pma = DP*CAM'; % Calculation of probabilities of available minterms PMA = [ND(am);pma]'; if sum(pma < -0.001) > 0 % Check for data consistency disp('Data probabilities are INCONSISTENT') else % Identification of which target probabilities are computable from the data CT = zeros(1,mt); for j = 1:mt CT(j) = rd == rank([DV;TV(j,:)]); end ct = find(CT); CCT = TV(ct,:)/DV; % Determination of coefficients for computable targets ctp = DP*CCT'; % Determination of probabilities disp(' Computable target probabilities') disp([MT(ct); ctp]') end % end for "if sum(pma < -0.001) > 0" disp(['The number of minterms is ',num2str(nd),]) disp(['The number of available minterms is ',num2str(length(pma)),]) disp('Available minterm probabilities are in vector pma') disp('To view available minterm probabilities, call for PMA') mincalct.m Description of Code: mincalct.m Modification of mincalc. Assumes mincalc has been run, calls for new target vectors and performs same calculations as mincalc. Answer % MINCALCT file mincalct.m Aditional target probabilities % Version of 9/1/93 Updated for version 5 on 6/6/97 % Assumes a data file which includes % 1. Call for minvecq to set q basic minterm vectors. % 2. Data vectors DV. The first combination is always A|Ac. % 3. Row matrix DP of data probabilities. The first entry is always 1. TV = input('Enter matrix of target Boolean combinations '); [md,nd] = size(DV); [mt,nt] = size(TV); MT = 1:mt; rd = rank(DV); CT = zeros(1,mt); % Identification of computable target probabilities for j = 1:mt CT(j) = rd == rank([DV;TV(j,:)]); end ct = find(CT); CCT = TV(ct,:)/DV; % Determination of coefficients for computable targets ctp = DP*CCT'; % Determination of probabilities disp(' Computable target probabilities') disp([MT(ct); ctp]') Independent events minprob.m Description of Code: minprob.mfunction y = minprob(p) calculates minterm probabilities for the basic probabilities in row or column vector p. Uses the m-functions mintable, colcopy. Answer function y = minprob(p) % MINPROB y = minprob(p) Minterm probs for independent events % Version of 4/7/96 % p is a vector [P(A1) P(A2) ... P(An)], with % {A1,A2, ... An} independent. 
% y is the row vector of minterm probabilities % Uses the m-functions mintable, colcopy n = length(p); M = mintable(n); a = colcopy(p,2^n); % 2^n columns, each the vector p m = a.*M + (1 - a).*(1 - M); % Puts probabilities into the minterm % pattern on its side (n by 2^n) y = prod(m); % Product of each column of m imintest.m Description of Code: imintest.mfunction y = imintest(pm) checks minterm probabilities for independence. Answer function y = imintest(pm) % IMINTEST y = imintest(pm) Checks minterm probs for independence % Version of 1/25//96 % Checks minterm probabilities for independence % Uses the m-functions mintable and minprob m = length(pm); n = round(log(m)/log(2)); if m ~= 2^n y = 'The number of minterm probabilities is incorrect'; else P = mintable(n)*pm'; pt = minprob(P'); a = fix(n/2); s = abs(pm - pt) > 1e-7; if sum(s) > 0 disp('The class is NOT independent') disp('Minterms for which the product rule fails') y = reshape(s,2^a,2^(n-a)); else y = 'The class is independent'; end end ikn.m Description of Code: ikn.mfunction y = ikn(P,k) determines the probability of the occurrence of exactly $k$ of the $n$ independent events whose probabilities are in row or column vector P (k may be a row or column vector of nonnegative integers less than or equal to $n$). Answer function y = ikn(P,k) % IKN y = ikn(P,k) Individual probabilities of k of n successes % Version of 5/15/95 % Uses the m-functions mintable, minprob, csort n = length(P); T = sum(mintable(n)); % The number of successes in each minterm pm = minprob(P); % The probability of each minterm [t,p] = csort(T,pm); % Sorts and consolidates success numbers % and adds corresponding probabilities y = p(k+1); ckn.m Description of Code: ckn.mfunction y = ckn(P,k) determines the probability of the occurrence of $k$ or more of the $n$ independent events whose probabilities are in row or column vector P ($k$ may be a row or column vector) Answer function y = ckn(P,k) % CKN y = ckn(P,k) Probability of k or more successes % Version of 5/15/95 % Probabilities of k or more of n independent events % Uses the m-functions mintable, minprob, csort n = length(P); m = length(k); T = sum(mintable(n)); % The number of successes in each minterm pm = minprob(P); % The probability of each minterm [t,p] = csort(T,pm); % Sorts and consolidates success numbers % and adds corresponding probabilities for i = 1:m % Sums probabilities for each k value y(i) = sum(p(k(i)+1:n+1)); end parallel.m Description of Code: parallel.mfunction y = parallel(p) determines the probability of a parallel combination of the independent events whose probabilities are in row or column vector p. Answer function y = parallel(p) % PARALLEL y = parallel(p) Probaaability of parallel combination % Version of 3/3/93 % Probability of parallel combination. % Individual probabilities in row matrix p. y = 1 - prod(1 - p); Conditional probability and conditional idependence bayes.m Description of Code: bayes.m produces a Bayesian reversal of conditional probabilities. The input consists of $P(E|A_i)$ and $P(A_i)$ for a disjoint class $\{A_i: 1 \le i \le n\}$ whose union contains $E$. The procedure calculates $P(A_i|E)$ and $P(A_i|E^c)$ for $1 \le i \le n$. Answer % BAYES file bayes.m Bayesian reversal of conditional probabilities % Version of 7/6/93 % Input P(E|Ai) and P(Ai) % Calculates P(Ai|E) and P(Ai|Ec) disp('Requires input PEA = [P(E|A1) P(E|A2) ... P(E|An)]') disp(' and PA = [P(A1) P(A2) ... P(An)]') disp('Determines PAE = [P(A1|E) P(A2|E) ... 
P(An|E)]') disp(' and PAEc = [P(A1|Ec) P(A2|Ec) ... P(An|Ec)]') PEA = input('Enter matrix PEA of conditional probabilities '); PA = input('Enter matrix PA of probabilities '); PE = PEA*PA'; PAE = (PEA.*PA)/PE; PAEc = ((1 - PEA).*PA)/(1 - PE); disp(' ') disp(['P(E) = ',num2str(PE),]) disp(' ') disp(' P(E|Ai) P(Ai) P(Ai|E) P(Ai|Ec)') disp([PEA; PA; PAE; PAEc]') disp('Various quantities are in the matrices PEA, PA, PAE, PAEc, named above') odds.m Description of Code: odds.m The procedure calculates posterior odds for a specified profile $E$. Assumes data have been entered by the procedure oddsdf or oddsdp. Answer % ODDS file odds.m Posterior odds for profile % Version of 12/4/93 % Calculates posterior odds for profile E % Assumes data has been entered by oddsdf or oddsdp E = input('Enter profile matrix E '); C = diag(a(:,E))'; % aa = a(:,E) is an n by n matrix whose ith column D = diag(b(:,E))'; % is the E(i)th column of a. The elements on the % diagonal are b(i, E(i)), 1 <= i <= n % Similarly for b(:,E) R = prod(C./D)*(p1/p2); % Calculates posterior odds for profile disp(' ') disp(['Odds favoring Group 1: ',num2str(R),]) if R > 1 disp('Classify in Group 1') else disp('Classify in Group 2') end oddsdf.m Description of Code: oddsdf.m Sets up calibrating frequencies for calculating posterior odds. Answer % ODDSDF file oddsdf.m Frequencies for calculating odds % Version of 12/4/93 % Sets up calibrating frequencies % for calculating posterior odds A = input('Enter matrix A of frequencies for calibration group 1 '); B = input('Enter matrix B of frequencies for calibration group 2 '); n = length(A(:,1)); % Number of questions (rows of A) m = length(A(1,:)); % Number of answers to each question p1 = sum(A(1,:)); % Number in calibration group 1 p2 = sum(B(1,:)); % Number in calibration group 2 a = A/p1; b = B/p2; disp(' ') % Blank line in presentation disp(['Number of questions = ',num2str(n),]) % Size of profile disp(['Answers per question = ',num2str(m),]) % Usually 3: yes, no, uncertain disp(' Enter code for answers and call for procedure "odds" ') disp(' ') oddsdp.m Description of Code: oddsdp.m Sets up conditional probabilities for odds calculations. Answer % ODDSDP file oddsdp.m Conditional probs for calculating posterior odds % Version of 12/4/93 % Sets up conditional probabilities % for odds calculations a = input('Enter matrix A of conditional probabilities for Group 1 '); b = input('Enter matrix B of conditional probabilities for Group 2 '); p1 = input('Probability p1 an individual is from Group 1 '); n = length(a(:,1)); m = length(a(1,:)); p2 = 1 - p1; disp(' ') % Blank line in presentation disp(['Number of questions = ',num2str(n),]) % Size of profile disp(['Answers per question = ',num2str(m),]) % Usually 3: yes, no, uncertain disp(' Enter code for answers and call for procedure "odds" ') disp(' ') Bernoulli and multinomial trials btdata.m Description of Code: btdata.m Sets parameter $p$ and number $n$ of trials for generating Bernoulli sequences. Prompts for bt to generate the trials. Answer % BTDATA file btdata.m Parameters for Bernoulli trials % Version of 11/28/92 % Sets parameters for generating Bernoulli trials % Prompts for bt to generate the trials n = input('Enter n, the number of trials '); p = input('Enter p, the probability of success on each trial '); disp(' ') disp(' Call for bt') disp(' ') bt.m Description of Code: bt.m Generates Bernoulli sequence for parameters set by btdata.
Calculates relative frequency of “successes.” Answer % BT file bt.m Generates Bernoulli sequence % version of 8/11/95 Revised 7/31/97 for version 4.2 and 5.1, 5.2 % Generates Bernoulli sequence for parameters set by btdata % Calculates relative frequency of 'successes' clear SEQ; B = rand(n,1) <= p; % ones for random numbers <= p F = sum(B)/n; % relative frequency of ones N = [1:n]'; % display details disp(['n = ',num2str(n),' p = ',num2str(p),]) disp(['Relative frequency = ',num2str(F),]) SEQ = [N B]; clear N; clear B; disp('To view the sequence, call for SEQ') disp(' ') binomial.m Description of Code: binomial.m Uses ibinom and cbinom to generate tables of the individual and cumulative binomial probabilities for specified parameters. Note that for calculation in MATLAB it is usually much more convenient and efficient to use ibinom and/or cbinom. Answer % BINOMIAL file binomial.m Generates binomial tables % Version of 12/10/92 (Display modified 4/28/96) % Calculates a TABLE of binomial probabilities % for specified n, p, and row vector k, % Uses the m-functions ibinom and cbinom. n = input('Enter n, the number of trials '); p = input('Enter p, the probability of success '); k = input('Enter k, a row vector of success numbers '); y = ibinom(n,p,k); z = cbinom(n,p,k); disp([' n = ',int2str(n),' p = ' num2str(p)]) H = [' k P(X = k) P(X >= k)']; disp(H) disp([k;y;z]') multinom.m Description of Code: multinom.m Multinomial distribution (small $N, m$). Answer % MULTINOM file multinom.m Multinomial distribution % Version of 8/24/96 % Multinomial distribution (small N, m) N = input('Enter the number of trials '); m = input('Enter the number of types '); p = input('Enter the type probabilities '); M = 1:m; T = zeros(m^N,N); for i = 1:N a = rowcopy(M,m^(i-1)); a = a(:); a = colcopy(a,m^(N-i)); T(:,N-i+1) = a(:); % All possible strings of the types end MT = zeros(m^N,m); for i = 1:m MT(:,i) = sum(T'==i)'; end clear T % To conserve memory disp('String frequencies for type k are in column matrix MT(:,k)') P = zeros(m^N,N); for i = 1:N a = rowcopy(p,m^(i-1)); a = a(:); a = colcopy(a,m^(N-i)); P(:,N-i+1) = a(:); % Strings of type probabilities end PS = prod(P'); % Probability of each string clear P % To conserve memory disp('String probabilities are in row matrix PS') Some matching problems Cardmatch.m Description of Code: Cardmatch.m Sampling to estimate the probability of one or more matches when one card is drawn from each of $nd$ identical decks of $c$ cards. The number nsns of samples is specified. Answer % CARDMATCH file cardmatch.m Prob of matches in cards from identical decks % Version of 6/27/97 % Estimates the probability of one or more matches % in drawing cards from nd decks of c cards each % Produces a supersample of size n = nd*ns, where % ns is the number of samples % Each sample is sorted, and then tested for differences % between adjacent elements. 
Matches are indicated by % zero differences between adjacent elements in sorted sample c = input('Enter the number c of cards in a deck '); nd = input('Enter the number nd of decks '); ns = input('Enter the number ns of sample runs '); X = 1:c; % Population values PX = (1/c)*ones(1,c); % Population probabilities N = nd*ns; % Length of supersample U = rand(1,N); % Matrix of n random numbers T = dquant(X,PX,U); % Supersample obtained with quantile function; % the function dquant determines quantile % function values of random number sequence U ex = sum(T)/N; % Sample average EX = dot(X,PX); % Population mean vx = sum(T.^2)/N - ex^2; % Sample variance VX = dot(X.^2,PX) - EX^2; % Population variance A = reshape(T,nd,ns); % Chops supersample into ns samples of size nd DS = diff(sort(A)); % Sorts each sample m = sum(DS==0)>0; % Differences between elements in each sample % Zero difference iff there is a match pm = sum(m)/ns; % Fraction of samples with one or more matches Pm = 1 - comb(c,nd)*gamma(nd + 1)/c^(nd); % Theoretical probability of match disp('The sample is in column vector T') % Displays of results disp(['Sample average ex = ', num2str(ex),]) disp(['Population mean E(X) = ',num2str(EX),]) disp(['Sample variance vx = ',num2str(vx),]) disp(['Population variance V(X) = ',num2str(VX),]) disp(['Fraction of samples with one or more matches pm = ', num2str(pm),]) disp(['Probability of one or more matches in a sample Pm = ', num2str(Pm),]) trialmatch.m Description of Code: trialmatch.m Estimates the probability of matches in $n$ independent trials from identical distributions. The sample size and number of trials must be kept relateively small to avoid exceeding available memory. Answer % TRIALMATCH file trialmatch.m Estimates probability of matches % in n independent trials from identical distributions % Version of 8/20/97 % Estimates the probability of one or more matches % in a random selection from n identical distributions % with a small number of possible values % Produces a supersample of size N = n*ns, where % ns is the number of samples. Samples are separated. % Each sample is sorted, and then tested for differences % between adjacent elements. Matches are indicated by % zero differences between adjacent elements in sorted sample. 
X = input('Enter the VALUES in the distribution '); PX = input('Enter the PROBABILITIES '); c = length(X); n = input('Enter the SAMPLE SIZE n '); ns = input('Enter the number ns of sample runs '); N = n*ns; % Length of supersample U = rand(1,N); % Vector of N random numbers T = dquant(X,PX,U); % Supersample obtained with quantile function; % the function dquant determines quantile % function values for random number sequence U ex = sum(T)/N; % Sample average EX = dot(X,PX); % Population mean vx = sum(T.^2)/N - ex^2; % Sample variance VX = dot(X.^2,PX) - EX^2; % Population variance A = reshape(T,n,ns); % Chops supersample into ns samples of size n DS = diff(sort(A)); % Sorts each sample m = sum(DS==0)>0; % Differences between elements in each sample % -- Zero difference iff there is a match pm = sum(m)/ns; % Fraction of samples with one or more matches d = arrep(c,n); p = PX(d); p = reshape(p,size(d)); % This step not needed in version 5.1 ds = diff(sort(d))==0; mm = sum(ds)>0; m0 = find(1-mm); pm0 = p(:,m0); % Probabilities for arrangements with no matches P0 = sum(prod(pm0)); disp('The sample is in column vector T') % Displays of results disp(['Sample average ex = ', num2str(ex),]) disp(['Population mean E(X) = ',num2str(EX),]) disp(['Sample variance vx = ',num2str(vx),]) disp(['Population variance V(X) = ',num2str(VX),]) disp(['Fraction of samples with one or more matches pm = ', num2str(pm),]) disp(['Probability of one or more matches in a sample Pm = ', num2str(1-P0),]) Distributions comb.m Description of Code: comb.mfunction y = comb(n,k) Calculates binomial coefficients. $k$ may be a matrix of integers between 0 and $n$. The result $y$ is a matrix of the same dimensions. Answer function y = comb(n,k) % COMB y = comb(n,k) Binomial coefficients % Version of 12/10/92 % Computes binomial coefficients C(n,k) % k may be a matrix of integers between 0 and n % result y is a matrix of the same dimensions y = round(gamma(n+1)./(gamma(k + 1).*gamma(n + 1 - k))); ibinom.m Description of Code: ibinom.m Binomial distribution — individual terms. We have two m-functions ibinom and cbinom for calculating individual and cumulative terms $P(S_n = k)$ and $P(S_n \ge k)$, respectively. $P(S_n = k) = C(n, k) p^k (1 - p)^{n - k}$ and $P(S_n \ge k) = \sum_{r = k}^{n} P(S_n = r)$ $0 \le k \le n$ For these m-functions, we use a modification of a computation strategy employed by S. Weintraub: Tables of the Cumulative Binomial Probability Distribution for Small Values of p, 1963. The book contains a particularly helpful error analysis, written by Leo J. Cohen. Experimentation with sums and expectations indicates a precision for ibinom and cbinom calculations that is better than $10^{-10}$ for $n = 1000$ and $p$ from 0.01 to 0.99. A similar precision holds for values of $n$ up to 5000, provided $np$ or $nq$ are limited to approximately 500. Above this value for $np$ or $nq$, the computations break down. For individual terms, function y = ibinom(n,p,k) calculates the probabilities for $n$ a positive integer, $k$ a matrix of integers between 0 and $n$. The output is a matrix of the corresponding binomial probabilities. 
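As a quick usage sketch (not part of the original listing, which follows), and assuming the m-functions of this appendix are on the MATLAB path, the individual and cumulative terms can be generated side by side and checked against each other:
k = 0:5;
y = ibinom(5,0.3,k);   % individual terms P(S_n = k) for n = 5, p = 0.3
z = cbinom(5,0.3,k);   % cumulative terms P(S_n >= k)
disp([k; y; z]')       % z(1) should be 1 and the y values should sum to 1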
Answer function y = ibinom(n,p,k) % IBINOM y = ibinom(n,p,k) Individual binomial probabilities % Version of 10/5/93 % n is a positive integer; p is a probability % k a matrix of integers between 0 and n % y = P(X = k) (a matrix of probabilities) if p > 0.5 a = [1 ((1-p)/p)*ones(1,n)]; b = [1 n:-1:1]; c = [1 1:n]; br = (p^n)*cumprod(a.*b./c); bi = fliplr(br); else a = [1 (p/(1-p))*ones(1,n)]; b = [1 n:-1:1]; c = [1 1:n]; bi = ((1-p)^n)*cumprod(a.*b./c); end y = bi(k+1); ipoisson.m Description of Code: ipoisson.m Poisson distribution — individual terms. As in the case of the binomial distribution, we have an m-function for the individual terms and one for the cumulative case. The m-functions ipoisson and cpoisson use a computational strategy similar to that used for the binomial case. Not only does this work for large $\mu$, but the precision is at least as good as that for the binomial m-functions. Experience indicates that the m-functions are good for $\mu \le 700$. They break down at about 710, largely because of limitations of the MATLAB exponential function. For individual terms, function y = ipoisson(mu,k) calculates the probabilities for $mu$ a positive integer, $k$ a row or column vector of nonnegative integers. The output is a row vector of the corresponding Poisson probabilities. Answer function y = ipoisson(mu,k) % IPOISSON y = ipoisson(mu,k) Individual Poisson probabilities % Version of 10/15/93 % mu = mean value % k may be a row or column vector of integer values % y = P(X = k) (a row vector of probabilities) K = max(k); p = exp(-mu)*cumprod([1 mu*ones(1,K)]./[1 1:K]); y = p(k+1); cpoisson.m Description of Code: cpoisson.m Poisson distribution—cumulative terms. function y = cpoisson(mu,k) calculates $P(X \ge k)$, where $k$ may be a row or a column vector of nonnegative integers. The output is a row vector of the corresponding probabilities. Answer function y = cpoisson(mu,k) % CPOISSON y = cpoisson(mu,k) Cumulative Poisson probabilities % Version of 10/15/93 % mu = mean value mu % k may be a row or column vector of integer values % y = P(X >= k) (a row vector of probabilities) K = max(k); p = exp(-mu)*cumprod([1 mu*ones(1,K)]./[1 1:K]); pc = [1 1 - cumsum(p)]; y = pc(k+1); nbinom.m Description of Code: nbinom.m Negative binomial — function y = nbinom(m, p, k) calculates the probability that the $m$th success in a Bernoulli sequence occurs on the $k$th trial. Answer function y = nbinom(m, p, k) % NBINOM y = nbinom(m, p, k) Negative binomial probabilities % Version of 12/10/92 % Probability the mth success occurs on the kth trial % m a positive integer; p a probability % k a matrix of integers greater than or equal to m % y = P(X=k) (a matrix of the same dimensions as k) q = 1 - p; y = ((p^m)/gamma(m)).*(q.^(k - m)).*gamma(k)./gamma(k - m + 1); gaussian.m Description of Code: gaussian.mfunction y = gaussian(m, v, t) calculates the Gaussian (Normal) distribution function for mean value $m$, variance $v$, and matrix $t$ of values. The result $y = P(X \le t)$ is a matrix of the same dimensions as $t$.
Answer function y = gaussian(m,v,t) % GAUSSIAN y = gaussian(m,v,t) Gaussian distribution function % Version of 11/18/92 % Distribution function for X ~ N(m, v) % m = mean, v = variance % t is a matrix of evaluation points % y = P(X<=t) (a matrix of the same dimensions as t) u = (t - m)./sqrt(2*v); if u >= 0 y = 0.5*(erf(u) + 1); else y = 0.5*erfc(-u); end gaussdensity.m Description of Code: gaussdensity.mfunction y = gaussdensity(m,v,t) calculates the Gaussian density function $f_X (t)$ for mean value $m$, variance $v$, and matrix $t$ of values. Answer function y = gaussdensity(m,v,t) % GAUSSDENSITY y = gaussdensity(m,v,t) Gaussian density % Version of 2/8/96 % m = mean, v = variance % t is a matrix of evaluation points y = exp(-((t-m).^2)/(2*v))/sqrt(v*2*pi); norminv.m Description of Code: norminv.mfunction y = norminv(m,v,p) calculates the inverse (the quantile function) of the Gaussian distribution function for mean value $m$, variance $v$, and $p$ a matrix of probabilities. Answer function y = norminv(m,v,p) % NORMINV y = norminv(m,v,p) Inverse gaussian distribution % (quantile function for gaussian) % Version of 8/17/94 % m = mean, v = variance % p is a matrix of probabilities if p >= 0 u = sqrt(2)*erfinv(2*p - 1); else u = -sqrt(2)*erfinv(1 - 2*p); end y = sqrt(v)*u + m; gammadbn.m Description of Code: gammadbn.mfunction y = gammadbn(alpha, lambda, t) calculates the distribution function for a gamma distribution with parameters alpha, lambda. $t$ is a matrix of evaluation points. The result is a matrix of the same size. Answer function y = gammadbn(alpha, lambda, t) % GAMMADBN y = gammadbn(alpha, lambda, t) Gamma distribution % Version of 12/10/92 % Distribution function for X ~ gamma (alpha, lambda) % alpha, lambda are positive parameters % t may be a matrix of positive numbers % y = P(X<= t) (a matrix of the same dimensions as t) y = gammainc(lambda*t, alpha); beta.m Description of Code: beta.mfunction y = beta(r,s,t) calculates the density function for the beta distribution with parameters $r, s$. $t$ is a matrix of numbers between zero and one. The result is a matrix of the same size. Answer function y = beta(r,s,t) % BETA y = beta(r,s,t) Beta density function % Version of 8/5/93 % Density function for Beta (r,s) distribution % t is a matrix of evaluation points between 0 and 1 % y is a matrix of the same dimensions as t y = (gamma(r+s)/(gamma(r)*gamma(s)))*(t.^(r-1).*(1-t).^(s-1)); betadbn.m Description of Code: betadbn.mfunction y = betadbn(r,s,t) calculates the distribution function for the beta distribution with parameters $r, s$. $t$ is a matrix of evaluation points. The result is a matrix of the same size. Answer function y = betadbn(r,s,t) % BETADBN y = betadbn(r,s,t) Beta distribution function % Version of 7/27/93 % Distribution function for X beta(r,s) % y = P(X<=t) (a matrix of the same dimensions as t) y = betainc(t,r,s); weibull.m Description of Code: weibull.mfunction y = weibull(alpha,lambda,t) calculates the density function for the Weibull distribution with parameters alpha, lambda. $t$ is a matrix of evaluation points. The result is a matrix of the same size.
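Because norminv is the quantile function for the distribution function computed by gaussian (both listed above), the two can be checked against each other; this is a minimal hedged sketch with illustrative values, assuming both m-functions are on the path (the weibull listing follows):
t = [-1 0 1 2];
p = gaussian(2,9,t);    % P(X <= t) for X ~ N(2, 9)
tt = norminv(2,9,p);    % applying the quantile function should recover t (up to roundoff)
disp([t; p; tt]')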
Answer function y = weibull(alpha,lambda,t) % WEIBULL y = weibull(alpha,lambda,t) Weibull density % Version of 1/24/91 % Density function for X ~ Weibull (alpha, lambda, 0) % t is a matrix of positive evaluation points % y is a matrix of the same dimensions as t y = alpha*lambda*(t.^(alpha - 1)).*exp(-lambda*(t.^alpha)); weibulld.m Description of Code: weibulld.mfunction y = weibulld(alpha, lambda, t) calculates the distribution function for the Weibull distribution with parameters alpha, lambda. $t$ is a matrix of evaluation points. The result is a matrix of the same size. Answer function y = weibulld(alpha, lambda, t) % WEIBULLD y = weibulld(alpha, lambda, t) Weibull distribution function % Version of 1/24/91 % Distribution function for X ~ Weibull (alpha, lambda, 0) % t is a matrix of positive evaluation points % y = P(X<=t) (a matrix of the same dimensions as t) y = 1 - exp(-lambda*(t.^alpha)); Binomial, Poisson, and Gaussian distributions bincomp.m Description of Code: bincomp.m Graphical comparison of the binomial, Poisson, and Gaussian distributions. The procedure calls for binomial parameters $n, p$, determines a reasonable range of evaluation points and plots on the same graph the binomial distribution function, the Poisson distribution function, and the gaussian distribution function with the adjustment called the “continuity correction.” Answer % BINCOMP file bincomp.m Approx of binomial by Poisson and gaussian % Version of 5/24/96 % Gaussian adjusted for "continuity correction" % Plots distribution functions for specified parameters n, p n = input('Enter the parameter n '); p = input('Enter the parameter p '); a = floor(n*p-2*sqrt(n*p)); a = max(a,1); % Prevents zero or negative indices b = floor(n*p+2*sqrt(n*p)); k = a:b; Fb = cumsum(ibinom(n,p,0:n)); % Binomial distribution function Fp = cumsum(ipoisson(n*p,0:n)); % Poisson distribution function Fg = gaussian(n*p,n*p*(1 - p),k+0.5); % Gaussian distribution function stairs(k,Fb(k+1)) % Plotting details hold on plot(k,Fp(k+1),'-.',k,Fg,'o') hold off xlabel('t values') % Graph labeling details ylabel('Distribution function') title('Approximation of Binomial by Poisson and Gaussian') grid legend('Binomial','Poisson','Adjusted Gaussian') disp('See Figure for results') poissapp.m Description of Code: poissapp.m Graphical comparison of the Poisson and Gaussian distributions. The procedure calls for a value of the Poisson parameter mu, then calculates and plots the Poisson distribution function, the Gaussian distribution function, and the adjusted Gaussian distribution function. Answer % POISSAPP file poissapp.m Comparison of Poisson and gaussian % Version of 5/24/96 % Plots distribution functions for specified parameter mu mu = input('Enter the parameter mu '); n = floor(1.5*mu); k = floor(mu-2*sqrt(mu)):floor(mu+2*sqrt(mu)); FP = cumsum(ipoisson(mu,0:n)); FG = gaussian(mu,mu,k); FC = gaussian(mu,mu,k-0.5); stairs(k,FP(k)) hold on plot(k,FG,'-.',k,FC,'o') hold off grid xlabel('t values') ylabel('Distribution function') title('Gaussian Approximation to Poisson Distribution') legend('Poisson','Gaussian','Adjusted Gaussian') disp('See Figure for results') Setup for simple random variables If a simple random variable $X$ is in canonical form, the distribution consists of the coefficients of the indicator functions (the values of $X$) and the probabilities of the corresponding events.
If $X$ is in a primitive form other than canonical, the csort operation is applied to the coefficients of the indicator functions and the probabilities of the corresponding events to obtain the distribution. If $Z = g(X)$ and $X$ is in a primitive form, then the value of $Z$ on the event in the partition associated with $t_i$ is $g(t_i)$. The distribution for Z is obtained by applying csort to the $g(t_i)$ and the $p_i$. Similarly, if $Z = g(X, Y)$ and the joint distribution is available, the value $g(t_i, u_j)$ is associated with $P(X = t_i, Y = u_j)$. The distribution for $Z$ is obtained by applying csort to the matrix of values and the corresponding matrix of probabilities. canonic.m Description of Code: canonic.m The procedure determines the distribution for a simple random variable in affine form, when the minterm probabilities are available. Input data are a row vector of coefficients for the indicator functions in the affine form (with the constant value last) and a row vector of the probabilities of the minterm generated by the events. Results consist of a row vector of values and a row vector of the corresponding probabilities. Answer % CANONIC file canonic.m Distribution for simple rv in affine form % Version of 6/12/95 % Determines the distribution for a simple random variable % in affine form, when the minterm probabilities are available. % Uses the m-functions mintable and csort. % The coefficient vector must contain the constant term. % If the constant term is zero, enter 0 in the last place. c = input(' Enter row vector of coefficients '); pm = input(' Enter row vector of minterm probabilities '); n = length(c) - 1; if 2^n ~= length(pm) error('Incorrect minterm probability vector length'); end M = mintable(n); % Provides a table of minterm patterns s = c(1:n)*M + c(n+1); % Evaluates X on each minterm [X,PX] = csort(s,pm); % s = values; pm = minterm probabilities XDBN = [X;PX]'; disp('Use row matrices X and PX for calculations') disp('Call for XDBN to view the distribution') canonicf.m Description of Code: canonicf.mfunction [x,px] = canonicf(c,pm) is a function version of canonic, which allows arbitrary naming of variables. Answer function [x,px] = canonicf(c,pm) % CANONICF [x,px] = canonicf(c,pm) Function version of canonic % Version of 6/12/95 % Allows arbitrary naming of variables n = length(c) - 1; if 2^n ~= length(pm) error('Incorrect minterm probability vector length'); end M = mintable(n); % Provides a table of minterm patterns s = c(1:n)*M + c(n+1); % Evaluates X on each minterm [x,px] = csort(s,pm); % s = values; pm = minterm probabilities jcalc.m Description of Code: jcalc.m Sets up for calculations for joint simple random variables. The matrix $P$ of $P(X = t_i, Y = u_j)$ is arranged as on the plane (i.e., values of $Y$ increase upward). The MATLAB function meshgrid is applied to the row matrix $X$ and the reversed row matrix for $Y$ to put an appropriate $X$-value and $Y$-value at each position. These are in the “calculating matrices” $t$ and $u$, respectively, which are used in determining probabilities and expectations of various functions of $t, u$. 
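Once jcalc has been run (its listing follows), probabilities and expectations of functions of the pair are obtained by array operations on t, u, and P; a minimal hedged sketch with illustrative quantities, assuming the m-function total from this collection is available:
% after running jcalc with P, X, Y entered:
PXgtY = total((t > u).*P)   % P(X > Y)
EXY = total(t.*u.*P)        % E[XY]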
Answer % JCALC file jcalc.m Calculation setup for joint simple rv % Version of 4/7/95 (Update of prompt and display 5/1/95) % Setup for calculations for joint simple random variables % The joint probabilities arranged as on the plane % (top row corresponds to largest value of Y) P = input('Enter JOINT PROBABILITIES (as on the plane) '); X = input('Enter row matrix of VALUES of X '); Y = input('Enter row matrix of VALUES of Y '); PX = sum(P); % probabilities for X PY = fliplr(sum(P')); % probabilities for Y [t,u] = meshgrid(X,fliplr(Y)); disp(' Use array operations on matrices X, Y, PX, PY, t, u, and P') jcalcf.m Description of Code: jcalcf.mfunction [x,y,t,u,px,py,p] = jcalcf(X,Y,P) is a function version of jcalc, which allows arbitrary naming of variables. Answer function [x,y,t,u,px,py,p] = jcalcf(X,Y,P) % JCALCF [x,y,t,u,px,py,p] = jcalcf(X,Y,P) Function version of jcalc % Version of 5/3/95 % Allows arbitrary naming of variables if sum(size(P) ~= [length(Y) length(X)]) > 0 error(' Incompatible vector sizes') end x = X; y = Y; p = P; px = sum(P); py = fliplr(sum(P')); [t,u] = meshgrid(X,fliplr(Y)); jointzw.m Description of Code: jointzw.m Sets up joint distribution for $Z = g(X, Y)$ and $W = h(X, Y)$ and provides calculating matrices as in jcalc. Inputs are $P, X$ and $Y$ as well as array expressions for $g(t, u)$ and $h(t, u)$. Outputs are matrices $Z, W, PZW$ for the joint distribution, marginal probabilities $PZ, PW$, and the calculating matrices $v, w$. Answer % JOINTZW file jointzw.m Joint dbn for two functions of (X,Y) % Version of 4/29/97 % Obtains joint distribution for % Z = g(X,Y) and W = h(X,Y) % Inputs P, X, and Y as well as array % expressions for g(t,u) and h(t,u) P = input('Enter joint prob for (X,Y) '); X = input('Enter values for X '); Y = input('Enter values for Y '); [t,u] = meshgrid(X,fliplr(Y)); G = input('Enter expression for g(t,u) '); H = input('Enter expression for h(t,u) '); [Z,PZ] = csort(G,P); [W,PW] = csort(H,P); r = length(W); c = length(Z); PZW = zeros(r,c); for i = 1:r for j = 1:c a = find((G==Z(j))&(H==W(i))); if ~isempty(a) PZW(i,j) = total(P(a)); end end end PZW = flipud(PZW); [v,w] = meshgrid(Z,fliplr(W)); if (G==t)&(H==u) disp(' ') disp(' Note: Z = X and W = Y') disp(' ') elseif G==t disp(' ') disp(' Note: Z = X') disp(' ') elseif H==u disp(' ') disp(' Note: W = Y') disp(' ') end disp('Use array operations on Z, W, PZ, PW, v, w, PZW') jdtest.m Description of Code: jdtest.m Tests a joint probability matrix $P$ for negative entries and unit total probability.. Answer function y = jdtest(P) % JDTEST y = jdtest(P) Tests P for unit total and negative elements % Version of 10/8/93 M = min(min(P)); S = sum(sum(P)); if M < 0 y = 'Negative entries'; elseif abs(1 - S) > 1e-7 y = 'Probabilities do not sum to one'; else y = 'P is a valid distribution'; end Setup for general random variables tappr.m Description of Code: tappr.m Uses the density function to set up a discrete approximation to the distribution for absolutely continuous random variable $X$. 
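After tappr (listed below) has produced the approximating pair X, PX, the calculations for simple random variables apply directly; a minimal hedged sketch with illustrative quantities:
% after running tappr for a chosen density on a chosen interval:
EX = dot(X,PX)              % approximate E[X]
VX = dot(X.^2,PX) - EX^2    % approximate Var[X]
p = sum((X > 0.5).*PX)      % approximate P(X > 0.5)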
Answer % TAPPR file tappr.m Discrete approximation to ac random variable % Version of 4/16/94 % Sets up discrete approximation to distribution for % absolutely continuous random variable X % Density is entered as a function of t r = input('Enter matrix [a b] of x-range endpoints '); n = input('Enter number of x approximation points '); d = (r(2) - r(1))/n; t = (r(1):d:r(2)-d) +d/2; PX = input('Enter density as a function of t '); PX = PX*d; PX = PX/sum(PX); X = t; disp('Use row matrices X and PX as in the simple case') tuappr.m Description of Code: tuappr.m Uses the joint density to set up discrete approximations to $X, Y, t, u$, and density. Answer % TUAPPR file tuappr.m Discrete approximation to joint ac pair % Version of 2/20/96 % Joint density entered as a function of t, u % Sets up discrete approximations to X, Y, t, u, and density rx = input('Enter matrix [a b] of X-range endpoints '); ry = input('Enter matrix [c d] of Y-range endpoints '); nx = input('Enter number of X approximation points '); ny = input('Enter number of Y approximation points '); dx = (rx(2) - rx(1))/nx; dy = (ry(2) - ry(1))/ny; X = (rx(1):dx:rx(2)-dx) + dx/2; Y = (ry(1):dy:ry(2)-dy) + dy/2; [t,u] = meshgrid(X,fliplr(Y)); P = input('Enter expression for joint density '); P = dx*dy*P; P = P/sum(sum(P)); PX = sum(P); PY = fliplr(sum(P')); disp('Use array operations on X, Y, PX, PY, t, u, and P') dfappr.m dfappr.m Approximate discrete distribution from distribution function entered as a function of $t$. Answer % DFAPPR file dfappr.m Discrete approximation from distribution function % Version of 10/21/95 % Approximate discrete distribution from distribution % function entered as a function of t r = input('Enter matrix [a b] of X-range endpoints '); s = input('Enter number of X approximation points '); d = (r(2) - r(1))/s; t = (r(1):d:r(2)-d) +d/2; m = length(t); f = input('Enter distribution function F as function of t '); f = [0 f]; PX = f(2:m+1) - f(1:m); PX = PX/sum(PX); X = t - d/2; disp('Distribution is in row matrices X and PX') acsetup.m Description of Code: acsetup.m Approximate distribution for absolutely continuous random variable $X$. Density is entered as a string variablefunction of $t$. Answer % ACSETUP file acsetup.m Discrete approx from density as string variable % Version of 10/22/94 % Approximate distribution for absolutely continuous rv X % Density is entered as a string variable function of t disp('DENSITY f is entered as a STRING VARIABLE.') disp('either defined previously or upon call.') r = input('Enter matrix [a b] of x-range endpoints '); s = input('Enter number of x approximation points '); d = (r(2) - r(1))/s; t = (r(1):d:r(2)-d) +d/2; m = length(t); f = input('Enter density as a function of t '); PX = eval(f); PX = PX*d; PX = PX/sum(PX); X = t; disp('Distribution is in row matrices X and PX') dfsetup.m Description of Code: dfsetup.m Approximate discrete distribution from distribution function entered as a string variable function of $t$. 
Answer % DFSETUP file dfsetup.m Discrete approx from string dbn function % Version of 10/21/95 % Approximate discrete distribution from distribution % function entered as string variable function of t disp('DISTRIBUTION FUNCTION F is entered as a STRING') disp('VARIABLE, either defined previously or upon call') r = input('Enter matrix [a b] of X-range endpoints '); s = input('Enter number of X approximation points '); d = (r(2) - r(1))/s; t = (r(1):d:r(2)-d) +d/2; m = length(t); F = input('Enter distribution function F as function of t '); f = eval(F); f = [0 f]; PX = f(2:m+1) - f(1:m); PX = PX/sum(PX); X = t - d/2; disp('Distribution is in row matrices X and PX') Setup for independent simple random variables MATLAB version 5.1 has provisions for multidimensional arrays, which make possible more direct implementation of icalc3 and icalc4. icalc.m Description of Code: icalc.m Calculation setup for an independent pair of simple random variables. Input consists of marginal distributions for $X, Y$. Output is joint distribution and calculating matrices $t, u$. Answer % ICALC file icalc.m Calculation setup for independent pair % Version of 5/3/95 % Joint calculation setup for independent pair X = input('Enter row matrix of X-values '); Y = input('Enter row matrix of Y-values '); PX = input('Enter X probabilities '); PY = input('Enter Y probabilities '); [a,b] = meshgrid(PX,fliplr(PY)); P = a.*b; % Matrix of joint independent probabilities [t,u] = meshgrid(X,fliplr(Y)); % t, u matrices for joint calculations disp(' Use array operations on matrices X, Y, PX, PY, t, u, and P') icalcf.m icalcf.m[x,y,t,u,px,py,p] = icalcf(X,Y,PX,PY) is a function version of icalc, which allows arbitrary naming of variables. Answer function [x,y,t,u,px,py,p] = icalcf(X,Y,PX,PY) % ICALCF [x,y,t,u,px,py,p] = icalcf(X,Y,PX,PY) Function version of icalc % Version of 5/3/95 % Allows arbitrary naming of variables x = X; y = Y; px = PX; py = PY; if length(X) ~= length(PX) error(' X and PX of different lengths') elseif length(Y) ~= length(PY) error(' Y and PY of different lengths') end [a,b] = meshgrid(PX,fliplr(PY)); p = a.*b; % Matrix of joint independent probabilities [t,u] = meshgrid(X,fliplr(Y)); % t, u matrices for joint calculations icalc3.m Description of Code: icalc3.m Calculation setup for an independent class of three simple random variables. Answer % ICALC3 file icalc3.m Setup for three independent rv % Version of 5/15/96 % Sets up for calculations for three % independent simple random variables % Uses m-functions rep, elrep, kronf X = input('Enter row matrix of X-values '); Y = input('Enter row matrix of Y-values '); Z = input('Enter row matrix of Z-values '); PX = input('Enter X probabilities '); PY = input('Enter Y probabilities '); PZ = input('Enter Z probabilities '); n = length(X); m = length(Y); s = length(Z); [t,u] = meshgrid(X,Y); t = rep(t,1,s); u = rep(u,1,s); v = elrep(Z,m,n); % t,u,v matrices for joint calculations P = kronf(PZ,kronf(PX,PY')); disp('Use array operations on matrices X, Y, Z,') disp('PX, PY, PZ, t, u, v, and P') icalc4.m Description of Code: icalc4.m Calculation setup for an independent class of four simple random variables. 
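As a quick illustration of these setups (the icalc4 listing follows), the function form icalcf can be called directly; a minimal hedged sketch with illustrative distributions, using the m-functions ibinom and total from this collection:
PX = ibinom(3,0.5,0:3);                     % X binomial (3, 0.5)
PY = [0.2 0.5 0.3];                         % Y on {0, 1, 2}
[x,y,t,u,px,py,p] = icalcf(0:3,0:2,PX,PY);
Esum = total((t + u).*p)                    % E[X + Y]; should equal E[X] + E[Y] = 1.5 + 1.1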
Answer % ICALC4 file icalc4.m Setup for four independent rv % Version of 5/15/96 % Sets up for calculations for four % independent simple random variables % Uses m-functions rep, elrep, kronf X = input('Enter row matrix of X-values '); Y = input('Enter row matrix of Y-values '); Z = input('Enter row matrix of Z-values '); W = input('Enter row matrix of W-values '); PX = input('Enter X probabilities '); PY = input('Enter Y probabilities '); PZ = input('Enter Z probabilities '); PW = input('Enter W probabilities '); n = length(X); m = length(Y); s = length(Z); r = length(W); [t,u] = meshgrid(X,Y); t = rep(t,r,s); u = rep(u,r,s); [v,w] = meshgrid(Z,W); v = elrep(v,m,n); % t,u,v,w matrices for joint calculations w = elrep(w,m,n); P = kronf(kronf(PZ,PW'),kronf(PX,PY')); disp('Use array operations on matrices X, Y, Z, W') disp('PX, PY, PZ, PW, t, u, v, w, and P') Calculations for random variables ddbn.m Description of Code: ddbn.m Uses the distribution of a simple random variable (or simple approximation) to plot a step graph for the distribution function $F_X$ Answer % DDBN file ddbn.m Step graph of distribution function % Version of 10/25/95 % Plots step graph of dbn function FX from % distribution of simple rv (or simple approximation) xc = input('Enter row matrix of VALUES '); pc = input('Enter row matrix of PROBABILITIES '); m = length(xc); FX = cumsum(pc); xt = [xc(1)-1-0.1*abs(xc(1)) xc xc(m)+1+0.1*abs(xc(m))]; FX = [0 FX 1]; % Artificial extension of range and domain stairs(xt,FX) % Plot of stairstep graph hold on plot(xt,FX,'o') % Marks values at jump hold off grid xlabel('t') ylabel('u = F(t)') title('Distribution Function') cdbn.m Description of Code: cdbn.m Plots a continuous graph of a distribution function of a simple random variable (or simple approximation). Answer % CDBN file cdbn.m Continuous graph of distribution function % Version of 1/29/97 % Plots continuous graph of dbn function FX from % distribution of simple rv (or simple approximation) xc = input('Enter row matrix of VALUES '); pc = input('Enter row matrix of PROBABILITIES '); m = length(xc); FX = cumsum(pc); xt = [xc(1)-0.01 xc xc(m)+0.01]; FX = [0 FX FX(m)]; % Artificial extension of range and domain plot(xt,FX) % Plot of continuous graph grid xlabel('t') ylabel('u = F(t)') title('Distribution Function') simple.m Description of Code: simple.m Calculates basic quantites for simple random variables from the distribution, input as row matrices $X$ and $PX$. Answer % SIMPLE file simple.m Calculates basic quantites for simple rv % Version of 6/18/95 X = input('Enter row matrix of X-values '); PX = input('Enter row matrix PX of X probabilities '); n = length(X); % dimension of X EX = dot(X,PX) % E[X] EX2 = dot(X.^2,PX) % E[X^2] VX = EX2 - EX^2 % Var[X] disp(' ') disp('Use row matrices X and PX for further calculations') jddbn.m Description of Code: jddbn.m Representation of joint distribution function for simple pair by obtaining the value of $F_{XY}$ at the lower left hand corners of each grid cell. Answer % JDDBN file jddbn.m Joint distribution function % Version of 10/7/96 % Joint discrete distribution function for % joint matrix P (arranged as on the plane). 
% Values at lower left hand corners of grid cells P = input('Enter joint probability matrix (as on the plane) '); FXY = flipud(cumsum(flipud(P))); FXY = cumsum(FXY')'; disp('To view corner values for joint dbn function, call for FXY') jsimple.m Description of Code: jsimple.m Calculates basic quantities for a joint simple pair $\{X, Y\}$ from the joint distrsibution $X, Y, P$ as in jcalc. Calculated quantities include means, variances, covariance, regression line, and regression curve (conditional expectation $E[Y|X = t]$ Answer % JSIMPLE file jsimple.m Calculates basic quantities for joint simple rv % Version of 5/25/95 % The joint probabilities are arranged as on the plane % (the top row corresponds to the largest value of Y) P = input('Enter JOINT PROBABILITIES (as on the plane) '); X = input('Enter row matrix of VALUES of X '); Y = input('Enter row matrix of VALUES of Y '); disp(' ') PX = sum(P); % marginal distribution for X PY = fliplr(sum(P')); % marginal distribution for Y XDBN = [X; PX]'; YDBN = [Y; PY]'; PT = idbn(PX,PY); D = total(abs(P - PT)); % test for difference if D > 1e-8 % to prevent roundoff error masking zero disp('{X,Y} is NOT independent') else disp('{X,Y} is independent') end disp(' ') [t,u] = meshgrid(X,fliplr(Y)); EX = total(t.*P) % E[X] EY = total(u.*P) % E[Y] EX2 = total((t.^2).*P) % E[X^2] EY2 = total((u.^2).*P) % E[Y^2] EXY = total(t.*u.*P) % E[XY] VX = EX2 - EX^2 % Var[X] VY = EY2 - EY^2 % Var[Y] cv = EXY - EX*EY; % Cov[X,Y] = E[XY] - E[X]E[Y] if abs(cv) > 1e-9 % to prevent roundoff error masking zero CV = cv else CV = 0 end a = CV/VX % regression line of Y on X is b = EY - a*EX % u = at + b R = CV/sqrt(VX*VY); % correlation coefficient rho disp(['The regression line of Y on X is: u = ',num2str(a),'t + ',num2str(b),]) disp(['The correlation coefficient is: rho = ',num2str(R),]) disp(' ') eYx = sum(u.*P)./PX; EYX = [X;eYx]'; disp('Marginal dbns are in X, PX, Y, PY; to view, call XDBN, YDBN') disp('E[Y|X = x] is in eYx; to view, call for EYX') disp('Use array operations on matrices X, Y, PX, PY, t, u, and P') japprox.m Description of Code: japprox.m Assumes discrete setup and calculates basic quantities for a pair of random variables as in jsimple. Plots the regression line and regression curve. Answer % JAPPROX file japprox.m Basic quantities for ac pair {X,Y} % Version of 5/7/96 % Assumes tuappr has set X, Y, PX, PY, t, u, P EX = total(t.*P) % E[X] EY = total(u.*P) % E[Y] EX2 = total(t.^2.*P) % E[X^2] EY2 = total(u.^2.*P) % E[Y^2] EXY = total(t.*u.*P) % E[XY] VX = EX2 - EX^2 % Var[X] VY = EY2 - EY^2 % Var[Y] cv = EXY - EX*EY; % Cov[X,Y] = E[XY] - E[X]E[Y] if abs(cv) > 1e-9 % to prevent roundoff error masking zero CV = cv else CV = 0 end a = CV/VX % regression line of Y on X is b = EY - a*EX % u = at + b R = CV/sqrt(VX*VY); disp(['The regression line of Y on X is: u = ',num2str(a),'t + ',num2str(b),]) disp(['The correlation coefficient is: rho = ',num2str(R),]) disp(' ') eY = sum(u.*P)./sum(P); % eY(t) = E[Y|X = t] RL = a*X + b; plot(X,RL,X,eY,'-.') grid title('Regression line and Regression curve') xlabel('X values') ylabel('Y values') legend('Regression line','Regression curve') clear eY % To conserve memory clear RL disp('Calculate with X, Y, t, u, P, as in joint simple case') Calculations and tests for independent random variables mgsum.m Description of Code: mgsum.mfunction [z,pz] = mgsum(x,y,px,py) determines the distribution for the sum of an independent pair of simple random variables from their distributions. 
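A minimal check of mgsum (its listing follows): the sum of two independent fair dice should give the familiar triangular distribution; the values and probabilities here are illustrative:
[z,pz] = mgsum(1:6,1:6,ones(1,6)/6,ones(1,6)/6);
disp([z; 36*pz]')   % z runs from 2 to 12 and 36*pz should be 1 2 3 4 5 6 5 4 3 2 1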
Answer function [z,pz] = mgsum(x,y,px,py) % MGSUM [z,pz] = mgsum(x,y,px,py) Sum of two independent simple rv % Version of 5/6/96 % Distribution for the sum of two independent simple random variables % x is a vector (row or column) of X values % y is a vector (row or column) of Y values % px is a vector (row or column) of X probabilities % py is a vector (row or column) of Y probabilities % z and pz are row vectors [a,b] = meshgrid(x,y); t = a+b; [c,d] = meshgrid(px,py); p = c.*d; [z,pz] = csort(t,p); mgsum3.m Description of Code: mgsum3.mfunction [w,pw] = mgsum3(x,y,z,px,py,pz) extends mgsum to three random variables by repeated application of mgsum. Similarly for mgsum4.m. Answer function [w,pw] = mgsum3(x,y,z,px,py,pz) % MGSUM3 [w,pw] = mgsum3(x,y,z,px,py,y) Sum of three independent simple rv % Version of 5/2/96 % Distribution for the sum of three independent simple random variables % x is a vector (row or column) of X values % y is a vector (row or column) of Y values % z is a vector (row or column) of Z values % px is a vector (row or column) of X probabilities % py is a vector (row or column) of Y probabilities % pz is a vector (row or column) of Z probabilities % W and pW are row vectors [a,pa] = mgsum(x,y,px,py); [w,pw] = mgsum(a,z,pa,pz); mgnsum.m Description of Code: mgnsum.mfunction [z,pz] = mgnsum(X,P) determines the distribution for a sum of $n$ independent random variables. $X$ an $n$-row matrix of $X$-values and $n$-row matrix of $P$-values (padded with zeros, if necessary, to make all rows the same length. Answer function [z,pz] = mgnsum(X,P) % MGNSUM [z,pz] = mgnsum(X,P) Sum of n independent simple rv % Version of 5/16/96 % Distribution for the sum of n independent simple random variables % X an n-row matrix of X-values % P an n-row matrix of P-values % padded with zeros, if necessary % to make all rows the same length [n,r] = size(P); z = 0; pz = 1; for i = 1:n x = X(i,:); p = P(i,:); x = x(find(p>0)); p = p(find(p>0)); [z,pz] = mgsum(z,x,pz,p); end mgsumn.m Description of Code: mgsumn.mfunction [z,pz] = mgsumn(varargin) is an alternate to mgnsum, utilizing varargin in MATLAB version 5.1. The call is of the form [z,pz] = mgsumn([x1;p1],[x2;p2], ..., [xn;pn]). Answer function [z,pz] = mgsumn(varargin) % MGSUMN [z,pz] = mgsumn([x1;p1],[x2;p2], ..., [xn;pn]) % Version of 6/2/97 Uses MATLAB version 5.1 % Sum of n independent simple random variables % Utilizes distributions in the form [x;px] (two rows) % Iterates mgsum n = length(varargin); % The number of distributions z = 0; % Initialization pz = 1; for i = 1:n % Repeated use of mgsum [z,pz] = mgsum(z,varargin{i}(1,:),pz,varargin{i}(2,:)); end diidsum.m Description of Code: diidsum.mfunction [x,px] = diidsum(X,PX,n) determines the sum of $n$ iid simple random variables, with the common distribution $X$, $PX$ Answer function [x,px] = diidsum(X,PX,n) % DIIDSUM [x,px] = diidsum(X,PX,n) Sum of n iid simple random variables % Version of 10/14/95 Input rev 5/13/97 % Sum of n iid rv with common distribution X, PX % Uses m-function mgsum x = X; % Initialization px = PX; for i = 1:n-1 [x,px] = mgsum(x,X,px,PX); end itest.m Description of Code: itest.m Tests for independence the matrix $P$ of joint probabilities for a simple pair $\{X, Y\}$ of random variables. 
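As a hedged sketch of diidsum (listed above), the number of successes in four fair Bernoulli trials should reproduce the binomial (4, 0.5) distribution (the itest listing follows):
[x,px] = diidsum([0 1],[0.5 0.5],4);
disp([x; 16*px]')   % x = 0, 1, ..., 4 and 16*px should be 1 4 6 4 1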
Answer % ITEST file itest.m Tests P for independence % Version of 5/9/95 % Tests for independence the matrix of joint % probabilities for a simple pair {X,Y} pt = input('Enter matrix of joint probabilities '); disp(' ') px = sum(pt); % Marginal probabilities for X py = sum(pt'); % Marginal probabilities for Y (reversed) [a,b] = meshgrid(px,py); PT = a.*b; % Joint independent probabilities D = abs(pt - PT) > 1e-9; % Threshold set above roundoff if total(D) > 0 disp('The pair {X,Y} is NOT independent') disp('To see where the product rule fails, call for D') else disp('The pair {X,Y} is independent') end idbn.m Description of Code: idbn.mfunction p = idbn(px,py) uses marginal probabilities to determine the joint probability matrix (arranged as on the plane) for an independent pair of simple random variables. Answer function p = idbn(px,py) % IDBN p = idbn(px,py) Matrix of joint independent probabilities % Version of 5/9/95 % Determines joint probability matrix for two independent % simple random variables (arranged as on the plane) [a,b] = meshgrid(px,fliplr(py)); p = a.*b isimple.m Description of Code: isimple.m Takes as inputs the marginal distributions for an independent pair $\{X, Y\}$ of simple random variables. Sets up the joint distribution probability matrix $P$ as in idbn, and forms the calculating matrices $t, u$ as in jcalc. Calculates basic quantities and makes available matrices $X$, $Y$, $PX$, $PY$, $P$, $t$, $u$, for additional calculations. Answer % ISIMPLE file isimple.m Calculations for independent simple rv % Version of 5/3/95 X = input('Enter row matrix of X-values '); Y = input('Enter row matrix of Y-values '); PX = input('Enter X probabilities '); PY = input('Enter Y probabilities '); [a,b] = meshgrid(PX,fliplr(PY)); P = a.*b; % Matrix of joint independent probabilities [t,u] = meshgrid(X,fliplr(Y)); % t, u matrices for joint calculations EX = dot(X,PX) % E[X] EY = dot(Y,PY) % E[Y] VX = dot(X.^2,PX) - EX^2 % Var[X] VY = dot(Y.^2,PY) - EY^2 % Var[Y] disp(' Use array operations on matrices X, Y, PX, PY, t, u, and P') Quantile functions for bounded distributions dquant.m Description of Code: dquant.mfunction t = dquant(X,PX,U) determines the values of the quantile function for a simple random variable with distribution $X$, $PX$ at the probability values in row vector $U$. The probability vector $U$ is often determined by a random number generator. Answer function t = dquant(X,PX,U) % DQUANT t = dquant(X,PX,U) Quantile function for a simple random variable % Version of 10/14/95 % U is a vector of probabilities m = length(X); n = length(U); F = [0 cumsum(PX)+1e-12]; F(m+1) = 1; % Makes maximum value exactly one if U(n) >= 1 % Prevents improper values of probability U U(n) = 1; end if U(1) <= 0 U(1) = 1e-9; end f = rowcopy(F,n); % n rows of F u = colcopy(U,m); % m columns of U t = X*((f(:,1:m) < u)&(u <= f(:,2:m+1)))'; dquanplot.m Description of Code: dquanplot.m Plots as a stairs graph the quantile function for a simple random variable $X$. The plot is the values of $X$ versus the distribution function $F_X$. 
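Since dquant (listed above) maps probabilities into values of the random variable, feeding it uniform random numbers simulates the distribution, which is how dsample and qsample below use it; a minimal hedged sketch with an illustrative distribution (the dquanplot listing follows):
U = rand(1,10000);                           % uniform random probabilities
T = dquant([1 3 7],[0.2 0.5 0.3],U);         % simulated sample of size 10000
f = [sum(T==1) sum(T==3) sum(T==7)]/10000    % relative frequencies, near [0.2 0.5 0.3]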
Answer % DQUANPLOT file dquanplot.m Plot of quantile function for a simple rv % Version of 7/6/95 % Uses stairs to plot the inverse of FX X = input('Enter VALUES for X '); PX = input('Enter PROBABILITIES for X '); m = length(X); F = [0 cumsum(PX)]; XP = [X X(m)]; stairs(F,XP) grid title('Plot of Quantile Function') xlabel('u') ylabel('t = Q(u)') hold on plot(F(2:m+1),X,'o') % Marks values at jumps hold off dsample.m Description of Code: dsample.m Calculates a sample from a discrete distribution, determines the relative frequencies of values, and compares with actual probabilities. Input consists of value and probability matrices for $X$ and the sample size $n$. A matrix $U$ is determined by a random number generator, and the m-function dquant is used to calculate the corresponding sample values. Various data on the sample are calculated and displayed. Answer % DSAMPLE file dsample.m Simulates sample from discrete population % Version of 12/31/95 (Display revised 3/24/97) % Relative frequencies vs probabilities for % sample from discrete population distribution X = input('Enter row matrix of VALUES '); PX = input('Enter row matrix of PROBABILITIES '); n = input('Sample size n '); U = rand(1,n); T = dquant(X,PX,U); [x,fr] = csort(T,ones(1,length(T))); disp(' Value Prob Rel freq') disp([x; PX; fr/n]') ex = sum(T)/n; EX = dot(X,PX); vx = sum(T.^2)/n - ex^2; VX = dot(X.^2,PX) - EX^2; disp(['Sample average ex = ',num2str(ex),]) disp(['Population mean E[X] = ',num2str(EX),]) disp(['Sample variance vx = ',num2str(vx),]) disp(['Population variance Var[X] = ',num2str(VX),]) quanplot.m Description of Code: quanplot.m Plots the quantile function for a distribution function $F_X$. Assumes the procedure dfsetup or acsetup has been run. A suitable set $U$ of probability values is determined and the m-function dquant is used to determine corresponding values of the quantile function. The results are plotted. Answer % QUANPLOT file quanplot.m Plots quantile function for dbn function % Version of 2/2/96 % Assumes dfsetup or acsetup has been run % Uses m-function dquant X = input('Enter row matrix of values '); PX = input('Enter row matrix of probabilities '); h = input('Probability increment h '); U = h:h:1; T = dquant(X,PX,U); U = [0 U 1]; Te = X(m) + abs(X(m))/20; T = [X(1) T Te]; plot(U,T) % Plot rather than stairs for general case grid title('Plot of Quantile Function') xlabel('u') ylabel('t = Q(u)') qsample.m Description of Code: qsample.m Simulates a sample for a given population density. Determines sample parameters and approximate population parameters. Assumes dfsetup or acsetup has been run. Takes as input the distribution matrices $X, PX$ and the sample size $n$. Uses a random number generator to obtain the probability matrix $U$ and uses the m-function dquant to determine the sample. Assumes dfsetup or acsetup has been run. Answer % QSAMPLE file qsample.m Simulates sample for given population density % Version of 1/31/96 % Determines sample parameters % and approximate population parameters. 
% Assumes dfsetup or acsetup has been run X = input('Enter row matrix of VALUES '); PX = input('Enter row matrix of PROBABILITIES '); n = input('Sample size n = '); m = length(X); U = rand(1,n); T = dquant(X,PX,U); ex = sum(T)/n; EX = dot(X,PX); vx = sum(T.^2)/n - ex^2; VX = dot(X.^2,PX) - EX^2; disp('The sample is in column vector T') disp(['Sample average ex = ', num2str(ex),]) disp(['Approximate population mean E(X) = ',num2str(EX),]) disp(['Sample variance vx = ',num2str(vx),]) disp(['Approximate population variance V(X) = ',num2str(VX),]) targetset.m Description of Code: targetset.m Setup for arrival at a target set of values. Used in conjunction with the m-procedure targetrun to determine the number of trials needed to visit $k$ of a specified set of target values. Input consists of the distribution matrices $X, PX$ and the specified set $E$ of target values. Answer % TARGETSET file targetset.m Setup for sample arrival at target set % Version of 6/24/95 X = input('Enter population VALUES '); PX = input('Enter population PROBABILITIES '); ms = length(X); x = 1:ms; % Value indices disp('The set of population values is') disp(X); E = input('Enter the set of target values '); ne = length(E); e = zeros(1,ne); for i = 1:ne e(i) = dot(E(i) == X,x); % Target value indices end F = [0 cumsum(PX)]; A = F(1:ms); B = F(2:ms+1); disp('Call for targetrun') targetrun.m Description of Code: targetrun.m Assumes the m-file targetset has provided the basic data. Input consists of the number $r$ of repetitions and the number $k$ of the target states to visit. Calculates and displays various results. Answer % TARGETRUN file targetrun.m Number of trials to visit k target values % Version of 6/24/95 Rev for Version 5.1 1/30/98 % Assumes the procedure targetset has been run. r = input('Enter the number of repetitions '); disp('The target set is') disp(E) ks = input('Enter the number of target values to visit '); if isempty(ks) ks = ne; end if ks > ne ks = ne; end clear T % Trajectory in value indices (reset) R0 = zeros(1,ms); % Indicator for target value indices R0(e) = ones(1,ne); S = zeros(1,r); % Number of trials for each run (reset) for k = 1:r R = R0; i = 1; while sum(R) > ne - ks u = rand(1,1); s = ((A < u)&(u <= B))*x'; if R(s) == 1 % Deletes indices as values reached R(s) = 0; end T(i) = s; i = i+1; end S(k) = i-1; end if r == 1 disp(['The number of trials to completion is ',int2str(i-1),]) disp(['The initial value is ',num2str(X(T(1))),]) disp(['The terminal value is ',num2str(X(T(i-1))),]) N = 1:i-1; TR = [N;X(T)]'; disp('To view the trajectory, call for TR') else [t,f] = csort(S,ones(1,r)); D = [t;f]'; p = f/r; AV = dot(t,p); SD = sqrt(dot(t.^2,p) - AV^2); MN = min(t); MX = max(t); disp(['The average completion time is ',num2str(AV),]) disp(['The standard deviation is ',num2str(SD),]) disp(['The minimum completion time is ',int2str(MN),]) disp(['The maximum completion time is ',int2str(MX),]) disp(' ') disp('To view a detailed count, call for D.') disp('The first column shows the various completion times;') disp('the second column shows the numbers of trials yielding those times') plot(t,cumsum(p)) grid title('Fraction of Runs t Steps or Less') ylabel('Fraction of runs') xlabel('t = number of steps to complete run') end Compound demand The following pattern provides a useful model in many situations. Consider $D = \sum_{k = 0}^{N} Y_k$ where $Y_0 = 0$, and the class $\{Y_k: 1 \le k\}$ is iid, independent of the counting random variable $N$. 
One natural interpretation is to consider $N$ to be the number of customers in a store and $Y_k$ the amount purchased by the $k$th customer. Then $D$ is the total demand of the actual customers. Hence, we call $D$ the compound demand. gend.m Description of Code: gend.m Uses coefficients of the generating functions for $N$ and $Y$ to calculate, in the integer case, the marginal distribution for the compound demand $D$ and the joint distribution for $\{N, D\}$ Answer % GEND file gend.m Marginal and joint dbn for integer compound demand % Version of 5/21/97 % Calculates marginal distribution for compound demand D % and joint distribution for {N,D} in the integer case % Do not forget zero coefficients for missing powers % in the generating functions for N, Y disp('Do not forget zero coefficients for missing powers') gn = input('Enter gen fn COEFFICIENTS for gN '); gy = input('Enter gen fn COEFFICIENTS for gY '); n = length(gn) - 1; % Highest power in gN m = length(gy) - 1; % Highest power in gY P = zeros(n + 1,n*m + 1); % Base for generating P y = 1; % Initialization P(1,1) = gn(1); % First row of P (P(N=0) in the first position) for i = 1:n % Row by row determination of P y = conv(y,gy); % Successive powers of gy P(i+1,1:i*m+1) = y*gn(i+1); % Successive rows of P end PD = sum(P); % Probability for each possible value of D a = find(gn); % Location of nonzero N probabilities b = find(PD); % Location of nonzero D probabilities P = P(a,b); % Removal of zero rows and columns P = rot90(P); % Orientation as on the plane N = 0:n; N = N(a); % N values with positive probabilites PN = gn(a); % Positive N probabilities Y = 0:m; % All possible values of Y Y = Y(find(gy)); % Y values with positive probabilities PY = gy(find(gy)); % Positive Y proabilities D = 0:n*m; % All possible values of D PD = PD(b); % Positive D probabilities D = D(b); % D values with positive probabilities gD = [D; PD]'; % Display combination disp('Results are in N, PN, Y, PY, D, PD, P') disp('May use jcalc or jcalcf on N, D, P') disp('To view distribution for D, call for gD') gendf.m Description of Code: gendf.mfunction [d,pd] = gendf(gn,gy) is a function version of gend, which allows arbitrary naming of the variables. Calculates the distribution for $D$, but not the joint distribution for $\{N, D\}$ Answer function [d,pd] = gendf(gn,gy) % GENDF [d,pd] = gendf(gN,gY) Function version of gend.m % Calculates marginal for D in the integer case % Version of 5/21/97 % Do not forget zero coefficients for missing powers % in the generating functions for N, Y n = length(gn) - 1; % Highest power in gN m = length(gy) - 1; % Highest power in gY P = zeros(n + 1,n*m + 1); % Base for generating P y = 1; % Initialization P(1,1) = gn(1); % First row of P (P(N=0) in the first position) for i = 1:n % Row by row determination of P y = conv(y,gy); % Successive powers of gy P(i+1,1:i*m+1) = y*gn(i+1); % Successive rows of P end PD = sum(P); % Probability for each possible value of D D = 0:n*m; % All possible values of D b = find(PD); % Location of nonzero D probabilities d = D(b); % D values with positive probabilities pd = PD(b); % Positive D probabilities mgd.m Description of Code: mgd.m Uses coefficients for the generating function for $N$ and the distribution for simple $Y$ to calculate the distribution for the compound demand. 
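A minimal hedged sketch of the integer case using gendf (listed above), with illustrative generating-function coefficients (the mgd listing follows):
gN = [0.25 0.25 0.25 0.25];   % N uniform on {0, 1, 2, 3}
gY = [0 0.6 0.4];             % P(Y = 1) = 0.6, P(Y = 2) = 0.4; note the zero coefficient for the missing power
[d,pd] = gendf(gN,gY);        % distribution for the compound demand D
disp([d; pd]')                % D takes the values 0, 1, ..., 6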
Answer % MGD file mgd.m Moment generating function for compound demand % Version of 5/19/97 % Uses m-functions csort, mgsum disp('Do not forget zero coefficients for missing') disp('powers in the generating function for N') disp(' ') g = input('Enter COEFFICIENTS for gN '); y = input('Enter VALUES for Y '); p = input('Enter PROBABILITIES for Y '); n = length(g); % Initialization a = 0; b = 1; D = a; PD = g(1); for i = 2:n [a,b] = mgsum(y,a,p,b); D = [D a]; PD = [PD b*g(i)]; [D,PD] = csort(D,PD); end r = find(PD>1e-13); D = D(r); % Values with positive probability PD = PD(r); % Corresponding probabilities mD = [D; PD]'; % Display details disp('Values are in row matrix D; probabilities are in PD.') disp('To view the distribution, call for mD.') mgdf.m Description of Code: mgdf.mfunction [d,pd] = mgdf(pn,y,py) is a function version of mgd, which allows arbitrary naming of the variables. The input matrix $pn$ is the coefficient matrix for the counting random variable generating function. Zeros for the missing powers must be included. The matrices $y, py$ are the actual values and probabilities of the demand random variable. Answer function [d,pd] = mgdf(pn,y,py) % MGDF [d,pd] = mgdf(pn,y,py) Function version of mgD % Version of 5/19/97 % Uses m-functions mgsum and csort % Do not forget zero coefficients for missing % powers in the generating function for N n = length(pn); % Initialization a = 0; b = 1; d = a; pd = pn(1); for i = 2:n [a,b] = mgsum(y,a,py,b); d = [d a]; pd = [pd b*pn(i)]; [d,pd] = csort(d,pd); end a = find(pd>1e-13); % Location of positive probabilities pd = pd(a); % Positive probabilities d = d(a); % D values with positive probability randbern.m Description of Code: randbern.m Let S be the number of successes in a random number $N$ of Bernoulli trials, with probability $p$ of success on each trial. The procedure randbern takes as inputs the probability $p$ of success and the distribution matrices $N$, $PN$ for the counting random variable $N$ and calculates the joint distribution for $\{N, S\}$ and the marginal distribution for $S$. Answer % RANDBERN file randbern.m Random number of Bernoulli trials % Version of 12/19/96; notation modified 5/20/97 % Joint and marginal distributions for a random number of Bernoulli trials % N is the number of trials % S is the number of successes p = input('Enter the probability of success '); N = input('Enter VALUES of N '); PN = input('Enter PROBABILITIES for N '); n = length(N); m = max(N); S = 0:m; P = zeros(n,m+1); for i = 1:n P(i,1:N(i)+1) = PN(i)*ibinom(N(i),p,0:N(i)); end PS = sum(P); P = rot90(P); disp('Joint distribution N, S, P, and marginal PS') Simulation of Markov systems inventory1.m Description of Code: inventory1.m Calculates the transition matrix for an $(m, M)$ inventory policy. At the end of each period, if the stock is less than a reorder point $m$, stock is replenished to the level $M$. Demand in each period is an integer valued random variable $Y$. Input consists of the parameters $m, M$ and the distribution for Y as a simple random variable (or a discrete approximation).
Answer % INVENTORY1 file inventory1.m Generates P for (m,M) inventory policy % Version of 1/27/97 % Data for transition probability calculations % for (m,M) inventory policy M = input('Enter value M of maximum stock '); m = input('Enter value m of reorder point '); Y = input('Enter row vector of demand values '); PY = input('Enter demand probabilities '); states = 0:M; ms = length(states); my = length(Y); % Calculations for determining P [y,s] = meshgrid(Y,states); T = max(0,M-y).*(s < m) + max(0,s-y).*(s >= m); P = zeros(ms,ms); for i = 1:ms [a,b] = meshgrid(T(i,:),states); P(i,:) = PY*(a==b)'; end disp('Result is in matrix P') branchp.m Description of Code: branchp.m Calculates the transition matrix for a simple branching process with a specified maximum population. Input consists of the maximum population value $M$ and the coefficient matrix for the generating function for the individual propagation random variables $Z_i$. The latter matrix must include zero coefficients for missing powers. Answer % BRANCHP file branchp.m Transition P for simple branching process % Version of 7/25/95 % Calculates transition matrix for a simple branching % process with specified maximum population. disp('Do not forget zero probabilities for missing values of Z') PZ = input('Enter PROBABILITIES for individuals '); M = input('Enter maximum allowable population '); mz = length(PZ) - 1; EZ = dot(0:mz,PZ); disp(['The average individual propagation is ',num2str(EZ),]) P = zeros(M+1,M+1); Z = zeros(M,M*mz+1); k = 0:M*mz; a = min(M,k); z = 1; P(1,1) = 1; for i = 1:M % Operation similar to gend z = conv(PZ,z); Z(i,1:i*mz+1) = z; [t,p] = csort(a,Z(i,:)); P(i+1,:) = p; end disp('The transition matrix is P') disp('To study the evolution of the process, call for branchdbn') chainset.m Description of Code: chainset.m Sets up for simulation of Markov chains. Inputs are the transition matrix P the set of states, and an optional set of target states. The chain generating procedures listed below assume this procedure has been run. Answer % CHAINSET file chainset.m Setup for simulating Markov chains % Version of 1/2/96 Revise 7/31/97 for version 4.2 and 5.1 P = input('Enter the transition matrix '); ms = length(P(1,:)); MS = 1:ms; states = input('Enter the states if not 1:ms '); if isempty(states) states = MS; end disp('States are') disp([MS;states]') PI = input('Enter the long-run probabilities '); F = [zeros(1,ms); cumsum(P')]'; A = F(:,MS); B = F(:,MS+1); e = input('Enter the set of target states '); ne = length(e); E = zeros(1,ne); for i = 1:ne E(i) = MS(e(i)==states); end disp(' ') disp('Call for for appropriate chain generating procedure') mchain.m Description of Code: mchain.m Assumes chainset has been run. Generates trajectory of specified length, with specified initial state. 
Answer % MCHAIN file mchain.m Simulation of Markov chains % Version of 1/2/96 Revised 7/31/97 for version 4.2 and 5.1 % Assumes the procedure chainset has been run n = input('Enter the number n of stages '); st = input('Enter the initial state '); if ~isempty(st) s = MS(st==states); else s = 1; end T = zeros(1,n); % Trajectory in state numbers U = rand(1,n); for i = 1:n T(i) = s; s = ((A(s,:) < U(i))&(U(i) <= B(s,:)))*MS'; end N = 0:n-1; tr = [N;states(T)]'; n10 = min(n,11); TR = tr(1:n10,:); f = ones(1,n)/n; [sn,p] = csort(T,f); if isempty(PI) disp(' State Frac') disp([states; p]') else disp(' State Frac PI') disp([states; p; PI]') end disp('To view the first part of the trajectory of states, call for TR') arrival.m Description of Code: arrival.m Assumes chainset has been run. Calculates repeatedly the arrival time to a prescribed set of states. Answer % ARRIVAL file arrival.m Arrival time to a set of states % Version of 1/2/96 Revised 7/31/97 for version 4.2 and 5.1 % Calculates repeatedly the arrival % time to a prescribed set of states. % Assumes the procedure chainset has been run. r = input('Enter the number of repetitions '); disp('The target state set is:') disp(e) st = input('Enter the initial state '); if ~isempty(st) s1 = MS(st==states); % Initial state number else s1 = 1; end clear T % Trajectory in state numbers (reset) S = zeros(1,r); % Arrival time for each rep (reset) TS = zeros(1,r); % Terminal state number for each rep (reset) for k = 1:r R = zeros(1,ms); % Indicator for target state numbers R(E) = ones(1,ne); % reset for target state numbers s = s1; T(1) = s; i = 1; while R(s) ~= 1 % While s is not a target state number u = rand(1,1); s = ((A(s,:) < u)&(u <= B(s,:)))*MS'; i = i+1; T(i) = s; end S(k) = i-1; % i is the number of stages; i-1 is time TS(k) = T(i); end [ts,ft] = csort(TS,ones(1,r)); % ts = terminal state numbers ft = frequencies fts = ft/r; % Relative frequency of each ts [a,at] = csort(TS,S); % at = arrival time for each ts w = at./ft; % Average arrival time for each ts RES = [states(ts); fts; w]'; disp(' ') if r == 1 disp(['The arrival time is ',int2str(i-1),]) disp(['The state reached is ',num2str(states(ts)),]) N = 0:i-1; TR = [N;states(T)]'; disp('To view the trajectory of states, call for TR') else disp(['The result of ',int2str(r),' repetitions is:']) disp('Term state Rel Freq Av time') disp(RES) disp(' ') [t,f] = csort(S,ones(1,r)); % t = arrival times f = frequencies p = f/r; % Relative frequency of each t dbn = [t; p]'; AV = dot(t,p); SD = sqrt(dot(t.^2,p) - AV^2); MN = min(t); MX = max(t); disp(['The average arrival time is ',num2str(AV),]) disp(['The standard deviation is ',num2str(SD),]) disp(['The minimum arrival time is ',int2str(MN),]) disp(['The maximum arrival time is ',int2str(MX),]) disp('To view the distribution of arrival times, call for dbn') disp('To plot the arrival time distribution, call for plotdbn') end recurrence.m Description of Code: recurrence.m Assumes chainset has been run. Calculates repeatedly the recurrence time to a prescribed set of states, if initial state is in the set; otherwise calculates the arrival time. Answer % RECURRENCE file recurrence.m Recurrence time to a set of states % Version of 1/2/96 Revised 7/31/97 for version 4.2 and 5.1 % Calculates repeatedly the recurrence time % to a prescribed set of states, if initial % state is in the set; otherwise arrival time. % Assumes the procedure chainset has been run. 
r = input('Enter the number of repititions '); disp('The target state set is:') disp(e) st = input('Enter the initial state '); if ~isempty(st) s1 = MS(st==states); % Initial state number else s1 = 1; end clear T % Trajectory in state numbers (reset) S = zeros(1,r); % Recurrence time for each rep (reset) TS = zeros(1,r); % Terminal state number for each rep (reset) for k = 1:r R = zeros(1,ms); % Indicator for target state numbers R(E) = ones(1,ne); % reset for target state numbers s = s1; T(1) = s; i = 1; if R(s) == 1 u = rand(1,1); s = ((A(s,:) < u)&(u <= B(s,:)))*MS'; i = i+1; T(i) = s; end while R(s) ~= 1 % While s is not a target state number u = rand(1,1); s = ((A(s,:) < u)&(u <= B(s,:)))*MS'; i = i+1; T(i) = s; end S(k) = i-1; % i is the number of stages; i-1 is time TS(k) = T(i); end [ts,ft] = csort(TS,ones(1,r)); % ts = terminal state numbers ft = frequencies fts = ft/r; % Relative frequency of each ts [a,tt] = csort(TS,S); % tt = total time for each ts w = tt./ft; % Average time for each ts RES = [states(ts); fts; w]'; disp(' ') if r == 1 disp(['The recurrence time is ',int2str(i-1),]) disp(['The state reached is ',num2str(states(ts)),]) N = 0:i-1; TR = [N;states(T)]'; disp('To view the trajectory of state numbers, call for TR') else disp(['The result of ',int2str(r),' repetitions is:']) disp('Term state Rel Freq Av time') disp(RES) disp(' ') [t,f] = csort(S,ones(1,r)); % t = recurrence times f = frequencies p = f/r; % Relative frequency of each t dbn = [t; p]'; AV = dot(t,p); SD = sqrt(dot(t.^2,p) - AV^2); MN = min(t); MX = max(t); disp(['The average recurrence time is ',num2str(AV),]) disp(['The standard deviation is ',num2str(SD),]) disp(['The minimum recurrence time is ',int2str(MN),]) disp(['The maximum recurrence time is ',int2str(MX),]) disp('To view the distribution of recurrence times, call for dbn') disp('To plot the recurrence time distribution, call for plotdbn') end kvis.m Description of Code: kvis.m Assumes chainset has been run. Calculates repeatedly the time to complete visits to a specified $k$ of the states in a prescribed set. Answer % KVIS file kvis.m Time to complete k visits to a set of states % Version of 1/2/96 Revised 7/31/97 for version 4.2 and 5.1 % Calculates repeatedly the time to complete % visits to k of the states in a prescribed set. % Default is visit to all the target states. % Assumes the procedure chainset has been run. 
r = input('Enter the number of repetitions '); disp('The target state set is:') disp(e) ks = input('Enter the number of target states to visit '); if isempty(ks) ks = ne; end if ks > ne ks = ne; end st = input('Enter the initial state '); if ~isempty(st) s1 = MS(st==states); % Initial state number else s1 = 1; end disp(' ') clear T % Trajectory in state numbers (reset) R0 = zeros(1,ms); % Indicator for target state numbers R0(E) = ones(1,ne); % reset S = zeros(1,r); % Terminal transitions for each rep (reset) for k = 1:r R = R0; s = s1; if R(s) == 1 R(s) = 0; end i = 1; T(1) = s; while sum(R) > ne - ks u = rand(1,1); s = ((A(s,:) < u)&(u <= B(s,:)))*MS'; if R(s) == 1 R(s) = 0; end i = i+1; T(i) = s; end S(k) = i-1; end if r == 1 disp(['The time for completion is ',int2str(i-1),]) N = 0:i-1; TR = [N;states(T)]'; disp('To view the trajectory of states, call for TR') else [t,f] = csort(S,ones(1,r)); p = f/r; D = [t;f]'; AV = dot(t,p); SD = sqrt(dot(t.^2,p) - AV^2); MN = min(t); MX = max(t); disp(['The average completion time is ',num2str(AV),]) disp(['The standard deviation is ',num2str(SD),]) disp(['The minimum completion time is ',int2str(MN),]) disp(['The maximum completion time is ',int2str(MX),]) disp(' ') disp('To view a detailed count, call for D.') disp('The first column shows the various completion times;') disp('the second column shows the numbers of trials yielding those times') end plotdbn Description of Code: plotdbn Used after m-procedures arrival or recurrence to plot arrival or recurrence time distribution. Answer % PLOTDBN file plotdbn.m % Version of 1/23/98 % Plot arrival or recurrence time dbn % Use after procedures arrival or recurrence % to plot arrival or recurrence time distribution plot(t,p,'-',t,p,'+') grid title('Time Distribution') xlabel('Time in number of transitions') ylabel('Relative frequency')
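As a quick illustration of how the compound-demand routines listed earlier in this appendix are used, here is a minimal call to gendf. This fragment is not part of the original directory; it assumes gendf.m (listed above) has been saved and is on the MATLAB path, and the coefficient vectors are chosen so the answer is easy to check by hand: P(D = 0) = P(N = 0) + P(N = 1)P(Y = 0) = 1/2 + 1/4 = 3/4.

gn = [0.5 0.5];            % P(N=0) = P(N=1) = 1/2, so gN(s) = 0.5 + 0.5s
gy = [0.5 0.5];            % P(Y=0) = P(Y=1) = 1/2, so gY(s) = 0.5 + 0.5s
[d, pd] = gendf(gn, gy);   % distribution of the compound demand D = Y_1 + ... + Y_N
disp([d; pd])              % D takes values 0, 1 with probabilities 0.75, 0.25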
17.02: Appendix B to Applied Probability - Some mathematical aids

Series

1. Geometric series. From the expression $(1 - r)(1 + r + r^2 + \cdot\cdot\cdot + r^n) = 1 - r^{n + 1}$, we obtain

$\sum_{k = 0}^{n} r^k = \dfrac{1 - r^{n + 1}}{1 - r}$ for $r \ne 1$

For $|r| < 1$, these sums converge to the geometric series $\sum_{k = 0}^{\infty} r^k = \dfrac{1}{1 - r}$

Differentiation yields the following two useful series:

$\sum_{k = 1}^{\infty} kr^{k - 1} = \dfrac{1}{(1 - r)^2}$ for $|r| < 1$ and $\sum_{k = 2}^{\infty} k(k - 1)r^{k - 2} = \dfrac{2}{(1 - r)^3}$ for $|r| < 1$

For the finite sum, differentiation and algebraic manipulation yields

$\sum_{k = 0}^{n} k r^{k - 1} = \dfrac{1 - r^n [1 + n(1 - r)]}{(1 - r)^2}$ which converges to $\dfrac{1}{(1 - r)^2}$ for $|r| < 1$

2. Exponential series. $e^x = \sum_{k = 0}^{\infty} \dfrac{x^k}{k!}$ and $e^{-x} = \sum_{k = 0}^{\infty} (-1)^k \dfrac{x^k}{k!}$ for any $x$

Simple algebraic manipulation yields the following equalities useful for the Poisson distribution:

$\sum_{k = n}^{\infty} k \dfrac{x^k}{k!} = x \sum_{k = n - 1}^{\infty} \dfrac{x^k}{k!}$ and $\sum_{k = n}^{\infty} k (k - 1) \dfrac{x^k}{k!} = x^2 \sum_{k = n - 2}^{\infty} \dfrac{x^k}{k!}$

3. Sums of powers of integers. $\sum_{i = 1}^{n} i = \dfrac{n(n + 1)}{2}$ and $\sum_{i = 1}^{n} i^2 = \dfrac{n(n + 1)(2n + 1)}{6}$

Some useful integrals

1. The gamma function $\Gamma(r) = \int_{0}^{\infty} t^{r - 1} e^{-t}\ dt$ for $r > 0$

Integration by parts shows $\Gamma (r) = (r - 1) \Gamma (r - 1)$ for $r > 1$

By induction, $\Gamma (r) = (r - 1)(r - 2) \cdot\cdot\cdot (r - k) \Gamma (r - k)$ for $r > k$

For a positive integer $n$, $\Gamma (n) = (n - 1)!$ with $\Gamma (1) = 0! = 1$

2. By a change of variable in the gamma integral, we obtain $\int_{0}^{\infty} t^r e^{-\lambda t}\ dt = \dfrac{\Gamma (r+1)}{\lambda^{r + 1}}$ for $r > -1$, $\lambda > 0$

3. A well known indefinite integral gives

$\int_{a}^{\infty} t^m e^{-\lambda t}\ dt = \dfrac{m!}{\lambda^{m + 1}} e^{-\lambda a} [1 + \lambda a + \dfrac{(\lambda a)^2}{2!} + \cdot\cdot\cdot + \dfrac{(\lambda a)^m}{m!}]$

4. The following integrals are important for the Beta distribution.

$\int_{0}^{1} u^r (1 - u)^s\ du = \dfrac{\Gamma (r + 1) \Gamma (s + 1)}{\Gamma (r + s + 2)}$ for $r > -1$, $s > -1$

For nonnegative integers $m, n$, $\int_{0}^{1} u^m (1 - u)^n\ du = \dfrac{m! n!}{(m + n + 1)!}$

Some basic counting problems

We consider three basic counting problems, which are used repeatedly as components of more complex problems. The first two, arrangements and occupancy, are equivalent. The third is a basic matching problem.

Arrangements of $r$ objects selected from among $n$ distinguishable objects.
a. The order is significant.
b. The order is irrelevant.
For each of these, we consider two additional alternative conditions.
1. No element may be selected more than once.
2. Repetition is allowed.

Occupancy of $n$ distinct cells by $r$ objects. These objects are
a. Distinguishable.
b. Indistinguishable.
The occupancy may be
1. Exclusive.
2. Nonexclusive (i.e., more than one object per cell).

The results in the four cases may be summarized as follows:
a. 1. Ordered arrangements, without repetition (permutations). Distinguishable objects, exclusive occupancy. $P(n, r) = \dfrac{n!}{(n - r)!}$
2. Ordered arrangements, with repetition allowed. Distinguishable objects, nonexclusive occupancy. $U(n,r) = n^r$
b. 1. Arrangements without repetition, order irrelevant (combinations). Indistinguishable objects, exclusive occupancy. $C(n, r) = \dfrac{n!}{r!(n - r)!} = \dfrac{P(n, r)}{r!}$
2. Unordered arrangements, with repetition.
Indistinguishable objects, nonexclusive occupancy. $S(n, r) = C(n + r - 1, r)$

Matching $n$ distinguishable elements to a fixed order. Let $M(n, k)$ be the number of permutations which give $k$ matches.

Example: $n = 5$. Natural order 1 2 3 4 5; permutation 3 2 5 4 1 (two matches, in positions 2 and 4).

We reduce the problem to determining $M(n, 0)$, as follows: Select $k$ places for matches in $C(n, k)$ ways. Order the $n - k$ remaining elements so that there are no matches in the other $n - k$ places. Then

$M(n, k) = C(n, k) M(n - k, 0)$

Some algebraic trickery shows that $M(n, 0)$ is the integer nearest $n!/e$. These are easily calculated by the MATLAB command M = round(gamma(n+1)/exp(1)). For example

>> M = round(gamma([3:10]+1)/exp(1));
>> disp([3:6;M(1:4);7:10;M(5:8)]')
3 2 7 1854
4 9 8 14833
5 44 9 133496
6 265 10 1334961

(A numerical cross-check of these counts appears at the end of this appendix.)

Extended binomial coefficients and the binomial series

The ordinary binomial coefficient is $C(n, k) = \dfrac{n!}{k!(n - k)!}$ for integers $n > 0$, $0 \le k \le n$.

For any real $x$, any integer $k$, we extend the definition by

$C(x, 0) = 1$, $C(x, k) = 0$ for $k < 0$, and $C(n, k) = 0$ for a positive integer $k > n$

and

$C(x, k) = \dfrac{x(x - 1)(x - 2) \cdot\cdot\cdot (x - k + 1)}{k!}$ otherwise

Pascal's relation holds: $C(x, k) = C(x - 1, k - 1) + C(x - 1, k)$

The power series expansion about $t = 0$ shows

$(1 + t)^x = 1 + C(x, 1)t + C(x, 2)t^2 + \cdot\cdot\cdot$ $\forall x$, $-1 < t < 1$

For $x = n$, a positive integer, the series becomes a polynomial of degree $n$.

Cauchy's equation

Let $f$ be a real-valued function defined on $(0, \infty)$, such that
a. $f(t + u) = f(t) + f(u)$ for $t, u > 0$, and
b. There is an open interval $I$ on which $f$ is bounded above (or is bounded below).
Then $f(t) = f(1) t$ $\forall t > 0$

Let $f$ be a real-valued function defined on $(0, \infty)$ such that
a. $f(t + u) = f(t)f(u)$ $\forall t, u > 0$, and
b. There is an interval on which $f$ is bounded above.
Then, either $f(t) = 0$ for $t > 0$, or there is a constant $a$ such that $f(t) = e^{at}$ for $t > 0$.

[For a proof, see Billingsley, Probability and Measure, second edition, appendix A20]

Countable and uncountable sets

A set (or class) is countable iff either it is finite or its members can be put into a one-to-one correspondence with the natural numbers.

Examples
• The set of odd integers is countable.
• The finite set $\{n: 1 \le n \le 1000\}$ is countable.
• The set of all rational numbers is countable. (This is established by an argument known as diagonalization.)
• The set of pairs of elements from two countable sets is countable.
• The union of a countable class of countable sets is countable.

A set is uncountable iff it is neither finite nor can be put into a one-to-one correspondence with the natural numbers.

Examples
• The class of positive real numbers is uncountable. A well known argument shows that the assumption of countability leads to a contradiction.
• The set of real numbers in any finite interval is uncountable, since these can be put into a one-to-one correspondence with the class of all positive reals.
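As a quick cross-check of the matching counts $M(n, 0)$ tabulated above (this fragment is an illustrative addition, not part of the original appendix), the same numbers follow from the standard derangement recursion $M(n, 0) = (n - 1)[M(n - 1, 0) + M(n - 2, 0)]$, with $M(1, 0) = 0$ and $M(2, 0) = 1$, and agree with the nearest-integer formula:

D = zeros(1,10); D(1) = 0; D(2) = 1;            % D(n) plays the role of M(n,0)
for n = 3:10
    D(n) = (n-1)*(D(n-1) + D(n-2));             % derangement recursion
end
disp([D(3:10); round(gamma((3:10)+1)/exp(1))])  % the two rows agree: 2 9 44 265 1854 ...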
17.03: Appendix C - Data on some common distributions

Discrete distributions

Indicator function $X = I_E$ $P(X = 1) = P(E) = p$ $P(X = 0) = q = 1 - p$
$E[X] = p$ $\text{Var} [X] = pq$ $M_X (s) = q + pe^s$ $g_X (s) = q + ps$

Simple random variable $X = \sum_{i = 1}^{n} t_i I_{A_i}$ (a primitive form) $P(A_i) = p_i$
$E[X] = \sum_{i = 1}^{n} t_i p_i$ $\text{Var} [X] = \sum_{i = 1}^{n} t_i^2 p_i q_i - 2 \sum_{i < j} t_i t_j p_i p_j$ $M_X(s) = \sum_{i = 1}^{n} p_i e^{st_i}$

Binomial $(n, p)$ $X = \sum_{i = 1}^{n} I_{E_i}$ with $\{I_{E_i} : 1 \le i \le n\}$ iid, $P(E_i) = p$
$P(X = k) = C(n, k) p^k q^{n - k}$ $E[X] = np$ $\text{Var} [X] = npq$ $M_X (s) = (q + pe^s)^n$ $g_X (s) = (q + ps)^n$
MATLAB: $P(X = k) = \text{ibinom} (n, p, k)$ $P(X \ge k) = \text{cbinom} (n, p, k)$

Geometric $(p)$ $P(X = k) = pq^k$ $\forall k \ge 0$ $E[X] = q/p$ $\text{Var} [X] = q/p^2$ $M_X (s) = \dfrac{p}{1 - qe^s}$ $g_X (s) = \dfrac{p}{1- qs}$
If $Y - 1$ ~ geometric $(p)$, so that $P(Y = k) = pq^{k - 1}$ $\forall k \ge 1$, then $E[Y] = 1/p$ $\text{Var} [Y] = q/p^2$ $M_Y (s) = \dfrac{pe^s}{1 - qe^s}$ $g_Y (s) = \dfrac{ps}{1 - qs}$

Negative binomial $(m, p)$ $X$ is the number of failures before the $m$th success.
$P(X = k) = C(m + k - 1, m - 1) p^m q^k$ $\forall k \ge 0$ $E[X] = mq/p$ $\text{Var} [X] = mq/p^2$ $M_X (s) = (\dfrac{p}{1 - qe^s})^m$ $g_X (s) = (\dfrac{p}{1 - qs})^m$
For $Y_m = X_m + m$, the number of the trial on which the $m$th success occurs,
$P(Y = k) = C(k - 1, m - 1) p^m q^{k - m}$ $\forall k \ge m$ $E[Y] = m/p$ $\text{Var} [Y] = mq/p^2$ $M_Y(s) = (\dfrac{pe^s}{1 - qe^s})^m$ $g_Y (s) = (\dfrac{ps}{1 - qs})^m$
MATLAB: $P(Y = k) = \text{nbinom} (m, p, k)$

Poisson $(\mu)$ $P(X = k) = e^{-\mu} \dfrac{\mu^k}{k!}$ $\forall k \ge 0$ $E[X] = \mu$ $\text{Var}[X] = \mu$ $M_X (s) = e^{\mu (e^s - 1)}$ $g_X (s) = e^{\mu (s - 1)}$
MATLAB: $P(X = k) = \text{ipoisson} (\mu, k)$ $P(X \ge k) = \text{cpoisson} (\mu, k)$

Absolutely continuous distributions

Uniform $(a, b)$ $f_X (t) = \dfrac{1}{b - a}$ for $a < t < b$ (zero elsewhere)
$E[X] = \dfrac{b + a}{2}$ $\text{Var} [X] = \dfrac{(b - a)^2}{12}$ $M_X (s) = \dfrac{e^{sb} - e^{sa}}{s(b - a)}$

Symmetric triangular $(-a, a)$ $f_X (t) = \begin{cases} (a + t)/a^2 & -a \le t < 0 \\ (a - t)/a^2 & 0 \le t \le a \end{cases}$
$E[X] = 0$ $\text{Var} [X] = \dfrac{a^2}{6}$ $M_X (s) = \dfrac{e^{as} + e^{-as} - 2}{a^2 s^2} = \dfrac{e^{as} - 1}{as} \cdot \dfrac{1 - e^{-as}}{as}$

Exponential $(\lambda)$ $f_X(t) = \lambda e^{-\lambda t}$ for $t \ge 0$
$E[X] = \dfrac{1}{\lambda}$ $\text{Var} [X] = \dfrac{1}{\lambda^2}$ $M_X (s) = \dfrac{\lambda}{\lambda - s}$

Gamma $(\alpha, \lambda)$ $f_X(t) = \dfrac{\lambda^{\alpha} t^{\alpha - 1} e^{-\lambda t}}{\Gamma (\alpha)}$ for $t \ge 0$
$E[X] = \dfrac{\alpha}{\lambda}$ $\text{Var} [X] = \dfrac{\alpha}{\lambda^2}$ $M_X (s) = (\dfrac{\lambda}{\lambda - s})^{\alpha}$
MATLAB: $P(X \le t) = \text{gammadbn} (\alpha, \lambda, t)$

Normal $N(\mu, \sigma^2)$ $f_X (t) = \dfrac{1}{\sigma \sqrt{2\pi}} \text{exp} (-\dfrac{1}{2} (\dfrac{t - \mu}{\sigma})^2)$
$E[X] = \mu$ $\text{Var} [X] = \sigma^2$ $M_X (s) = \text{exp} (\dfrac{\sigma^2 s^2}{2} + \mu s)$
MATLAB: $P(X \le t) = \text{gaussian} (\mu, \sigma^2, t)$

Beta $(r, s)$ $f_X (t) = \dfrac{\Gamma (r + s)}{\Gamma (r) \Gamma (s)} t^{r -1} (1 - t)^{s - 1}$ for $0 < t < 1$, $r > 0$, $s > 0$
$E[X] = \dfrac{r}{r + s}$ $\text{Var} [X] = \dfrac{rs}{(r + s)^2 (r + s + 1)}$
MATLAB: $f_X (t) = \text{beta} (r, s, t)$ $P(X \le t) = \text{betadbn} (r, s, t)$

Weibull $(\alpha, \lambda, \nu)$ $F_X (t) = 1 - e^{-\lambda (t - \nu)^{\alpha}}$, $\alpha > 0, \lambda >0, \nu \ge 0, t \ge \nu$ $E[X] =
\dfrac{1}{\lambda^{1/\alpha}} \Gamma (1 + 1/\alpha) + \nu$ $\text{Var} [X] = \dfrac{1}{\lambda^{2/\alpha}} [\Gamma (1 + 2/\lambda) - \Gamma^2 (1 + 1/\lambda)]$ MATLAB: ($\nu = 0$ only) $f_X (t) = \text{weibull} (a, l, t)$ $P(X \le t) = \text{weibull} (a, l, t)$ Relationship between gamma and Poisson distributions • If $X$ ~ gamma $(n, \lambda)$, then $P(X \le t) = P(Y \ge n)$ where $Y$ ~ Poisson $(\lambda t)$. • If $Y$ ~ Poisson $(\lambda t)$, then $P(Y \ge n) = P(X \le t)$ where $X$ ~ gamma $(n, \lambda)$. 17.04: Appendix D to Applied Probability- The standard normal distribution $\phi (t) = \dfrac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-\mu^2/2} \ dt$   $\phi (-t) = 1 - \phi (t)$ t 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09 0.0 0.5000 0.5040 0.5080 0.5120 0.5160 0.5199 0.5239 0.5279 0.5319 0.5359 0.1 0.5398 0.5438 0.5478 0.5517 0.5557 0.5596 0.5636 0.5675 0.5714 0.5753 0.2 0.5793 0.5832 0.5871 0.5910 0.5948 0.5987 0.6026 0.6064 0.6103 0.6141 0.3 0.6179 0.6217 0.6255 0.6293 0.6331 0.6368 0.6406 0.6443 0.6480 0.6517 0.4 0.6554 0.6591 0.6628 0.6664 0.6700 0.6736 0.6772 0.6808 0.6844 0.6879 0.5 0.6915 0.6950 0.6985 0.7019 0.7054 0.7088 0.7123 0.7157 0.7190 0.7224 0.6 0.7257 0.7291 0.7324 0.7357 0.7389 0.7422 0.7454 0.7486 0.7517 0.7549 0.7 0.7580 0.7611 0.7643 0.7673 0.7704 0.7734 0.7764 0.7794 0.7823 0.7852 0.8 0.7881 0.7910 0.7939 0.7967 0.7995 0.8023 0.8051 0.8078 0.8106 0.8133 0.9 0.8159 0.8186 0.8212 0.8238 0.8264 0.8289 0.8315 0.8340 0.8365 0.8389 1.0 0.8413 0.8438 0.8461 0.8485 0.8508 0.8531 0.8554 0.8577 0.8599 0.8621 1.1 0.8643 0.8665 0.8686 0.8708 0.8729 0.8749 0.8770 0.8790 0.8810 0.8830 1.2 0.8849 0.8869 0.8888 0.8907 0.8925 0.8944 0.8962 0.8980 0.8997 0.9015 1.3 0.9032 0.9049 0.9066 0.9082 0.9099 0.9115 0.9131 0.9147 0.9162 0.9177 1.4 0.9192 0.9207 0.9222 0.9236 0.9251 0.9265 0.9279 0.9292 0.9306 0.9319 1.5 0.9332 0.9345 0.9357 0.9370 0.9382 0.9394 9.9406 0.9418 0.9429 0.9441 1.6 0.9452 0.9463 0.9474 0.9484 0.9495 0.9505 0.9515 0.9525 0.9535 0.9545 1.7 0.9554 0.9564 0.9573 0.9582 0.9591 0.9599 0.9608 0.9616 0.9625 0.9633 1.8 0.9641 0.9649 0.9656 0.9664 0.9671 0.9678 0.9686 0.9693 0.9699 0.9706 1.9 0.9713 0.9719 0.9726 0.9732 0.9738 0.9744 0.9750 0.9756 0.9761 0.9767 2.0 0.9772 0.9778 0.9783 0.9788 0.9793 0.9798 0.9803 0.9808 0.9812 0.9817 2.1 0.9821 0.9826 0.9830 0.9834 0.9838 0.9842 0.9846 0.9850 0.9854 0.9857 2.2 0.9861 0.9864 0.9868 0.9871 0.9875 0.9878 0.9881 0.9884 0.9887 0.9890 2.3 0.9893 0.9896 0.9898 0.9901 0.9904 0.9906 0.9909 0.9911 0.9913 0.9916 2.4 0.9918 0.9920 0.9922 0.9925 0.9927 0.9929 0.9931 0.9932 0.9934 0.9936 2.5 0.9938 0.9940 0.9941 0.9943 0.9945 0.9946 0.9948 0.9949 0.9951 0.9952 2.6 0.9953 0.9955 0.9956 0.9957 0.9959 0.9960 0.9961 0.9962 0.9963 0.9964 2.7 0.9965 0.9966 0.9967 0.9968 0.9969 0.9970 0.9971 0.9972 0.9973 0.9974 2.8 0.9974 0.9975 0.9976 0.9977 0.9977 0.9978 0.9979 0.9979 0.9980 0.9981 2.9 0.9981 0.9982 0.9982 0.9983 0.9984 0.9984 0.9985 0.9985 0.9986 0.9986 3.0 0.9987 0.9987 0.9987 0.9988 0.9988 0.9989 0.9989 0.9989 0.9990 0.9990
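Returning to the gamma-Poisson relationship stated at the end of Appendix C, a quick numerical check is easy to carry out. This is only an illustrative sketch with arbitrary parameter values; it uses base MATLAB's gammainc and an explicit Poisson tail sum, though the m-functions gammadbn and cpoisson mentioned above would serve equally well.

n = 4; lambda = 3; t = 2; mu = lambda*t;
pGamma = gammainc(mu, n);                            % P(X <= t) for X ~ gamma(n, lambda)
k = 0:n-1;
pPoisson = 1 - sum(exp(-mu)*mu.^k ./ factorial(k));  % P(Y >= n) for Y ~ Poisson(lambda*t)
disp([pGamma pPoisson])                              % the two values agree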
17.05: Appendix E to Applied Probability - Properties of Mathematical Expectation

$E[g(X)] = \int g(X)\ dP$

We suppose, without repeated assertion, that the random variables and Borel functions of random variables or random vectors are integrable. Use of an expression such as $I_M (X)$ involves the tacit assumption that $M$ is a Borel set on the codomain of $X$.

(E1): $E[aI_A] = aP(A)$, any constant $a$, any event $A$

(E1a): $E[I_M (X)] = P(X \in M)$ and $E[I_M (X) I_N (Y)] = P(X \in M, Y \in N)$ for any Borel sets $M, N$ (Extends to any finite product of such indicator functions of random vectors)

(E2): Linearity. For any constants $a, b$, $E[aX + bY] = aE[X] + bE[Y]$ (Extends to any finite linear combination)

(E3): Positivity; monotonicity.
a. $X \ge 0$ a.s. implies $E[X] \ge 0$, with equality iff $X = 0$ a.s.
b. $X \ge Y$ a.s. implies $E[X] \ge E[Y]$, with equality iff $X = Y$ a.s.

(E4): Fundamental lemma. If $X \ge 0$ is bounded, and $\{X_n: 1 \le n\}$ is a.s. nonnegative, nondecreasing, with $\text{lim}_n X_n (\omega) \ge X(\omega)$ for a.e. $\omega$, then $\text{lim}_n E[X_n] \ge E[X]$

(E4a): Monotone convergence. If for all $n$, $0 \le X_n \le X_{n + 1}$ a.s. and $X_n \to X$ a.s., then $E[X_n] \to E[X]$ (The theorem also holds if $E[X] = \infty$)

(E5): Uniqueness. * is to be read as one of the symbols $\le, =$, or $\ge$
a. $E[I_M(X) g(X)]$ * $E[I_M(X) h(X)]$ for all $M$ iff $g(X)$ * $h(X)$ a.s.
b. $E[I_M(X) I_N (Z) g(X, Z)] = E[I_M (X) I_N (Z) h(X,Z)]$ for all $M, N$ iff $g(X, Z) = h(X, Z)$ a.s.

(E6): Fatou's lemma. If $X_n \ge 0$ a.s., for all $n$, then $E[\text{lim inf } X_n] \le \text{lim inf } E[X_n]$

(E7): Dominated convergence. If real or complex $X_n \to X$ a.s., $|X_n| \le Y$ a.s. for all $n$, and $Y$ is integrable, then $\text{lim}_n E[X_n] = E[X]$

(E8): Countable additivity and countable sums.
a. If $X$ is integrable over $E$, and $E = \bigvee_{i = 1}^{\infty} E_i$ (disjoint union), then $E[I_E X] = \sum_{i = 1}^{\infty} E[I_{E_i} X]$
b. If $\sum_{n = 1}^{\infty} E[|X_n|] < \infty$, then $\sum_{n = 1}^{\infty} |X_n| < \infty$ a.s. and $E[\sum_{n = 1}^{\infty} X_n] = \sum_{n = 1}^{\infty} E[X_n]$

(E9): Some integrability conditions
a. $X$ is integrable iff both $X^{+}$ and $X^{-}$ are integrable iff $|X|$ is integrable.
b. $X$ is integrable iff $E[I_{\{|X| > a\}} |X|] \to 0$ as $a \to \infty$
c. If $X$ is integrable, then $X$ is a.s. finite
d. If $E[X]$ exists and $P(A) = 0$, then $E[I_A X] = 0$

(E10): Triangle inequality. For integrable $X$, real or complex, $|E[X]| \le E[|X|]$

(E11): Mean-value theorem. If $a \le X \le b$ a.s. on $A$, then $aP(A) \le E[I_A X] \le bP(A)$

(E12): For nonnegative, Borel $g$, $E[g(X)] \ge aP(g(X) \ge a)$

(E13): Markov's inequality. If $g \ge 0$ and nondecreasing for $t \ge 0$ and $a \ge 0$, then $g(a)P(|X| \ge a) \le E[g(|X|)]$

(E14): Jensen's inequality. If $g$ is convex on an interval which contains the range of random variable $X$, then $g(E[X]) \le E[g(X)]$

(E15): Schwarz' inequality. For $X, Y$ real or complex, $|E[XY]|^2 \le E[|X|^2] E[|Y|^2]$, with equality iff there is a constant $c$ such that $X = cY$ a.s.

(E16): Hölder's inequality. For $1 \le p, q$, with $\dfrac{1}{p} + \dfrac{1}{q} = 1$, and $X, Y$ real or complex, $E[|XY|] \le E[|X|^p]^{1/p} E[|Y|^q]^{1/q}$

(E17): Minkowski's inequality. For $1 < p$ and $X, Y$ real or complex, $E[|X + Y|^p]^{1/p} \le E[|X|^p]^{1/p} + E[|Y|^p]^{1/p}$

(E18): Independence and expectation. The following conditions are equivalent.
a. The pair $\{X, Y\}$ is independent
b. $E[I_M (X) I_N (Y)] = E[I_M (X)] E[I_N (Y)]$ for all Borel $M, N$
c.
$E[g(X)h(Y)] = E[g(X)] E[h(Y)]$ for all Borel $g, h$ such that $g(X)$, $h(Y)$ are integrable.

(E19): Special case of the Radon-Nikodym theorem. If $g(Y)$ is integrable and $X$ is a random vector, then there exists a real-valued Borel function $e(\cdot)$, defined on the range of $X$, unique a.s. $[P_X]$, such that $E[I_M(X) g(Y)] = E[I_M (X) e(X)]$ for all Borel sets $M$ on the codomain of $X$.

(E20): Some special forms of expectation
a. Suppose $F$ is nondecreasing, right-continuous on $[0, \infty)$, with $F(0^{-}) = 0$. Let $F^{*} (t) = F(t - 0)$. Consider $X \ge 0$ with $E[F(X)] < \infty$. Then,
(1) $E[F(X)] = \int_{0}^{\infty} P(X \ge t) F\ (dt)$ and (2) $E[F^{*} (X)] = \int_{0}^{\infty} P(X > t) F\ (dt)$
b. If $X$ is integrable, then $E[X] = \int_{-\infty}^{\infty} [u(t) - F_X (t)]\ dt$, where $u$ is the unit step function
c. If $X, Y$ are integrable, then $E[X - Y] = \int_{-\infty}^{\infty} [F_Y (t) - F_X (t)]\ dt$
d. If $X \ge 0$ is integrable, then $\sum_{n = 0}^{\infty} P(X \ge n + 1) \le E[X] \le \sum_{n = 0}^{\infty} P(X \ge n) \le N \sum_{k = 0}^{\infty} P(X \ge kN)$, for all $N \ge 1$
e. If integrable $X \ge 0$ is integer-valued, then $E[X] = \sum_{n = 1}^{\infty} P(X \ge n) = \sum_{n = 0}^{\infty} P(X > n)$ and $E[X^2] = \sum_{n = 1}^{\infty} (2n - 1) P(X \ge n) = \sum_{n = 0}^{\infty} (2n + 1) P(X > n)$
f. If $Q$ is the quantile function for $F_X$, then $E[g(X)] = \int_{0}^{1} g[Q(u)]\ du$
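As a small numerical illustration of (E20f), not part of the original appendix, take $X$ ~ exponential $(\lambda)$, for which the quantile function is $Q(u) = -\ln (1 - u)/\lambda$ and $E[X] = 1/\lambda$. Averaging $Q(u)$ over uniformly distributed values of $u$ approximates the integral:

lambda = 2;
u = rand(1, 100000);           % uniform samples on (0,1)
x = -log(1 - u)/lambda;        % Q(u) for the exponential distribution
disp([mean(x) 1/lambda])       % Monte Carlo estimate of E[X] vs the exact value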
17.06: Appendix F to Applied Probability - Properties of conditional expectation given a random vector

We suppose, without repeated assertion, that the random variables and functions of random vectors are integrable, as needed.

(CE1): Defining condition. $e(X) = E[g(Y)|X]$ a.s. iff $E[I_M (X) g(Y)] = E[I_M (X) e(X)]$ for each Borel set $M$ on the codomain of $X$.

(CE1a): If $P(X \in M) > 0$, then $E[I_M(X) e(X)] = E[g(Y)|X \in M] P(X \in M)$

(CE1b): Law of total probability. $E[g(Y)] = E\{E[g(Y)|X]\}$

(CE2): Linearity. For any constants $a, b$, $E[ag(Y) + bh(Z)|X] = aE[g(Y)|X] + bE[h(Z)|X]$ a.s. (Extends to any finite linear combination)

(CE3): Positivity; monotonicity.
a. $g(Y) \ge 0$ a.s. implies $E[g(Y)|X] \ge 0$ a.s.
b. $g(Y) \ge h(Z)$ a.s. implies $E[g(Y)|X] \ge E[h(Z)|X]$ a.s.

(CE4): Monotone convergence. $Y_n \to Y$ a.s. monotonically implies $E[Y_n |X] \to E[Y|X]$ a.s.

(CE5): Independence. $\{X, Y\}$ is an independent pair
a. iff $E[g(Y)|X] = E[g(Y)]$ a.s. for all Borel functions $g$
b. iff $E[I_N (Y)|X] = E[I_N (Y)]$ a.s. for all Borel sets $N$ on the codomain of $Y$

(CE6): $e(X) = E[g(Y)|X]$ a.s. iff $E[h(X)g(Y)] = E[h(X)e(X)]$ for any Borel function $h$

(CE7): $E[h(X)|X] = h(X)$ a.s. for any Borel function $h$

(CE8): $E[h(X)g(Y)|X] = h(X) E[g(Y)|X]$ a.s. for any Borel function $h$

(CE9): If $X = h(W)$ and $W = k(X)$, with $h, k$ Borel functions, then $E[g(Y)|X] = E[g(Y)|W]$ a.s.

(CE10): If $g$ is a Borel function such that $E[g(t, Y)]$ is finite for all $t$ on the range of $X$ and $E[g(X, Y)]$ is finite, then
a. $E[g(X, Y)|X = t] = E[g(t, Y)|X = t]$ a.s. $[P_X]$
b. If $\{X, Y\}$ is independent, then $E[g(X, Y)|X = t] = E[g(t, Y)]$ a.s. $[P_X]$

(CE11): Suppose $\{X(t): t \in T\}$ is a real-valued measurable random process whose parameter set $T$ is a Borel subset of the real line and $S$ is a random variable whose range is a subset of $T$, so that $X(S)$ is a random variable. If $E[X(t)]$ is finite for all $t$ in $T$ and $E[X(S)]$ is finite, then
a. $E[X(S)|S = t] = E[X(t)|S = t]$ a.s. $[P_S]$
b. If, in addition, $\{S, X_T\}$ is independent, then $E[X(S)|S = t] = E[X(t)]$ a.s. $[P_S]$

(CE12): Countable additivity and countable sums.
a. If $Y$ is integrable on $A$ and $A = \bigvee_{n = 1}^{\infty} A_n$, then $E[I_A Y|X] = \sum_{n = 1}^{\infty} E[I_{A_n} Y|X]$ a.s.
b. If $\sum_{n = 1}^{\infty} E[|Y_n|] < \infty$, then $E[\sum_{n = 1}^{\infty} Y_n|X] = \sum_{n = 1}^{\infty} E[Y_n|X]$ a.s.

(CE13): Triangle inequality. $|E[g(Y)|X]| \le E[|g(Y)||X]$ a.s.

(CE14): Jensen's inequality. If $g$ is a convex function on an interval $I$ which contains the range of a real random variable $Y$, then $g\{E[Y|X]\} \le E[g(Y)|X]$ a.s.

(CE15): Suppose $E[|Y|^p] < \infty$ and $E[|Z|^p] < \infty$ for $1 \le p < \infty$. Then $E\{|E[Y|X] - E[Z|X]|^p\} \le E[|Y - Z|^p] < \infty$

17.07: Appendix G to Applied Probability - Properties of conditional independence given a random vector

Definition. The pair $\{X, Y\}$ is conditionally independent, given $Z$, denoted $\{X, Y\}$ ci $|Z$, iff

$E[I_M(X) I_N (Y)|Z] = E[I_M(X)|Z] E[I_N(Y)|Z]$ a.s. for all Borel sets $M, N$

An arbitrary class $\{X_t: t \in T\}$ of random vectors is conditionally independent, given $Z$, iff such a product rule holds for each finite subclass of two or more members of the class.

Remark. The expression “for all Borel sets $M$, $N$," here and elsewhere, implies the sets are on the appropriate codomains. Also, the expressions below “for all Borel functions $g$,” etc., imply that the functions are real-valued, such that the indicated expectations are finite.

The following are equivalent. Each is necessary and sufficient that $\{X, Y\}$ ci $|Z$.
(CI1): $E[I_M (X) I_N (Y)|Z] = E[I_M (X)|Z] E[I_N (Y)|Z]$ a.s. for all Borel sets $M, N$

(CI2): $E[I_M (X)|Z, Y] = E[I_M(X)|Z]$ a.s. for all Borel sets $M$

(CI3): $E[I_M (X)I_Q(Z)|Z,Y] = E[I_M(X)I_Q(Z)|Z]$ a.s. for all Borel sets $M, Q$

(CI4): $E[I_M (X) I_Q(Z)|Y] = E\{E[I_M(X) I_Q (Z)|Z]|Y\}$ a.s. for all Borel sets $M, Q$

(CI5): $E[g(X, Z)h(Y, Z)|Z] = E[g(X,Z)|Z]E[h(Y, Z)|Z]$ a.s. for all Borel functions $g$, $h$

(CI6): $E[g(X, Z)|Z, Y] = E[g(X, Z)|Z]$ a.s. for all Borel functions $g$

(CI7): For any Borel function $g$, there exists a Borel function $e_g$ such that $E[g(X, Z)|Z,Y] = e_g(Z)$ a.s.

(CI8): $E[g(X,Z)|Y] = E\{E[g(X, Z)|Z]|Y\}$ a.s. for all Borel functions $g$

(CI9): $\{U, V\}$ ci $|Z$, where $U = g(X,Z)$ and $V = h(Y,Z)$, for any Borel functions $g, h$

Additional properties of conditional independence

(CI10): $\{X, Y\}$ ci $|Z$ implies $\{X, Y\}$ ci $|(Z, U)$, $\{X, Y\}$ ci $|(Z, V)$, and $\{X, Y\}$ ci $|(Z, U, V)$, where $U = h(X)$ and $V = k(Y)$, with $h, k$ Borel.

(CI11): $\{X, Z\}$ ci $|Y$ and $\{X, W\}$ ci $|(Y, Z)$ iff $\{X, (Z, W)\}$ ci $|Y$.

(CI12): $\{X, Z\}$ ci $|Y$ and $\{(X, Y), W\}$ ci $|Z$ implies $\{X, (Z, W)\}$ ci $|Y$.

(CI13): $\{X, Y\}$ is independent and $\{X, Z\}$ ci $|Y$ iff $\{X, (Y, Z)\}$ is independent.

(CI14): $\{X, Y\}$ ci $|Z$ implies $E[g(X, Y)|Y = u, Z = v] = E[g(X, u)|Z = v]$ a.s. $[P_{YZ}]$

(CI15): $\{X, Y\}$ ci $|Z$ implies
a. $E[g(X, Z)h(Y, Z)] = E\{E[g(X, Z)|Z] E[h(Y, Z)|Z]\} = E[e_1(Z)e_2(Z)]$
b. $E[g(Y)|X \in M] P(X \in M) = E\{E[I_M(X)|Z] E[g(Y)|Z]\}$

(CI16): $\{(X, Y), Z\}$ ci $|W$ iff $E[I_M(X)I_N(Y)I_Q(Z)|W] = E[I_M(X)I_N (Y)|W] E[I_Q(Z)|W]$ a.s. for all Borel sets $M, N, Q$
npr02_04 % file npr02_04.m % Data for problem P2-4 pm = [0.0168 0.0392 0.0672 0.1568 0.0072 0.0168 0.0288 0.0672 ... 0.0252 0.0588 0.1008 0.2352 0.0108 0.0252 0.0432 0.1008]; disp('Minterm probabilities are in pm. Use mintable(4)') npr02_05 % file npr02_05.m % Data for problem P2-5 pm = [0.0216 0.0144 0.0504 0.0336 0.0324 0.0216 0.0756 0.0504 0.0216 ... 0.0144 0.0504 0.0336 0.0324 0.0216 0.0756 0.0504 0.0144 0.0096 ... 0.0336 0.0224 0.0216 0.0144 0.0504 0.0336 0.0144 0.0096 0.0336 ... 0.0224 0.0216 0.0144 0.0504 0.0336]; disp('Minterm probabilities are in pm. Use mintable(5)') npr02_06 % file npr02_06.m % Data for problem P2-6 minvec3 DV = [A|Ac; A|(Bc&C); A&C; Ac&B; Ac&Cc; B&Cc]; DP = [1 0.65 0.20 0.25 0.25 0.30]; TV = [((A&Cc)|(Ac&C))&Bc; ((A&Bc)|Ac)&Cc; Ac&(B|Cc)]; disp('Call for mincalc') npr02_07 % file npr02_07.m % Data for problem P2-7 minvec3 DV = [A|Ac; ((A&Bc)|(Ac&B))&C; A&B; Ac&Cc; A; C; A&Bc&Cc]; DP = [ 1 0.4 0.2 0.3 0.6 0.5 0.1]; TV = [(Ac&Cc)|(A&C); ((A&Bc)|Ac)&Cc; Ac&(B|Cc)]; disp('Call for mincalc') npr02_08 % file npr02_08.m % Data for problem P2-8 minvec3 DV = [A|Ac; A; C; A&C; Ac&B; Ac&Bc&Cc]; DP = [ 1 0.6 0.4 0.3 0.2 0.1]; TV = [(A|B)&Cc; (A&Cc)|(Ac&C); (A&Cc)|(Ac&B)]; disp('Call for mincalc') npr02_09 % file npr02_09.m % Data for problem P2-9 minvec3 DV = [A|Ac; A; A&B; A&C; A&B&Cc]; DP = [ 1 0.5 0.3 0.3 0.1]; TV = [A&(~(B&Cc)); (A&B)|(A&C)|(B&C)]; disp('Call for mincalc') % Modification for part 2 % DV = [DV; Ac&Bc&Cc; Ac&B&C]; % DP = [DP 0.1 0.05]; npr02_10 % file npr02_10.m % Data for problem P2-10 minvec4 DV = [A|Ac; A; Ac&Bc; A&Cc; A&C&Dc]; DP = [1 0.6 0.2 0.4 0.1]; TV = [(Ac&B)|(A&(Cc|D))]; disp('Call for mincalc') npr02_11 % file npr02_11.m % Data for problem P2-11 % A = male; B = on campus; C = active in sports minvec3 DV = [A|Ac; A; B; A|C; B&Cc; A&B&C; A&Bc; A&Cc]; DP = [ 1 0.52 0.85 0.78 0.30 0.32 0.08 0.17]; TV = [A&B; A&B&Cc; Ac&C]; disp('Call for mincalc') npr02_12 % file npr02_12.m % Data for problem P2-12 % A = male; B = party member; C = voted last election minvec3 DV = [A|Ac; A; A&Bc; B; Bc&C; Ac&Bc&C]; DP = [ 1 0.60 0.30 0.50 0.20 0.10]; TV = [Bc&Cc]; disp('Call for mincalc') npr02_13 % file npr02_13.m % Data for problem P2-13 % A = rain in Austin; B = rain in Houston; % C = rain in San Antonio minvec3 DV = [A|Ac; A&B; A&Bc; A&C; (A&Bc)|(Ac&B); B&C; Bc&C; Ac&Bc&Cc]; DP = [ 1 0.35 0.15 0.20 0.45 0.30 0.05 0.15]; TV = [A&B&C; (A&B&Cc)|(A&Bc&C)|(Ac&B&C); (A&Bc&Cc)|(Ac&B&Cc)|(Ac&Bc&C)]; disp('Call for mincalc') npr02_14 % file npr02_14.m % Data for problem P2-14 % A = male; B = engineering; % C = foreign language; D = graduate study minvec4 DV = [A|Ac; A; B; Ac&B; C; Ac&C; A&D; Ac&D; A&B&D; ... Ac&B&D; B&C&D; Bc&Cc&D; Ac&Bc&C&D]; DP = [1 0.55 0.23 0.10 0.75 0.45 0.26 0.19 0.13 0.08 0.20 0.05 0.11]; TV = [C&D; Ac&Dc; A&((C&Dc)|(Cc&D))]; disp('Call for mincalc') npr02_15 % file npr02_15.m % Data for problem P2-15 % A = men; B = on campus; C = readers; D = active minvec4 DV = [A|Ac; A; B; Ac&B; C; Ac&C; D; B&D; C&D; ... 
Ac&B&D; Ac&Bc&D; Ac&B&C&D; Ac&Bc&C&D; A&Bc&Cc&D]; DP = [1 0.6 0.55 0.25 0.40 0.25 0.70 0.50 0.35 0.25 0.05 0.10 0.05 0.05]; TV = [A&D&(Cc|Bc); A&Dc&Cc]; disp('Call for mincalc') npr02_16 % file npr02_16.m % Data for problem P2-16 minvec3 DV = [A|Ac; A; B; C; (A&B)|(A&C)|(B&C); A&B&C; A&C; (A&B)-2*(B&C)]; DP = [ 1 0.221 0.209 0.112 0.197 0.045 0.062 0]; TV = [A|B|C; (A&Bc&Cc)|(Ac&B&Cc)|(Ac&Bc&C)]; disp('Call for mincalc') npr02_17 % file npr02_17.m % Data for problem P2-17 % A = alignment; B = brake work; C = headlight minvec3 DV = [A|Ac; A&B&C; (A&B)|(A&C)|(B&C); B&C; A ]; DP = [ 1 0.100 0.325 0.125 0.550]; TV = [A&Bc&Cc; Ac&(~(B&C))]; disp('Call for mincalc') npr02_18 % file npr02_18.m % Date for problem P2-18 minvec3 DV = [A|Ac; A&(B|C); Ac; Ac&Bc&Cc]; DP = [ 1 0.3 0.6 0.1]; TV = [B|C; (((A&B)|(Ac&Bc))&Cc)|(A&C); Ac&(B|Cc)]; disp('Call for mincalc') % Modification % DV = [DV; Ac&B&C; Ac&B]; % DP = [DP 0.2 0.3]; npr02_19 % file npr02_19.m % Data for problem P2-19 % A = computer; B = monitor; C = printer minvec3 DV = [A|Ac; A&B; A&B&Cc; A&C; B&C; (A&Cc)|(Ac&C); ... (A&Bc)|(Ac&B); (B&Cc)|(Bc&C)]; DP = [1 0.49 0.17 0.45 0.39 0.50 0.43 0.43]; TV = [A; B; C; (A&B&Cc)|(A&Bc&C)|(Ac&B&C); (A&B)|(A&C)|(B&C); A&B&C]; disp('Call for mincalc') npr02_20 % file npr02_20.m % Data for problem P2-20 minvec3 DV = [A|Ac; A; B; A&B&C; A&C; (A&B)|(A&C)|(B&C); B&C - 2*(A&C)]; DP = [ 1 0.232 0.228 0.045 0.062 0.197 0]; TV = [A|B|C; Ac&Bc&C]; disp('Call for mincalc') % Modification % DV = [DV; C]; % DP = [DP 0.230 ]; npr02_21 % file npr02_21.m % Data for problem P2-21 minvec3 DV = [A|Ac; A; A&B; A&B&C; C; Ac&Cc]; DP = [ 1 0.4 0.3 0.25 0.65 0.3 ]; TV = [(A&Cc)|(Ac&C); Ac&Bc; A|B; A&Bc]; disp('Call for mincalc') % Modification % DV = [DV; Ac&B&Cc; Ac&Bc]; % DP = [DP 0.1 0.3 ]; npr02_22 % file npr02_22.m % Data for problem P2-22 minvec3 DV = [A|Ac; A; A&B; A&B&C; C; Ac&Cc]; DP = [ 1 0.4 0.5 0.25 0.65 0.3 ]; TV = [(A&Cc)|(Ac&C); Ac&Bc; A|B; A&Bc]; disp('Call for mincalc') % Modification % DV = [DV; Ac&B&Cc; Ac&Bc]; % DP = [DP 0.1 0.3 ]; npr02_23 % file npr02_23.m % Data for problem P2-23 minvec3 DV = [A|Ac; A; A&C; A&B&C; C; Ac&Cc]; DP = [ 1 0.4 0.3 0.25 0.65 0.3 ]; TV = [(A&Cc)|(Ac&C); Ac&Bc; A|B; A&Bc]; disp('Call for mincalc') % Modification % DV = [DV; Ac&B&Cc; Ac&Bc]; % DP = [DP 0.1 0.3 ]; npr03_01 % file npr03_01.m % Data for problem P3-1 minvec3 DV = [A|Ac; A; A&B; B&C; Ac|(B&C); Ac&B&Cc]; DP = [ 1 0.55 0.30 0.20 0.55 0.15 ]; TV = [Ac&B; B]; disp('Call for mincalc') npr04_04 % file npr04_04.m % Data for problem P4-4 pm = [0.032 0.016 0.376 0.011 0.364 0.073 0.077 0.051]; disp('Minterm probabilities for P4-4 are in pm') npr04_05 % file npr04_05.m % Data for problem P4-5 pm = [0.084 0.196 0.036 0.084 0.085 0.196 0.035 0.084 ... 0.021 0.049 0.009 0.021 0.020 0.049 0.010 0.021]; disp('Minterm probabilities for P4-5 are in pm') npr04_06 % file npr04_06.m % Data for problem P4-6 pm = [0.085 0.195 0.035 0.085 0.080 0.200 0.035 0.085 ... 
0.020 0.050 0.010 0.020 0.020 0.050 0.015 0.015]; disp('Minterm probabilities for P4-6 are in pm') mpr05_16 % file mpr05_16.m % Data for Problem P5-16 A = [51 26 7; 42 32 10; 19 54 11; 24 53 7; 27 52 5; 49 19 16; 16 59 9; 47 32 5; 55 17 12; 24 53 7]; B = [27 34 5; 19 43 4; 39 22 5; 38 19 9; 28 33 5; 19 41 6; 37 21 8; 19 42 5; 27 33 6; 39 21 6]; disp('Call for oddsdf') npr05_17 % file npr05_17.m % Data for problem P5-17 PG1 = 84/150; PG2 = 66/125; A = [0.61 0.31 0.08 0.50 0.38 0.12 0.23 0.64 0.13 0.29 0.63 0.08 0.32 0.62 0.06 0.58 0.23 0.19 0.19 0.70 0.11 0.56 0.38 0.06 0.65 0.20 0.15 0.29 0.63 0.08]; B = [0.41 0.51 0.08 0.29 0.65 0.06 0.59 0.33 0.08 0.57 0.29 0.14 0.42 0.50 0.08 0.29 0.62 0.09 0.56 0.32 0.12 0.29 0.64 0.08 0.41 0.50 0.09 0.59 0.32 0.09]; disp('Call for oddsdp') npr06_10 % file npr06_10.m % Data for problem P6-10 pm = [ 0.072 0.048 0.018 0.012 0.168 0.112 0.042 0.028 ... 0.062 0.048 0.028 0.010 0.170 0.110 0.040 0.032]; c = [-5.3 -2.5 2.3 4.2 -3.7]; disp('Minterm probabilities are in pm, coefficients in c') npr06_12 % file npr06_12.m % Data for problem P6-12 pm = 0.001*[5 7 6 8 9 14 22 33 21 32 50 75 86 129 201 302]; c = [1 1 1 1 0]; disp('Minterm probabilities in pm, coefficients in c') npr06_18.m % file npr06_18.m % Data for problem P6-18 cx = [5 17 21 8 15 0]; cy = [8 15 12 18 15 12 0]; pmx = minprob(0.01*[37 22 38 81 63]); pmy = minprob(0.01*[77 52 23 41 83 58]); disp('Data in cx, cy, pmx, pmy') npr07_01 \begin{verbatim} % file npr07_01.m % Data for problem P7-1 T = [1 3 2 3 4 2 1 3 5 2]; pc = 0.01*[ 8 13 6 9 14 11 12 7 11 9]; disp('Data are in T and pc') \end{verbatim} npr07_02 % file npr07_02.m % Data for problem P7-2 T = [3.5 5.0 3.5 7.5 5.0 5.0 3.5 7.5]; pc = 0.01*[10 15 15 20 10 5 10 15]; disp('Data are in T, pc') npr08_01 % file npr08_01.m % Solution for problem P8-1 X = 0:2; Y = 0:2; Pn = [132 24 0; 864 144 6; 1260 216 6]; P = Pn/(52*51); disp('Data in Pn, P, X, Y') npr08_02 % file npr08_02.m % Solution for problem P8-2 X = 0:2; Y = 0:2; Pn = [6 0 0; 18 12 0; 6 12 2]; P = Pn/56; disp('Data are in X, Y,Pn, P') npr08_03 % file npr08_03.m % Solution for problem P8-3 X = 1:6; Y = 0:6; P0 = zeros(6,7); % Initialize for i = 1:6 % Calculate rows of Y probabilities P0(i,1:i+1) = (1/6)*ibinom(i,1/2,0:i); end P = rot90(P0); % Rotate to orient as on the plane PY = fliplr(sum(P')); % Reverse to put in normal order disp('Answers are in X, Y, P, PY') npr08_04 % file npr08_04.m % Solution for problem P8-4 X = 2:12; Y = 0:12; PX = (1/36)*[1 2 3 4 5 6 5 4 3 2 1]; P0 = zeros(11,13); for i = 1:11 P0(i,1:i+2) = PX(i)*ibinom(i+1,1/2,0:i+1); end P = rot90(P0); PY = fliplr(sum(P')); disp('Answers are in X, Y, PY, P') npr08_05 % file npr08_05.m % Data and basic calculations for P8-5 PX = (1/36)*[1 2 3 4 5 6 5 4 3 2 1]; X = 2:12; Y = 0:12; P0 = zeros(11,13); for i = 1:11 P0(i,1:i+2) = PX(i)*ibinom(i+1,1/6,0:i+1); end P = rot90(P0); PY = fliplr(sum(P')); disp('Answers are in X, Y, P, PY') npr08_06 % file Newprobs/pr08_06.m % Data for problem P8-6 (from Exam 2, 95f) P = [0.0483 0.0357 0.0420 0.0399 0.0441 0.0437 0.0323 0.0380 0.0361 0.0399 0.0713 0.0527 0.0620 0.0609 0.0551 0.0667 0.0493 0.0580 0.0651 0.0589]; X = [-2.3 -0.7 1.1 3.9 5.1]; Y = [ 1.3 2.5 4.1 5.3]; disp('Data are in X, Y, P') npr08_07 % file pr08_07.m (from Exam3, 96s) % Data for problem P8-7 X = [-3.1 -0.5 1.2 2.4 3.7 4.9]; Y = [-3.8 -2.0 4.1 7.5]; P = [ 0.0090 0.0396 0.0594 0.0216 0.0440 0.0203; 0.0495 0 0.1089 0.0528 0.0363 0.0231; 0.0405 0.1320 0.0891 0.0324 0.0297 0.0189; 0.0510 0.0484 0.0726 0.0132 0 0.0077]; disp('Data 
are in X, Y, P') npr08_08 % file Newprobs/pr08_08.m (from Exam 4 96s) % Data for problem P8-8 P = [0.0156 0.0191 0.0081 0.0035 0.0091 0.0070 0.0098 0.0056 0.0091 0.0049; 0.0064 0.0204 0.0108 0.0040 0.0054 0.0080 0.0112 0.0064 0.0104 0.0056; 0.0196 0.0256 0.0126 0.0060 0.0156 0.0120 0.0168 0.0096 0.0056 0.0084; 0.0112 0.0182 0.0108 0.0070 0.0182 0.0140 0.0196 0.0012 0.0182 0.0038; 0.0060 0.0260 0.0162 0.0050 0.0160 0.0200 0.0280 0.0060 0.0160 0.0040; 0.0096 0.0056 0.0072 0.0060 0.0256 0.0120 0.0268 0.0096 0.0256 0.0084; 0.0044 0.0134 0.0180 0.0140 0.0234 0.0180 0.0252 0.0244 0.0234 0.0126; 0.0072 0.0017 0.0063 0.0045 0.0167 0.0090 0.0026 0.0172 0.0217 0.0223]; X = 1:2:19; Y = [-5 -3 -1 3 5 9 10 12]; disp('Data are in X, Y, P') npr08_09 % file pr08_09.m (from Exam3 95f) % Data for problem P8-9 P = [0.0390 0.0110 0.0050 0.0010 0.0010; 0.0650 0.0700 0.0500 0.0150 0.0100; 0.0310 0.0610 0.1370 0.0510 0.0330; 0.0120 0.0490 0.1630 0.0580 0.0390; 0.0030 0.0090 0.0450 0.0250 0.0170]; X = [1 1.5 2 2.5 3]; Y = [1 2 3 4 5]; disp('Data are in X, Y, P') npr09_02 \begin{verbatim} % file Newprobs/npr09_02.m % Data for problem P9-2 P = [0.0589 0.0342 0.0304 0.0456 0.0209; 0.0961 0.0556 0.0498 0.0744 0.0341; 0.0682 0.0398 0.0350 0.0528 0.0242; 0.0868 0.0504 0.0448 0.0672 0.0308]; X = [-3.9 -1.7 1.5 2.8 4.1]; Y = [-2 1 2.6 5.1]; disp('Data are in X, Y, P') \end{verbatim} npr10_16 \begin{verbatim} % file npr10_16.m % Data for problem P10-16 cx = [-2 1 3 0]; pmx = 0.001*[255 25 375 45 108 12 162 18]; cy = [1 3 1 -3]; pmy = minprob(0.01*[32 56 40]); Z = [-1.3 1.2 2.7 3.4 5.8]; PZ = 0.01*[12 24 43 13 8]; disp('Data are in cx, pmx, cy, pmy, Z, PZ') \end{verbatim} npr12_10 % file npr12_10.m % Data for problems P12-10, P12_11 cx = [-3.3 -1.7 2.3 7.6 -3.4]; pmx = 0.0001*[475 725 120 180 1125 1675 280 420 480 720 130 170 1120 1680 270 430]; cy = [10 17 20 -10]; pmy = 0.01*[6 14 9 21 6 14 9 21]; disp('Data are in cx, cy, pmx and pmy') npr16_07 \begin{verbatim} % file npr16_07.m % Transition matrix for problem P16-7 P = [0.23 0.32 0.02 0.22 0.21; 0.29 0.41 0.10 0.08 0.12; 0.22 0.07 0.31 0.14 0.26; 0.32 0.15 0.05 0.33 0.15; 0.08 0.23 0.31 0.09 0.29]; disp('Transition matrix is P') \end{verbatim} npr16_09 % file npr16_09.m % Transition matrix for problem P16-9 P = [0.2 0.5 0.3 0 0 0 0; 0.6 0.1 0.3 0 0 0 0; 0.2 0.7 0.1 0 0 0 0; 0 0 0 0.6 0.4 0 0; 0 0 0 0.5 0.5 0 0; 0.1 0.3 0 0.2 0.1 0.1 0.2; 0.1 0.2 0.1 0.2 0.2 0.2 0 ]; disp('Transition matrix is P')
Probability

In this chapter, we shall first consider chance experiments with a finite number of possible outcomes $\omega_1$, $\omega_2$, …, $\omega_n$. For example, we roll a die and the possible outcomes are 1, 2, 3, 4, 5, 6 corresponding to the side that turns up. We toss a coin with possible outcomes H (heads) and T (tails).

It is frequently useful to be able to refer to an outcome of an experiment. For example, we might want to write the mathematical expression which gives the sum of four rolls of a die. To do this, we could let $X_i$, $i = 1, 2, 3, 4,$ represent the values of the outcomes of the four rolls, and then we could write the expression $X_1 + X_2 + X_3 + X_4$ for the sum of the four rolls. The $X_i$’s are called random variables. A random variable is simply an expression whose value is the outcome of a particular experiment. Just as in the case of other types of variables in mathematics, random variables can take on different values.

Let $X$ be the random variable which represents the roll of one die. We shall assign probabilities to the possible outcomes of this experiment. We do this by assigning to each outcome $\omega_j$ a nonnegative number $m(\omega_j)$ in such a way that

$m(\omega_1) + m(\omega_2) + \cdots + m(\omega_6) = 1\ .$

The function $m(\omega_j)$ is called the distribution function of the random variable $X$. For the case of the roll of the die we would assign equal probabilities or probabilities 1/6 to each of the outcomes. With this assignment of probabilities, one could write

$P(X \le 4) = {2\over 3}$

to mean that the probability is $2/3$ that a roll of a die will have a value which does not exceed 4.

Let $Y$ be the random variable which represents the toss of a coin. In this case, there are two possible outcomes, which we can label as H and T. Unless we have reason to suspect that the coin comes up one way more often than the other way, it is natural to assign the probability of 1/2 to each of the two outcomes.

In both of the above experiments, each outcome is assigned an equal probability. This would certainly not be the case in general. For example, if a drug is found to be effective 30 percent of the time it is used, we might assign a probability .3 that the drug is effective the next time it is used and .7 that it is not effective. This last example illustrates the intuitive frequency concept of probability. That is, if we have a probability $p$ that an experiment will result in outcome $A$, then if we repeat this experiment a large number of times we should expect that the fraction of times that $A$ will occur is about $p$. To check intuitive ideas like this, we shall find it helpful to look at some of these problems experimentally. We could, for example, toss a coin a large number of times and see if the fraction of times heads turns up is about 1/2. We could also simulate this experiment on a computer.

Simulation

We want to be able to perform an experiment that corresponds to a given set of probabilities; for example, $m(\omega_1) = 1/2$, $m(\omega_2) = 1/3$, and $m(\omega_3) = 1/6$. In this case, one could mark three faces of a six-sided die with an $\omega_1$, two faces with an $\omega_2$, and one face with an $\omega_3$.

In the general case we assume that $m(\omega_1)$, $m(\omega_2)$, …, $m(\omega_n)$ are all rational numbers, with least common denominator $n$. If $n > 2$, we can imagine a long cylindrical die with a cross-section that is a regular $n$-gon.
If $m(\omega_j) = n_j/n$, then we can label $n_j$ of the long faces of the cylinder with an $\omega_j$, and if one of the end faces comes up, we can just roll the die again. If $n = 2$, a coin could be used to perform the experiment.

We will be particularly interested in repeating a chance experiment a large number of times. Although the cylindrical die would be a convenient way to carry out a few repetitions, it would be difficult to carry out a large number of experiments. Since the modern computer can do a large number of operations in a very short time, it is natural to turn to the computer for this task.

Random Numbers

We must first find a computer analog of rolling a die. This is done on the computer by means of a random number generator. Depending upon the particular software package, the computer can be asked for a real number between 0 and 1, or an integer in a given set of consecutive integers. In the first case, the real numbers are chosen in such a way that the probability that the number lies in any particular subinterval of this unit interval is equal to the length of the subinterval. In the second case, each integer has the same probability of being chosen.

Let $X$ be a random variable with distribution function $m(\omega)$, where $\omega$ is in the set $\{\omega_1, \omega_2, \omega_3\}$, and $m(\omega_1) = 1/2$, $m(\omega_2) = 1/3$, and $m(\omega_3) = 1/6$. If our computer package can return a random integer in the set $\{1, 2, ..., 6\}$, then we simply ask it to do so, and make 1, 2, and 3 correspond to $\omega_1$, 4 and 5 correspond to $\omega_2$, and 6 correspond to $\omega_3$. If our computer package returns a random real number $r$ in the interval $(0,~1)$, then the expression $\lfloor {6r}\rfloor + 1$ will be a random integer between 1 and 6. (The notation $\lfloor x \rfloor$ means the greatest integer not exceeding $x$, and is read “floor of $x$.")

The method by which random real numbers are generated on a computer is described in the historical discussion at the end of this section. The following example gives sample output of the program RandomNumbers.

Example $1$: Random Number Generation

The program RandomNumbers generates $n$ random real numbers in the interval $[0, 1]$, where $n$ is chosen by the user. When we ran the program with $n = 20$, we obtained the data shown in Table $1$.

Table $1$: Sample output of the program RandomNumbers.
.203309 .762057 .151121 .623868 .932052
.415178 .716719 .967412 .069664 .670982
.352320 .049723 .750216 .784810 .089734
.966730 .946708 .380365 .027381 .900794

Example $2$: Coin Tossing

As we have noted, our intuition suggests that the probability of obtaining a head on a single toss of a coin is 1/2. To have the computer toss a coin, we can ask it to pick a random real number in the interval $[0, 1]$ and test to see if this number is less than 1/2. If so, we shall call the outcome heads; if not we call it tails. Another way to proceed would be to ask the computer to pick a random integer from the set $\{0, 1\}$. The program CoinTosses carries out the experiment of tossing a coin $n$ times. Running this program, with $n = 20$, resulted in:

$THTTTHTTTTHTTTTTHHTT$

Note that in 20 tosses, we obtained 5 heads and 15 tails. Let us toss a coin $n$ times, where $n$ is much larger than 20, and see if we obtain a proportion of heads closer to our intuitive guess of 1/2. The program CoinTosses keeps track of the number of heads. When we ran this program with $n = 1000$, we obtained 494 heads. When we ran it with $n = 10000$, we obtained 5039 heads.
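The source of CoinTosses is not reproduced here, but the procedure just described takes only a few lines. The following MATLAB fragment is an illustrative sketch (the variable names are ours), using rand as the random number generator:

n = 10000;
r = rand(1, n);          % n random reals in (0,1)
heads = sum(r < 0.5);    % call an outcome heads when the number is less than 1/2
disp(heads/n)            % proportion of heads; we expect a value near .5
d = floor(6*rand) + 1;   % the same generator gives a random integer 1..6 for a die roll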
We notice that when we tossed the coin 10,000 times, the proportion of heads was close to the “true value” .5 for obtaining a head when a coin is tossed. A mathematical model for this experiment is called Bernoulli Trials (see Chapter 3). The Law of Large Numbers, which we shall study later (see Chapter 8), will show that in the Bernoulli Trials model, the proportion of heads should be near .5, consistent with our intuitive idea of the frequency interpretation of probability. Of course, our program could be easily modified to simulate coins for which the probability of a head is $p$, where $p$ is a real number between 0 and 1.

In the case of coin tossing, we already knew the probability of the event occurring on each experiment. The real power of simulation comes from the ability to estimate probabilities when they are not known ahead of time. This method has been used in the recent discoveries of strategies that make the casino game of blackjack favorable to the player. We illustrate this idea in a simple situation in which we can compute the true probability and see how effective the simulation is.

Example $3$: Dice Rolling

We consider a dice game that played an important role in the historical development of probability. The famous letters between Pascal and Fermat, which many believe started a serious study of probability, were instigated by a request for help from a French nobleman and gambler, Chevalier de Méré. It is said that de Méré had been betting that, in four rolls of a die, at least one six would turn up. He was winning consistently and, to get more people to play, he changed the game to bet that, in 24 rolls of two dice, a pair of sixes would turn up. It is claimed that de Méré lost with 24 and felt that 25 rolls were necessary to make the game favorable. It was un grand scandale that mathematics was wrong.

We shall try to see if de Méré is correct by simulating his various bets. The program DeMere1 simulates a large number of experiments, seeing, in each one, if a six turns up in four rolls of a die. When we ran this program for 1000 plays, a six came up in the first four rolls 48.6 percent of the time. When we ran it for 10,000 plays this happened 51.98 percent of the time.

We note that the result of the second run suggests that de Méré was correct in believing that his bet with one die was favorable; however, if we had based our conclusion on the first run, we would have decided that he was wrong. Accurate results by simulation require a large number of experiments. The program DeMere2 simulates de Méré’s second bet that a pair of sixes will occur in $n$ rolls of a pair of dice.

The previous simulation shows that it is important to know how many trials we should simulate in order to expect a certain degree of accuracy in our approximation. We shall see later that in these types of experiments, a rough rule of thumb is that, at least 95% of the time, the error does not exceed the reciprocal of the square root of the number of trials.

Fortunately, for this dice game, it will be easy to compute the exact probabilities. We shall show in the next section that for the first bet the probability that de Méré wins is $1 - (5/6)^4 = .518$.

One can understand this calculation as follows: The probability that no 6 turns up on the first toss is $(5/6)$. The probability that no 6 turns up on either of the first two tosses is $(5/6)^2$. Reasoning in the same way, the probability that no 6 turns up on any of the first four tosses is $(5/6)^4$. Thus, the probability of at least one 6 in the first four tosses is $1 - (5/6)^4$.
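A few lines of MATLAB are enough to check this value by simulation, in the spirit of DeMere1 (whose source is not listed here; this fragment and its variable names are only an illustrative sketch):

plays = 10000;
rolls = floor(6*rand(plays, 4)) + 1;    % four die rolls for each play
wins = sum(any(rolls == 6, 2));         % plays in which at least one six appears
disp([wins/plays, 1 - (5/6)^4])         % simulated proportion vs the exact value .518

The same idea, with 24 columns of rolls for each of two dice, handles the second bet.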
Similarly, for the second bet, with 24 rolls, the probability that de Méré wins is $1 - (35/36)^{24} = .491$, and for 25 rolls it is $1 - (35/36)^{25} = .506$. Using the rule of thumb mentioned above, it would require 27,000 rolls to have a reasonable chance to determine these probabilities with sufficient accuracy to assert that they lie on opposite sides of .5. It is interesting to ponder whether a gambler can detect such probabilities with the required accuracy from gambling experience. Some writers on the history of probability suggest that de Méré was, in fact, just interested in these problems as intriguing probability problems.

Example $4$: Heads or Tails

For our next example, we consider a problem where the exact answer is difficult to obtain but for which simulation easily gives the qualitative results. Peter and Paul play a game called heads or tails. In this game, a fair coin is tossed a sequence of times—we choose 40. Each time a head comes up Peter wins 1 penny from Paul, and each time a tail comes up Peter loses 1 penny to Paul. For example, if the results of the 40 tosses are $\text{THTHHHHTTHTHHTTHHTTTTHHHTHHTHHHTHHHTTTHH}$ Peter’s winnings may be graphed as in Figure $1$. Peter has won 6 pennies in this particular game. It is natural to ask for the probability that he will win $j$ pennies; here $j$ could be any even number from $-40$ to $40$. It is reasonable to guess that the value of $j$ with the highest probability is $j = 0$, since this occurs when the number of heads equals the number of tails. Similarly, we would guess that the values of $j$ with the lowest probabilities are $j = \pm 40$.

A second interesting question about this game is the following: How many times in the 40 tosses will Peter be in the lead? Looking at the graph of his winnings (Figure $1$), we see that Peter is in the lead when his winnings are positive, but we have to make some convention when his winnings are 0 if we want all tosses to contribute to the number of times in the lead. We adopt the convention that, when Peter’s winnings are 0, he is in the lead if he was ahead at the previous toss and not if he was behind at the previous toss. With this convention, Peter is in the lead 34 times in our example. Again, our intuition might suggest that the most likely number of times to be in the lead is 1/2 of 40, or 20, and the least likely numbers are the extreme cases of 40 or 0.

It is easy to settle this by simulating the game a large number of times and keeping track of the number of times that Peter’s final winnings are $j$, and the number of times that Peter ends up being in the lead $k$ times. The proportions over all games then give estimates for the corresponding probabilities. The program HTSimulation carries out this simulation. Note that when there are an even number of tosses in the game, it is possible to be in the lead only an even number of times. We have simulated this game 10,000 times. The results are shown in Figures $2$ and $3$. These graphs, which we call spike graphs, were generated using the program Spikegraph. The vertical line, or spike, at position $x$ on the horizontal axis, has a height equal to the proportion of outcomes which equal $x$. Our intuition about Peter’s final winnings was quite correct, but our intuition about the number of times Peter was in the lead was completely wrong. The simulation suggests that the least likely number of times in the lead is 20 and the most likely is 0 or 40.
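The sketch below is a minimal Python stand-in for HTSimulation (the function names and the choice of 10,000 games are ours). It plays the 40-toss game repeatedly, recording Peter's final winnings and the number of tosses on which he is in the lead, using the convention for zero winnings described above.

```python
import random

def play_game(num_tosses=40):
    """Play one game of heads or tails; return (final winnings, times in the lead)."""
    winnings = 0
    times_in_lead = 0
    in_lead = False          # at winnings 0, carry over the state from the previous toss
    for _ in range(num_tosses):
        winnings += 1 if random.random() < 0.5 else -1
        if winnings > 0:
            in_lead = True
        elif winnings < 0:
            in_lead = False
        # when winnings == 0, in_lead keeps its previous value (the convention above)
        if in_lead:
            times_in_lead += 1
    return winnings, times_in_lead

final_counts, lead_counts = {}, {}
for _ in range(10000):
    w, k = play_game()
    final_counts[w] = final_counts.get(w, 0) + 1
    lead_counts[k] = lead_counts.get(k, 0) + 1

print("most common final winnings:", max(final_counts, key=final_counts.get))
print("most common number of times in the lead:", max(lead_counts, key=lead_counts.get))
```

A typical run makes 0 the most common value of the final winnings, while the most common number of times in the lead tends to be one of the extremes, 0 or 40.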
This is indeed correct, and the explanation for it is suggested by playing the game of heads or tails with a large number of tosses and looking at a graph of Peter’s winnings. In Figure $4$ we show the results of a simulation of the game for 1000 tosses, and in Figure $5$ for 10,000 tosses. In the second example Peter was ahead most of the time. It is a remarkable fact, however, that, if play is continued long enough, Peter’s winnings will continue to come back to 0, but there will be very long times between the times that this happens. These and related results will be discussed in Chapter 12.

In all of our examples so far, we have simulated equiprobable outcomes. We illustrate next an example where the outcomes are not equiprobable.

Example $5$: Horse Races

Four horses (Acorn, Balky, Chestnut, and Dolby) have raced many times. It is estimated that Acorn wins 30 percent of the time, Balky 40 percent of the time, Chestnut 20 percent of the time, and Dolby 10 percent of the time. We can have our computer carry out one race as follows: Choose a random number $x$. If $x < .3$ then we say that Acorn won. If $.3 \le x < .7$ then Balky wins. If $.7 \le x < .9$ then Chestnut wins. Finally, if $.9 \le x$ then Dolby wins. The program HorseRace uses this method to simulate the outcomes of $n$ races. Running this program for $n = 10$ we found that Acorn won 40 percent of the time, Balky 20 percent of the time, Chestnut 10 percent of the time, and Dolby 30 percent of the time. A larger number of races would be necessary to have better agreement with the past experience. Therefore we ran the program to simulate 1000 races with our four horses. Although very tired after all these races, they performed in a manner quite consistent with our estimates of their abilities. Acorn won 29.8 percent of the time, Balky 39.4 percent, Chestnut 19.5 percent, and Dolby 11.3 percent of the time. The program GeneralSimulation uses this method to simulate repetitions of an arbitrary experiment with a finite number of outcomes occurring with known probabilities.

Historical Remarks

Anyone who plays the same chance game over and over is really carrying out a simulation, and in this sense the process of simulation has been going on for centuries. As we have remarked, many of the early problems of probability might well have been suggested by gamblers’ experiences. It is natural for anyone trying to understand probability theory to try simple experiments by tossing coins, rolling dice, and so forth. The naturalist Buffon tossed a coin 4040 times, resulting in 2048 heads and 1992 tails. He also estimated the number $\pi$ by throwing needles on a ruled surface and recording how many times the needles crossed a line. The English biologist W. F. R. Weldon1 recorded 26,306 throws of 12 dice, and the Swiss scientist Rudolf Wolf2 recorded 100,000 throws of a single die without a computer. Such experiments are very time-consuming and may not accurately represent the chance phenomena being studied. For example, for the dice experiments of Weldon and Wolf, further analysis of the recorded data showed a suspected bias in the dice. The statistician Karl Pearson analyzed a large number of outcomes at certain roulette tables and suggested that the wheels were biased. He wrote in 1894:

Clearly, since the Casino does not serve the valuable end of huge laboratory for the preparation of probability statistics, it has no scientific raison d’être. Men of science cannot have their most refined theories disregarded in this shameless manner!
The French Government must be urged by the hierarchy of science to close the gaming-saloons; it would be, of course, a graceful act to hand over the remaining resources of the Casino to the Académie des Sciences for the endowment of a laboratory of orthodox probability; in particular, of the new branch of that study, the application of the theory of chance to the biological problems of evolution, which is likely to occupy so much of men’s thoughts in the near future.3

However, these early experiments were suggestive and led to important discoveries in probability and statistics. They led Pearson to the chi-squared test, which is of great importance in testing whether observed data fit a given probability distribution.

By the early 1900s it was clear that a better way to generate random numbers was needed. In 1927, L. H. C. Tippett published a list of 41,600 digits obtained by selecting numbers haphazardly from census reports. In 1955, RAND Corporation printed a table of 1,000,000 random numbers generated from electronic noise. The advent of the high-speed computer raised the possibility of generating random numbers directly on the computer, and in the late 1940s John von Neumann suggested that this be done as follows: Suppose that you want a random sequence of four-digit numbers. Choose any four-digit number, say 6235, to start. Square this number to obtain 38,875,225. For the second number choose the middle four digits of this square (i.e., 8752). Do the same process starting with 8752 to get the third number, and so forth.

More modern methods involve the concept of modular arithmetic. If $a$ is an integer and $m$ is a positive integer, then by $a\ (\mbox{mod}\ m)$ we mean the remainder when $a$ is divided by $m$. For example, $10\ ( \mbox{mod}\ 4) = 2$, $8\ (\mbox{mod}\ 2) = 0$, and so forth. To generate a random sequence $X_0, X_1, X_2, \dots$ of numbers choose a starting number $X_0$ and then obtain the numbers $X_{n+1}$ from $X_n$ by the formula $X_{n+1} = (aX_n + c)\ (\mbox{mod}\ m)\ ,$ where $a$, $c$, and $m$ are carefully chosen constants. The sequence $X_0, X_1,$ $X_2, \dots$ is then a sequence of integers between 0 and $m-1$. To obtain a sequence of real numbers in $[0,1)$, we divide each $X_j$ by $m$. The resulting sequence consists of rational numbers of the form $j/m$, where $0 \leq j \leq m-1$. Since $m$ is usually a very large integer, we think of the numbers in the sequence as being random real numbers in $[0, 1)$.

For both von Neumann’s squaring method and the modular arithmetic technique the sequence of numbers is actually completely determined by the first number. Thus, there is nothing really random about these sequences. However, they produce numbers that behave very much as theory would predict for random experiments. To obtain different sequences for different experiments the initial number $X_0$ is chosen by some other procedure that might involve, for example, the time of day.4

During the Second World War, physicists at the Los Alamos Scientific Laboratory needed to know, for purposes of shielding, how far neutrons travel through various materials. This question was beyond the reach of theoretical calculations.
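Before continuing with the Los Alamos story, here is a minimal Python sketch of the two generators just described: von Neumann's middle-square method and a linear congruential generator. The particular constants $a$, $c$, and $m$ in the example call are only illustrative choices, not ones prescribed by the text, and the function names are our own.

```python
def middle_square(seed, count):
    """von Neumann's middle-square method for four-digit numbers."""
    values = []
    x = seed
    for _ in range(count):
        sq = str(x * x).zfill(8)       # pad the square out to eight digits
        x = int(sq[2:6])               # keep the middle four digits
        values.append(x)
    return values

def linear_congruential(x0, a, c, m, count):
    """X_{n+1} = (a X_n + c) mod m; dividing each term by m gives reals in [0, 1)."""
    values = []
    x = x0
    for _ in range(count):
        x = (a * x + c) % m
        values.append(x / m)
    return values

print(middle_square(6235, 5))                        # first value is 8752, as in the text
print(linear_congruential(12345, 1103515245, 12345, 2**31, 5))
```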
Daniel McCracken, writing in the Scientific American, states:

The physicists had most of the necessary data: they knew the average distance a neutron of a given speed would travel in a given substance before it collided with an atomic nucleus, what the probabilities were that the neutron would bounce off instead of being absorbed by the nucleus, how much energy the neutron was likely to lose after a given collision and so on.5

John von Neumann and Stanislas Ulam suggested that the problem be solved by modeling the experiment by chance devices on a computer. Their work being secret, it was necessary to give it a code name. Von Neumann chose the name “Monte Carlo." Since that time, this method of simulation has been called the Monte Carlo Method.

William Feller indicated the possibilities of using computer simulations to illustrate basic concepts in probability in his book An Introduction to Probability Theory and Its Applications. In discussing the problem about the number of times in the lead in the game of “heads or tails" Feller writes:

The results concerning fluctuations in coin tossing show that widely held beliefs about the law of large numbers are fallacious. These results are so amazing and so at variance with common intuition that even sophisticated colleagues doubted that coins actually misbehave as theory predicts. The record of a simulated experiment is therefore included.6

Feller provides a plot showing the result of 10,000 plays of heads or tails similar to that in Figure $5$.

The martingale betting system described in Exercise $10$ has a long and interesting history. Russell Barnhart pointed out to the authors that its use can be traced back at least to 1754, when Casanova, writing in his memoirs, History of My Life, writes:

She [Casanova’s mistress] made me promise to go to the casino [the Ridotto in Venice] for money to play in partnership with her. I went there and took all the gold I found, and, determinedly doubling my stakes according to the system known as the martingale, I won three or four times a day during the rest of the Carnival. I never lost the sixth card. If I had lost it, I should have been out of funds, which amounted to two thousand zecchini.7

Even if there were no zeros on the roulette wheel so the game was perfectly fair, the martingale system, or any other system for that matter, cannot make the game into a favorable game. The idea that a fair game remains fair and unfair games remain unfair under gambling systems has been exploited by mathematicians to obtain important results in the study of probability. We will introduce the general concept of a martingale in Chapter 6.

The word itself also has an interesting history. The origin of the word is obscure. A recent version of the Oxford English Dictionary gives examples of its use in the early 1600s and says that its probable origin is the reference in Rabelais’s Book One, Chapter 20:

Everything was done as planned, the only thing being that Gargantua doubted if they would be able to find, right away, breeches suitable to the old fellow’s legs; he was doubtful, also, as to what cut would be most becoming to the orator—the martingale, which has a draw-bridge effect in the seat, to permit doing one’s business more easily; the sailor-style, which affords more comfort for the kidneys; the Swiss, which is warmer on the belly; or the codfish-tail, which is cooler on the loins.8

Dominic Lusinchi noted an earlier occurrence of the word martingale. According to the French dictionary Le Petit Robert, the word comes from the Provençal word “martegalo," which means “from Martigues." Martigues is a town due west of Marseille.
The dictionary gives the example of “chausses à la martingale" (which means Martigues-style breeches) and the date 1491. In modern uses martingale has several different meanings, all related to holding down, in addition to the gambling use. For example, it is a strap on a horse’s harness used to hold down the horse’s head, and also part of a sailing rig used to hold down the bowsprit.

The Labouchere system described in Exercise $9$ is named after Henry du Pre Labouchere (1831–1912), an English journalist and member of Parliament. Labouchere attributed the system to Condorcet. Condorcet (1743–1794) was a political leader during the time of the French revolution who was interested in applying probability theory to economics and politics. For example, he calculated the probability that a jury using majority vote will give a correct decision if each juror has the same probability of deciding correctly. His writings provided a wealth of ideas on how probability might be applied to human affairs.9

Exercise $1$ Modify the program CoinTosses to toss a coin $n$ times and print out after every 100 tosses the proportion of heads minus 1/2. Do these numbers appear to approach 0 as $n$ increases? Modify the program again to print out, every 100 times, both of the following quantities: the proportion of heads minus 1/2, and the number of heads minus half the number of tosses. Do these numbers appear to approach 0 as $n$ increases?

Exercise $2$ Modify the program CoinTosses so that it tosses a coin $n$ times and records whether or not the proportion of heads is within .1 of .5 (i.e., between .4 and .6). Have your program repeat this experiment 100 times. About how large must $n$ be so that approximately 95 out of 100 times the proportion of heads is between .4 and .6?

Exercise $3$ In the early 1600s, Galileo was asked to explain the fact that, although the number of triples of integers from 1 to 6 with sum 9 is the same as the number of such triples with sum 10, when three dice are rolled, a 9 seemed to come up less often than a 10—supposedly in the experience of gamblers. 1. Write a program to simulate the roll of three dice a large number of times and keep track of the proportion of times that the sum is 9 and the proportion of times it is 10. 2. Can you conclude from your simulations that the gamblers were correct?

Exercise $4$ In racquetball, a player continues to serve as long as she is winning; a point is scored only when a player is serving and wins the volley. The first player to win 21 points wins the game. Assume that you serve first and have a probability .6 of winning a volley when you serve and probability .5 when your opponent serves. Estimate, by simulation, the probability that you will win a game.

Exercise $5$ Consider the bet that all three dice will turn up sixes at least once in $n$ rolls of three dice. Calculate $f(n)$, the probability of at least one triple-six when three dice are rolled $n$ times. Determine the smallest value of $n$ necessary for a favorable bet that a triple-six will occur when three dice are rolled $n$ times. (DeMoivre would say it should be about $216\log 2 = 149.7$ and so would answer 150—see Exercise $16$. Do you agree with him?)

Exercise $6$ In Las Vegas, a roulette wheel has 38 slots numbered 0, 00, 1, 2, …, 36. The 0 and 00 slots are green and half of the remaining 36 slots are red and half are black. A croupier spins the wheel and throws in an ivory ball. If you bet 1 dollar on red, you win 1 dollar if the ball stops in a red slot and otherwise you lose 1 dollar.
Write a program to find the total winnings for a player who makes 1000 bets on red.

Exercise $7$ Another form of bet for roulette is to bet that a specific number (say 17) will turn up. If the ball stops on your number, you get your dollar back plus 35 dollars. If not, you lose your dollar. Write a program that will plot your winnings when you make 500 plays of roulette at Las Vegas, first when you bet each time on red (see Exercise $6$), and then for a second visit to Las Vegas when you bet each time on 17.

Exercise $8$ An astute student noticed that, in our simulation of the game of heads or tails (see Example $4$), the proportion of times the player is always in the lead is very close to the proportion of times that the player’s total winnings end up 0. Work out these probabilities by enumeration of all cases for two tosses and for four tosses, and see if you think that these probabilities are, in fact, the same.

Exercise $9$ The Labouchere system for roulette is played as follows. Write down a list of numbers, usually 1, 2, 3, 4. Bet the sum of the first and last, $1 + 4 = 5$, on red. If you win, delete the first and last numbers from your list. If you lose, add the amount that you last bet to the end of your list. Then use the new list and bet the sum of the first and last numbers (if there is only one number, bet that amount). Continue until your list becomes empty. Show that, if this happens, you win the sum, $1 + 2 + 3 + 4 = 10$, of your original list. Simulate this system and see if you do always stop and, hence, always win. If so, why is this not a foolproof gambling system?

Exercise $10$ Another well-known gambling system is the martingale doubling system. Suppose that you are betting on red to turn up in roulette. Every time you win, bet 1 dollar next time. Every time you lose, double your previous bet. Suppose that you use this system until you have won at least 5 dollars or you have lost more than 100 dollars. Write a program to simulate this and play it a number of times and see how you do. In his book The Newcomes, W. M. Thackeray remarks “You have not played as yet? Do not do so; above all avoid a martingale if you do."10 Was this good advice?

Exercise $11$ Modify the program HTSimulation so that it keeps track of the maximum of Peter’s winnings in each game of 40 tosses. Have your program print out the proportion of times that your total winnings take on values $0,\ 2,\ 4,\ \dots,\ 40$. Calculate the corresponding exact probabilities for games of two tosses and four tosses.

Exercise $12$ In an upcoming national election for the President of the United States, a pollster plans to predict the winner of the popular vote by taking a random sample of 1000 voters and declaring that the winner will be the one obtaining the most votes in his sample. Suppose that 48 percent of the voters plan to vote for the Republican candidate and 52 percent plan to vote for the Democratic candidate. To get some idea of how reasonable the pollster’s plan is, write a program to make this prediction by simulation. Repeat the simulation 100 times and see how many times the pollster’s prediction would come true. Repeat your experiment, assuming now that 49 percent of the population plan to vote for the Republican candidate; first with a sample of 1000 and then with a sample of 3000. (The Gallup Poll uses about 3000.) (This idea is discussed further in Chapter 9, Section 9.1.)

Exercise $13$ The psychologist Tversky and his colleagues11 say that about four out of five people will answer (a) to the following question: A certain town is served by two hospitals.
In the larger hospital about 45 babies are born each day, and in the smaller hospital 15 babies are born each day. Although the overall proportion of boys is about 50 percent, the actual proportion at either hospital may be more or less than 50 percent on any day. At the end of a year, which hospital will have the greater number of days on which more than 60 percent of the babies born were boys? 1. the large hospital 2. the small hospital 3. neither—the number of days will be about the same. Assume that the probability that a baby is a boy is .5 (actual estimates make this more like .513). Decide, by simulation, what the right answer is to the question. Can you suggest why so many people go wrong? Exercise $14$ You are offered the following game. A fair coin will be tossed until the first time it comes up heads. If this occurs on the $j$th toss you are paid $2^j$ dollars. You are sure to win at least 2 dollars so you should be willing to pay to play this game—but how much? Few people would pay as much as 10 dollars to play this game. See if you can decide, by simulation, a reasonable amount that you would be willing to pay, per game, if you will be allowed to make a large number of plays of the game. Does the amount that you would be willing to pay per game depend upon the number of plays that you will be allowed? Exercise $15$ Tversky and his colleagues12 studied the records of 48 of the Philadelphia 76ers basketball games in the 1980–81 season to see if a player had times when he was hot and every shot went in, and other times when he was cold and barely able to hit the backboard. The players estimated that they were about 25 percent more likely to make a shot after a hit than after a miss. In fact, the opposite was true—the 76ers were 6 percent more likely to score after a miss than after a hit. Tversky reports that the number of hot and cold streaks was about what one would expect by purely random effects. Assuming that a player has a fifty-fifty chance of making a shot and makes 20 shots a game, estimate by simulation the proportion of the games in which the player will have a streak of 5 or more hits. Exercise $16$ Estimate, by simulation, the average number of children there would be in a family if all people had children until they had a boy. Do the same if all people had children until they had at least one boy and at least one girl. How many more children would you expect to find under the second scheme than under the first in 100,000 families? (Assume that boys and girls are equally likely.) Exercise $17$ Mathematicians have been known to get some of the best ideas while sitting in a cafe, riding on a bus, or strolling in the park. In the early 1900s the famous mathematician George Pólya lived in a hotel near the woods in Zurich. He liked to walk in the woods and think about mathematics. Pólya describes the following incident: At the hotel there lived also some students with whom I usually took my meals and had friendly relations. On a certain day one of them expected the visit of his fiancée, what (sic) I knew, but I did not foresee that he and his fiancée would also set out for a stroll in the woods, and then suddenly I met them there. And then I met them the same morning repeatedly, I don’t remember how many times, but certainly much too often and I felt embarrassed: It looked as if I was snooping around which was, I assure you, not the case.13 This set him to thinking about whether random walkers were destined to meet. 
Pólya considered random walkers in one, two, and three dimensions. In one dimension, he envisioned the walker on a very long street. At each intersection the walker flips a fair coin to decide which direction to walk next (see Figure $6a$). In two dimensions, the walker is walking on a grid of streets, and at each intersection he chooses one of the four possible directions with equal probability (see Figure $6b$). In three dimensions (we might better speak of a random climber), the walker moves on a three-dimensional grid, and at each intersection there are now six different directions that the walker may choose, each with equal probability (see Figure $6c$). The reader is referred to Section 12.1 where this and related problems are discussed.

1. Write a program to simulate a random walk in one dimension starting at 0. Have your program print out the lengths of the times between returns to the starting point (returns to 0). See if you can guess from this simulation the answer to the following question: Will the walker always return to his starting point eventually or might he drift away forever?

2. The paths of two walkers in two dimensions who meet after $n$ steps can be considered to be a single path that starts at $(0, 0)$ and returns to $(0, 0)$ after $2n$ steps. This means that the probability that two random walkers in two dimensions meet is the same as the probability that a single walker in two dimensions ever returns to the starting point. Thus the question of whether two walkers are sure to meet is the same as the question of whether a single walker is sure to return to the starting point.

3. Write a program to simulate a random walk in two dimensions and see if you think that the walker is sure to return to $(0,0)$. If so, Pólya would be sure to keep meeting his friends in the park. Perhaps by now you have conjectured the answer to the question: Is a random walker in one or two dimensions sure to return to the starting point? Pólya answered this question for dimensions one, two, and three. He established the remarkable result that the answer is yes in one and two dimensions and no in three dimensions.

Pólya, “Two Incidents," Scientists at Work: Festschrift in Honour of Herman Wold, ed. T. Dalenius, G. Karlsson, and S. Malmquist (Uppsala: Almquist & Wiksells Boktryckeri AB, 1970).
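A minimal Python sketch for parts 1 and 3 of this exercise might look like the following. The function names and the cap on the number of steps are our own choices; a finite simulation can suggest, but of course cannot prove, the recurrence behavior that Pólya established.

```python
import random

def first_return_1d(max_steps=10**6):
    """Steps until a one-dimensional walker first returns to 0 (None if not seen)."""
    position = 0
    for step in range(1, max_steps + 1):
        position += random.choice((-1, 1))
        if position == 0:
            return step
    return None

def first_return_2d(max_steps=10**6):
    """Steps until a two-dimensional walker first returns to (0, 0) (None if not seen)."""
    x = y = 0
    moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
    for step in range(1, max_steps + 1):
        dx, dy = random.choice(moves)
        x, y = x + dx, y + dy
        if x == 0 and y == 0:
            return step
    return None

print("1-d return times:", [first_return_1d() for _ in range(10)])
print("2-d return times:", [first_return_2d() for _ in range(10)])
```

Typical runs show that returns do occur, but that the waiting times between returns can be extremely long, which is consistent with the discussion of Peter's winnings earlier in this section.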
In this book we shall study many different experiments from a probabilistic point of view. What is involved in this study will become evident as the theory is developed and examples are analyzed. However, the overall idea can be described and illustrated as follows: to each experiment that we consider there will be associated a random variable, which represents the outcome of any particular experiment. The set of possible outcomes is called the sample space. In the first part of this section, we will consider the case where the experiment has only finitely many possible outcomes, i.e., the sample space is finite. We will then generalize to the case that the sample space is either finite or countably infinite. This leads us to the following definition.

Random Variables and Sample Spaces

Definition $1$

Suppose we have an experiment whose outcome depends on chance. We represent the outcome of the experiment by a capital Roman letter, such as $X$, called a random variable. The sample space of the experiment is the set of all possible outcomes. If the sample space is either finite or countably infinite, the random variable is said to be discrete.

We generally denote a sample space by the capital Greek letter $\Omega$. As stated above, in the correspondence between an experiment and the mathematical theory by which it is studied, the sample space $\Omega$ corresponds to the set of possible outcomes of the experiment. We now make two additional definitions. These are subsidiary to the definition of sample space and serve to make precise some of the common terminology used in conjunction with sample spaces. First of all, we define the elements of a sample space to be outcomes. Second, each subset of a sample space is defined to be an event. Normally, we shall denote outcomes by lower case letters and events by capital letters.

Example $1$

A die is rolled once. We let $X$ denote the outcome of this experiment. Then the sample space for this experiment is the 6-element set $\Omega = \{1,2,3,4,5,6\}\ ,$ where each outcome $i$, for $i = 1$, …, 6, corresponds to the number of dots on the face which turns up. The event $E = \{2,4,6\}$ corresponds to the statement that the result of the roll is an even number. The event $E$ can also be described by saying that $X$ is even. Unless there is reason to believe the die is loaded, the natural assumption is that every outcome is equally likely. Adopting this convention means that we assign a probability of 1/6 to each of the six outcomes, i.e., $m(i) = 1/6$, for $1 \le i \le 6$.

Distribution Functions

We next describe the assignment of probabilities. The definitions are motivated by the example above, in which we assigned to each outcome of the sample space a nonnegative number such that the sum of the numbers assigned is equal to 1.

Definition $2$

Let $X$ be a random variable which denotes the value of the outcome of a certain experiment, and assume that this experiment has only finitely many possible outcomes. Let $\Omega$ be the sample space of the experiment (i.e., the set of all possible values of $X$, or equivalently, the set of all possible outcomes of the experiment.) A distribution function for $X$ is a real-valued function $m$ whose domain is $\Omega$ and which satisfies: 1. $m(\omega) \geq 0$ for all $\omega\in\Omega$, and 2. $\sum_{\omega \in \Omega} m(\omega) = 1$. For any subset $E$ of $\Omega$, we define the probability of $E$ to be the number $P(E)$ given by $P(E) = \sum_{\omega\in E} m(\omega) .$

Example $2$

Consider an experiment in which a coin is tossed twice.
Let $X$ be the random variable which corresponds to this experiment. We note that there are several ways to record the outcomes of this experiment. We could, for example, record the two tosses, in the order in which they occurred. In this case, we have $\Omega =${HH,HT,TH,TT}. We could also record the outcomes by simply noting the number of heads that appeared. In this case, we have $\Omega =${0,1,2}. Finally, we could record the two outcomes, without regard to the order in which they occurred. In this case, we have $\Omega =${HH,HT,TT}. We will use, for the moment, the first of the sample spaces given above. We will assume that all four outcomes are equally likely, and define the distribution function $m(\omega)$ by $m(\mbox{HH}) = m(\mbox{HT}) = m(\mbox{TH}) = m(\mbox{TT}) = \frac14\ .$ Let $E =${HH,HT,TH} be the event that at least one head comes up. Then, the probability of $E$ can be calculated as follows: \begin{aligned} P(E) &=& m(\mbox{HH}) + m(\mbox{HT}) + m(\mbox{TH}) \ &=& \frac14 + \frac14 + \frac14 = \frac34\ .\end{aligned} Similarly, if $F =${HH,HT} is the event that heads comes up on the first toss, then we have \begin{aligned} P(F) &=& m(\mbox{HH}) + m(\mbox{HT}) \ &=& \frac14 + \frac14 = \frac12\ .\end{aligned} Example $3$ The sample space for the experiment in which the die is rolled is the 6-element set $\Omega = \{1,2,3,4,5,6\}$. We assumed that the die was fair, and we chose the distribution function defined by $m(i) = \frac16, \qquad {\rm{for}}\,\, i = 1, \dots, 6\ .$ If $E$ is the event that the result of the roll is an even number, then $E = \{2,4,6\}$ and \begin{aligned} P(E) &=& m(2) + m(4) + m(6) \ &=& \frac16 + \frac16 + \frac16 = \frac12\ .\end{aligned} Notice that it is an immediate consequence of the above definitions that, for every $\omega \in \Omega$, $P(\{\omega\}) = m(\omega)\ .$ That is, the probability of the elementary event $\{\omega\}$, consisting of a single outcome $\omega$, is equal to the value $m(\omega)$ assigned to the outcome $\omega$ by the distribution function. Example $4$ Three people, A, B, and C, are running for the same office, and we assume that one and only one of them wins. The sample space may be taken as the 3-element set $\Omega =${A,B,C} where each element corresponds to the outcome of that candidate’s winning. Suppose that A and B have the same chance of winning, but that C has only 1/2 the chance of A or B. Then we assign $m(\mbox{A}) = m(\mbox{B}) = 2m(\mbox{C})\ .$ Since $m(\mbox{A}) + m(\mbox{B}) + m(\mbox{C}) = 1\ ,$ we see that $2m(\mbox{C}) + 2m(\mbox{C}) + m(\mbox{C}) = 1\ ,$ which implies that $5m(\mbox{C}) = 1$. Hence, $m(\mbox{A}) = \frac25\ , \qquad m(\mbox{B}) = \frac25\ , \qquad m(\mbox{C}) = \frac15\ .$ Let $E$ be the event that either A or C wins. Then $E =${A,C}, and $P(E) = m(\mbox{A}) + m(\mbox{C}) = \frac25 + \frac15 = \frac35\ .$ In many cases, events can be described in terms of other events through the use of the standard constructions of set theory. We will briefly review the definitions of these constructions. The reader is referred to Figure [fig 1.6] for Venn diagrams which illustrate these constructions. Let $A$ and $B$ be two sets. 
Then the union of $A$ and $B$ is the set $A \cup B = \{x\,|\, x \in A\ \mbox{or}\ x \in B\}\ .$ The intersection of $A$ and $B$ is the set $A \cap B = \{x\,|\, x \in A\ \mbox{and}\ x \in B\}\ .$ The difference of $A$ and $B$ is the set $A - B = \{x\,|\, x \in A\ \mbox{and}\ x \not \in B\}\ .$ The set $A$ is a subset of $B$, written $A \subset B$, if every element of $A$ is also an element of $B$. Finally, the complement of $A$ is the set $\tilde A = \{x\,|\, x \in \Omega\ \mbox{and}\ x \not \in A\}\ .$

The reason that these constructions are important is that it is typically the case that complicated events described in English can be broken down into simpler events using these constructions. For example, if $A$ is the event that “it will snow tomorrow and it will rain the next day," $B$ is the event that “it will snow tomorrow," and $C$ is the event that “it will rain two days from now," then $A$ is the intersection of the events $B$ and $C$. Similarly, if $D$ is the event that “it will snow tomorrow or it will rain the next day," then $D = B \cup C$. (Note that care must be taken here, because sometimes the word “or" in English means that exactly one of the two alternatives will occur. The meaning is usually clear from context. In this book, we will always use the word “or" in the inclusive sense, i.e., $A$ or $B$ means that at least one of the two events $A$, $B$ is true.) The event $\tilde B$ is the event that “it will not snow tomorrow." Finally, if $E$ is the event that “it will snow tomorrow but it will not rain the next day," then $E = B - C$.

Properties

Theorem $1$

The probabilities assigned to events by a distribution function on a sample space $\Omega$ satisfy the following properties: 1. $P(E) \geq 0$ for every $E \subset \Omega$. 2. $P(\Omega) = 1$. 3. If $E \subset F \subset \Omega$, then $P(E) \leq P(F)$. 4. If $A$ and $B$ are disjoint subsets of $\Omega$, then $P(A \cup B) = P(A) + P(B)$. 5. $P(\tilde A) = 1 - P(A)$ for every $A \subset \Omega$.

Proof

For any event $E$ the probability $P(E)$ is determined from the distribution $m$ by $P(E) = \sum_{\omega \in E} m(\omega)\ ,$ for every $E \subset \Omega$. Since the function $m$ is nonnegative, it follows that $P(E)$ is also nonnegative. Thus, Property 1 is true. Property 2 is proved by the equations $P(\Omega) = \sum_{\omega \in \Omega} m(\omega) = 1\ .$ Suppose that $E \subset F \subset \Omega$. Then every element $\omega$ that belongs to $E$ also belongs to $F$. Therefore, $\sum_{\omega \in E} m(\omega) \leq \sum_{\omega \in F} m(\omega)\ ,$ since each term in the left-hand sum is in the right-hand sum, and all the terms in both sums are non-negative. This implies that $P(E) \le P(F)\ ,$ and Property 3 is proved. Suppose next that $A$ and $B$ are disjoint subsets of $\Omega$. Then every element $\omega$ of $A \cup B$ lies either in $A$ and not in $B$ or in $B$ and not in $A$. It follows that $\begin{array}{ll} P(A \cup B) &= \sum_{\omega \in A \cup B} m(\omega) = \sum_{\omega \in A} m(\omega) + \sum_{\omega \in B} m(\omega) \ & \ &= P(A) + P(B)\ , \end{array}$ and Property 4 is proved. Finally, to prove Property 5, consider the disjoint union $\Omega = A \cup \tilde A\ .$ Since $P(\Omega) = 1$, the property of disjoint additivity (Property 4) implies that $1 = P(A) + P(\tilde A)\ ,$ whence $P(\tilde A) = 1 - P(A)$.

It is important to realize that Property 4 in Theorem $1$ can be extended to more than two sets. The general finite additivity property is given by the following theorem.
Theorem $2$

If $A_1$, …, $A_n$ are pairwise disjoint subsets of $\Omega$ (i.e., no two of the $A_i$’s have an element in common), then $P(A_1 \cup \cdots \cup A_n) = \sum_{i = 1}^n P(A_i)\ .$

Proof

Let $\omega$ be any element in the union $A_1 \cup \cdots \cup A_n\ .$ Then $m(\omega)$ occurs exactly once on each side of the equality in the statement of the theorem.

We shall often use the following consequence of the above theorem.

Theorem $3$

Let $A_1$, …, $A_n$ be pairwise disjoint events with $\Omega = A_1 \cup \cdots \cup A_n$, and let $E$ be any event. Then $P(E) = \sum_{i = 1}^n P(E \cap A_i)\ .$

Proof

The sets $E \cap A_1$, …, $E \cap A_n$ are pairwise disjoint, and their union is the set $E$. The result now follows from Theorem $2$.

Corollary $1$

For any two events $A$ and $B$, $P(A) = P(A \cap B) + P(A \cap \tilde B)\ .$

Property 4 can be generalized in another way. Suppose that $A$ and $B$ are subsets of $\Omega$ which are not necessarily disjoint. Then:

Theorem $4$

If $A$ and $B$ are subsets of $\Omega$, then $P(A \cup B) = P(A) + P(B) - P(A \cap B)\ \label{eq 1.1}$

Proof

The left side of Equation $1$ is the sum of $m(\omega)$ for $\omega$ in either $A$ or $B$. We must show that the right side of Equation $1$ also adds $m(\omega)$ for $\omega$ in $A$ or $B$. If $\omega$ is in exactly one of the two sets, then it is counted in only one of the three terms on the right side of Equation $1$. If it is in both $A$ and $B$, it is added twice from the calculations of $P(A)$ and $P(B)$ and subtracted once for $P(A \cap B)$. Thus it is counted exactly once by the right side. Of course, if $A \cap B = \emptyset$, then Equation $1$ reduces to Property 4. (Equation $1$ can also be generalized; see Theorem 3.10.)

Tree Diagrams

Example $1$

Let us illustrate the properties of probabilities of events in terms of three tosses of a coin. When we have an experiment which takes place in stages such as this, we often find it convenient to represent the outcomes by a tree diagram, as shown in Figure $7$. A path through the tree corresponds to a possible outcome of the experiment. For the case of three tosses of a coin, we have eight paths $\omega_1$, $\omega_2$, …, $\omega_8$ and, assuming each outcome to be equally likely, we assign equal weight, 1/8, to each path. Let $E$ be the event “at least one head turns up." Then $\tilde E$ is the event “no heads turn up." This event occurs for only one outcome, namely, $\omega_8 = \mbox{TTT}$. Thus, $\tilde E = \{\mbox{TTT}\}$ and we have $P(\tilde E) = P(\{\mbox{TTT}\}) = m(\mbox{TTT}) = \frac18\ .$ By Property 5 of Theorem $1$, $P(E) = 1 - P(\tilde E) = 1 - \frac18 = \frac78\ .$ Note that we shall often find it is easier to compute the probability that an event does not happen rather than the probability that it does. We then use Property 5 to obtain the desired probability. Let $A$ be the event “the first outcome is a head," and $B$ the event “the second outcome is a tail." By looking at the paths in Figure $7$, we see that $P(A) = P(B) = \frac12\ .$ Moreover, $A \cap B = \{\omega_3,\omega_4\}$, and so $P(A \cap B) = 1/4.$ Using Theorem $4$, we obtain \begin{aligned} P(A \cup B) & = & P(A) + P(B) - P(A \cap B) \ & = & \frac 12 + \frac 12 - \frac 14 = \frac 34\ .\end{aligned} Since $A \cup B$ is the 6-element set, $A \cup B = \{\mbox{HHH,HHT,HTH,HTT,TTH,TTT}\}\ ,$ we see that we obtain the same result by direct enumeration.
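The small Python sketch below checks these computations by direct enumeration of the eight equally likely paths through the tree; the variable names are our own.

```python
from itertools import product

# All eight equally likely paths through the tree: HHH, HHT, ..., TTT.
omega = ["".join(p) for p in product("HT", repeat=3)]
m = {w: 1 / 8 for w in omega}

def prob(event):
    """Probability of an event as the sum of m(w) over the outcomes in the event."""
    return sum(m[w] for w in event)

E = [w for w in omega if "H" in w]            # at least one head turns up
A = [w for w in omega if w[0] == "H"]         # first outcome is a head
B = [w for w in omega if w[1] == "T"]         # second outcome is a tail
A_and_B = [w for w in A if w in B]
A_or_B = [w for w in omega if w in A or w in B]

print(prob(E))                                # 0.875 = 7/8
print(prob(A) + prob(B) - prob(A_and_B))      # 0.75, using Theorem 4
print(prob(A_or_B))                           # 0.75, by direct enumeration
```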
In our coin tossing examples and in the die rolling example, we have assigned an equal probability to each possible outcome of the experiment. Corresponding to this method of assigning probabilities, we have the following definitions.

Uniform Distribution

Definition $3$

The uniform distribution on a sample space $\Omega$ containing $n$ elements is the function $m$ defined by $m(\omega) = \frac1n\ ,$ for every $\omega \in \Omega$.

It is important to realize that when an experiment is analyzed to describe its possible outcomes, there is no single correct choice of sample space. For the experiment of tossing a coin twice in Example $2$, we selected the 4-element set $\Omega = \{HH,HT,TH,TT\}$ as a sample space and assigned the uniform distribution function. These choices are certainly intuitively natural. On the other hand, for some purposes it may be more useful to consider the 3-element sample space $\bar\Omega = \{0,1,2\}$ in which 0 is the outcome “no heads turn up," 1 is the outcome “exactly one head turns up," and 2 is the outcome “two heads turn up." The distribution function $\bar m$ on $\bar\Omega$ defined by the equations $\bar m(0) = \frac14\ ,\qquad \bar m(1) = \frac12\ , \qquad \bar m(2) = \frac14$ is the one corresponding to the uniform probability density on the original sample space $\Omega$. Notice that it is perfectly possible to choose a different distribution function. For example, we may consider the uniform distribution function on $\bar\Omega$, which is the function $\bar q$ defined by $\bar q(0) = \bar q(1) = \bar q(2) = \frac13\ .$ Although $\bar q$ is a perfectly good distribution function, it is not consistent with observed data on coin tossing.

Example $4$:

Consider the experiment that consists of rolling a pair of dice. We take as the sample space $\Omega$ the set of all ordered pairs $(i,j)$ of integers with $1\leq i\leq 6$ and $1\leq j\leq 6$. Thus, $\Omega = \{\,(i,j):1\leq i,\space j \leq 6\,\}\ .$ (There is at least one other “reasonable" choice for a sample space, namely the set of all unordered pairs of integers, each between 1 and 6. For a discussion of why we do not use this set, see Example $15$.) To determine the size of $\Omega$, we note that there are six choices for $i$, and for each choice of $i$ there are six choices for $j$, leading to 36 different outcomes. Let us assume that the dice are not loaded. In mathematical terms, this means that we assume that each of the 36 outcomes is equally likely, or equivalently, that we adopt the uniform distribution function on $\Omega$ by setting $m((i,j)) = \frac1{36},\qquad 1\leq i,\space j \leq 6\ .$ What is the probability of getting a sum of 7 on the roll of two dice—or getting a sum of 11? The first event, denoted by $E$, is the subset $E = \{(1,6),(6,1),(2,5),(5,2),(3,4),(4,3)\}\ .$ A sum of 11 is the subset $F$ given by $F = \{(5,6),(6,5)\}\ .$ Consequently, $\begin{array}{ll} P(E) = &\sum_{\omega \in E} m(\omega) = 6\cdot\frac1{36} = \frac16\ , \ & \ P(F) = &\sum_{\omega \in F} m(\omega) = 2\cdot\frac1{36} = \frac1{18}\ . \end{array}$ What is the probability of getting neither snake eyes (double ones) nor boxcars (double sixes)? The event of getting either one of these two outcomes is the set $E = \{(1,1),(6,6)\}\ .$ Hence, the probability of obtaining neither is given by $P(\tilde E) = 1 - P(E) = 1 - \frac2{36} = \frac{17}{18}\ .$

In the above coin tossing and the dice rolling experiments, we have assigned an equal probability to each outcome. That is, in each example, we have chosen the uniform distribution function.
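A brief Python sketch that enumerates the 36 equally likely ordered pairs reproduces the three probabilities just computed; the variable names are our own, and the calculation is simply the uniform distribution applied to each event.

```python
# Enumerate the 36 equally likely ordered pairs (i, j) and assign m((i, j)) = 1/36.
omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]
m = {w: 1 / 36 for w in omega}

def prob(event):
    """Probability of an event as the sum of m(w) over the outcomes in the event."""
    return sum(m[w] for w in event)

E = [w for w in omega if sum(w) == 7]       # sum of 7
F = [w for w in omega if sum(w) == 11]      # sum of 11
doubles = [(1, 1), (6, 6)]                  # snake eyes or boxcars

print(prob(E))             # 6/36 = 1/6
print(prob(F))             # 2/36 = 1/18
print(1 - prob(doubles))   # 34/36 = 17/18
```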
These are the natural choices provided the coin is a fair one and the dice are not loaded. However, the decision as to which distribution function to select to describe an experiment is a part of the basic mathematical theory of probability. The latter begins only when the sample space and the distribution function have already been defined. Determination of Probabilities It is important to consider ways in which probability distributions are determined in practice. One way is by symmetry. For the case of the toss of a coin, we do not see any physical difference between the two sides of a coin that should affect the chance of one side or the other turning up. Similarly, with an ordinary die there is no essential difference between any two sides of the die, and so by symmetry we assign the same probability for any possible outcome. In general, considerations of symmetry often suggest the uniform distribution function. Care must be used here. We should not always assume that, just because we do not know any reason to suggest that one outcome is more likely than another, it is appropriate to assign equal probabilities. For example, consider the experiment of guessing the sex of a newborn child. It has been observed that the proportion of newborn children who are boys is about .513. Thus, it is more appropriate to assign a distribution function which assigns probability .513 to the outcome boy and probability .487 to the outcome girl than to assign probability 1/2 to each outcome. This is an example where we use statistical observations to determine probabilities. Note that these probabilities may change with new studies and may vary from country to country. Genetic engineering might even allow an individual to influence this probability for a particular case. Odds Statistical estimates for probabilities are fine if the experiment under consideration can be repeated a number of times under similar circumstances. However, assume that, at the beginning of a football season, you want to assign a probability to the event that Dartmouth will beat Harvard. You really do not have data that relates to this year’s football team. However, you can determine your own personal probability by seeing what kind of a bet you would be willing to make. For example, suppose that you are willing to make a 1 dollar bet giving 2 to 1 odds that Dartmouth will win. Then you are willing to pay 2 dollars if Dartmouth loses in return for receiving 1 dollar if Dartmouth wins. This means that you think the appropriate probability for Dartmouth winning is 2/3. Let us look more carefully at the relation between odds and probabilities. Suppose that we make a bet at $r$ to $1$ odds that an event $E$ occurs. This means that we think that it is $r$ times as likely that $E$ will occur as that $E$ will not occur. In general, $r$ to $s$ odds will be taken to mean the same thing as $r/s$ to 1, i.e., the ratio between the two numbers is the only quantity of importance when stating odds. Now if it is $r$ times as likely that $E$ will occur as that $E$ will not occur, then the probability that $E$ occurs must be $r/(r+1)$, since we have $P(E) = r\,P(\tilde E)$ and $P(E) + P(\tilde E) = 1\ .$ In general, the statement that the odds are $r$ to $s$ in favor of an event $E$ occurring is equivalent to the statement that \begin{aligned} P(E) & = & \frac{r/s}{(r/s) + 1}\ & = & \frac {r}{r+s}\ .\end{aligned} If we let $P(E) = p$, then the above equation can easily be solved for $r/s$ in terms of $p$; we obtain $r/s = p/(1-p)$. 
We summarize the above discussion in the following definition.

Definition $4$

If $P(E) = p$, the odds in favor of the event $E$ occurring are $r : s$ ($r$ to $s$) where $r/s = p/(1-p)$. If $r$ and $s$ are given, then $p$ can be found by using the equation $p = r/(r+s)$.

Example $5$

In Example $4$ we assigned probability 1/5 to the event that candidate C wins the race. Thus the odds in favor of C winning are $1/5 : 4/5$. These odds could equally well have been written as $1 : 4$, $2 : 8$, and so forth. A bet that C wins is fair if we receive 4 dollars if C wins and pay 1 dollar if C loses.

Infinite Sample Spaces

If a sample space has an infinite number of points, then the way that a distribution function is defined depends upon whether or not the sample space is countable. A sample space is countably infinite if the elements can be counted, i.e., can be put in one-to-one correspondence with the positive integers, and uncountably infinite otherwise. Infinite sample spaces require new concepts in general, but countably infinite spaces do not. If $\Omega = \{\omega_1,\omega_2,\omega_3, \dots\}$ is a countably infinite sample space, then a distribution function is defined exactly as in Definition $2$, except that the sum must now be an infinite sum. Theorem $1$ is still true, as are its extensions, Theorems $2$ through $4$. One thing we cannot do on a countably infinite sample space that we could do on a finite sample space is to define a uniform distribution function as in Definition $3$. You are asked in Exercise $19$ to explain why this is not possible.

Example $6$:

A coin is tossed until the first time that a head turns up. Let the outcome of the experiment, $\omega$, be the first time that a head turns up. Then the possible outcomes of our experiment are $\Omega = \{1,2,3, \dots\}\ .$ Note that even though the coin could come up tails every time we have not allowed for this possibility. We will explain why in a moment. The probability that heads comes up on the first toss is 1/2. The probability that tails comes up on the first toss and heads on the second is 1/4. The probability that we have two tails followed by a head is 1/8, and so forth. This suggests assigning the distribution function $m(n) = 1/2^n$ for $n = 1$, 2, 3, …. To see that this is a distribution function we must show that $\sum_{\omega} m(\omega) = \frac12 + \frac14 + \frac18 + \cdots = 1 .$ That this is true follows from the formula for the sum of a geometric series, $1 + r + r^2 + r^3 + \cdots = \frac1{1-r}\ ,$ or $r + r^2 + r^3 + r^4 + \cdots = \frac r{1-r}\ , \label{eq 1.2}$ for $-1 < r < 1$. Putting $r = 1/2$, we see that we have a probability of 1 that the coin eventually turns up heads. The possible outcome of tails every time has to be assigned probability 0, so we omit it from our sample space of possible outcomes. Let $E$ be the event that the first time a head turns up is after an even number of tosses. Then $E = \{2,4,6,8, \dots\}\ ,$ and $P(E) = \frac14 + \frac1{16} + \frac1{64} +\cdots\ .$ Putting $r = 1/4$ in Equation $2$, we see that $P(E) = \frac{1/4}{1 - 1/4} = \frac13\ .$ Thus the probability that a head turns up for the first time after an even number of tosses is 1/3 and after an odd number of tosses is 2/3.

Historical Remarks

An interesting question in the history of science is: Why was probability not developed until the sixteenth century? We know that in the sixteenth century problems in gambling and games of chance made people start to think about probability.
But gambling and games of chance are almost as old as civilization itself. In ancient Egypt (at the time of the First Dynasty, ca. 3500 B.C.) a game now called “Hounds and Jackals" was played. In this game the movement of the hounds and jackals was based on the outcome of the roll of four-sided dice made out of animal bones called astragali. Six-sided dice made of a variety of materials date back to the sixteenth century B.C. Gambling was widespread in ancient Greece and Rome. Indeed, in the Roman Empire it was sometimes found necessary to invoke laws against gambling. Why, then, were probabilities not calculated until the sixteenth century?

Several explanations have been advanced for this late development. One is that the relevant mathematics was not developed and was not easy to develop. The ancient mathematical notation made numerical calculation complicated, and our familiar algebraic notation was not developed until the sixteenth century. However, as we shall see, many of the combinatorial ideas needed to calculate probabilities were discussed long before the sixteenth century. Since many of the chance events of those times had to do with lotteries relating to religious affairs, it has been suggested that there may have been religious barriers to the study of chance and gambling. Another suggestion is that a stronger incentive, such as the development of commerce, was necessary. However, none of these explanations seems completely satisfactory, and people still wonder why it took so long for probability to be studied seriously. An interesting discussion of this problem can be found in Hacking.14

The first person to calculate probabilities systematically was Gerolamo Cardano (1501–1576) in his book Liber de Ludo Aleae. This was translated from the Latin by Gould and appears in the book by Ore.15 Ore provides a fascinating discussion of the life of this colorful scholar with accounts of his interests in many different fields, including medicine, astrology, and mathematics. You will also find there a detailed account of Cardano’s famous battle with Tartaglia over the solution to the cubic equation.

In his book on probability Cardano dealt only with the special case that we have called the uniform distribution function. This restriction to equiprobable outcomes was to continue for a long time. In this case Cardano realized that the probability that an event occurs is the ratio of the number of favorable outcomes to the total number of outcomes. Many of Cardano’s examples dealt with rolling dice. Here he realized that the outcomes for two rolls should be taken to be the 36 ordered pairs $(i,j)$ rather than the 21 unordered pairs. This is a subtle point that was still causing problems much later for other writers on probability. For example, in the eighteenth century the famous French mathematician d’Alembert, author of several works on probability, claimed that when a coin is tossed twice the number of heads that turn up would be 0, 1, or 2, and hence we should assign equal probabilities for these three possible outcomes.16 Cardano chose the correct sample space for his dice problems and calculated the correct probabilities for a variety of events.

Cardano’s mathematical work is interspersed with a lot of advice to the potential gambler in short paragraphs, entitled, for example: “Who Should Play and When," “Why Gambling Was Condemned by Aristotle," “Do Those Who Teach Also Play Well?" and so forth.
In a paragraph entitled “The Fundamental Principle of Gambling," Cardano writes:

The most fundamental principle of all in gambling is simply equal conditions, e.g., of opponents, of bystanders, of money, of situation, of the dice box, and of the die itself. To the extent to which you depart from that equality, if it is in your opponent’s favor, you are a fool, and if in your own, you are unjust.17

Cardano did make mistakes, and if he realized it later he did not go back and change his error. For example, for an event that is favorable in three out of four cases, Cardano assigned the correct odds $3 : 1$ that the event will occur. But then he assigned odds by squaring these numbers (i.e., $9 : 1$) for the event to happen twice in a row. Later, by considering the case where the odds are $1 : 1$, he realized that this cannot be correct and was led to the correct result that when $f$ out of $n$ outcomes are favorable, the odds for a favorable outcome twice in a row are $f^2 : n^2 - f^2$. Ore points out that this is equivalent to the realization that if the probability that an event happens in one experiment is $p$, the probability that it happens twice is $p^2$. Cardano proceeded to establish that for three successes the formula should be $p^3$ and for four successes $p^4$, making it clear that he understood that the probability is $p^n$ for $n$ successes in $n$ independent repetitions of such an experiment. This will follow from the concept of independence that we introduce in Section 4.1.

Cardano’s work was a remarkable first attempt at writing down the laws of probability, but it was not the spark that started a systematic study of the subject. This came from a famous series of letters between Pascal and Fermat. This correspondence was initiated by Pascal to consult Fermat about problems he had been given by Chevalier de Méré, a well-known writer, a prominent figure at the court of Louis XIV, and an ardent gambler.

The first problem de Méré posed was a dice problem. The story goes that he had been betting that at least one six would turn up in four rolls of a die and winning too often, so he then bet that a pair of sixes would turn up in 24 rolls of a pair of dice. The probability of a six with one die is 1/6 and, by the product law for independent experiments, the probability of two sixes when a pair of dice is thrown is $(1/6)(1/6) = 1/36$. Ore18 claims that a gambling rule of the time suggested that, since four repetitions was favorable for the occurrence of an event with probability 1/6, for an event six times as unlikely, $6 \cdot 4 = 24$ repetitions would be sufficient for a favorable bet. Pascal showed, by exact calculation, that 25 rolls are required for a favorable bet for a pair of sixes.

The second problem was a much harder one: it was an old problem and concerned the determination of a fair division of the stakes in a tournament when the series, for some reason, is interrupted before it is completed. This problem is now referred to as the problem of points. The problem had been a standard problem in mathematical texts; it appeared in Fra Luca Paccioli’s book Summa de Arithmetica, Geometria, Proportioni e Proportionalità, printed in Venice in 1494,19 in the form:

A team plays ball such that a total of 60 points are required to win the game, and each inning counts 10 points. The stakes are 10 ducats. By some incident they cannot finish the game and one side has 50 points and the other 20. One wants to know what share of the prize money belongs to each side.
Reasonable solutions, such as dividing the stakes according to the ratio of games won by each player, had been proposed, but no correct solution had been found at the time of the Pascal-Fermat correspondence. The letters deal mainly with the attempts of Pascal and Fermat to solve this problem.

Blaise Pascal (1623–1662) was a child prodigy, having published his treatise on conic sections at age sixteen, and having invented a calculating machine at age eighteen. At the time of the letters, his demonstration of the weight of the atmosphere had already established his position at the forefront of contemporary physicists. Pierre de Fermat (1601–1665) was a learned jurist in Toulouse, who studied mathematics in his spare time. He has been called by some the prince of amateurs and one of the greatest pure mathematicians of all time. The letters, translated by Maxine Merrington, appear in Florence David’s fascinating historical account of probability, Games, Gods and Gambling.20

In a letter dated Wednesday, 29th July, 1654, Pascal writes to Fermat:

Sir, Like you, I am equally impatient, and although I am again ill in bed, I cannot help telling you that yesterday evening I received from M. de Carcavi your letter on the problem of points, which I admire more than I can possibly say. I have not the leisure to write at length, but, in a word, you have solved the two problems of points, one with dice and the other with sets of games with perfect justness; I am entirely satisfied with it for I do not doubt that I was in the wrong, seeing the admirable agreement in which I find myself with you now…

Your method is very sound and is the one which first came to my mind in this research; but because the labour of the combination is excessive, I have found a short cut and indeed another method which is much quicker and neater, which I would like to tell you here in a few words: for henceforth I would like to open my heart to you, if I may, as I am so overjoyed with our agreement. I see that truth is the same in Toulouse as in Paris.

Here, more or less, is what I do to show the fair value of each game, when two opponents play, for example, in three games and each person has staked 32 pistoles. Let us say that the first man had won twice and the other once; now they play another game, in which the conditions are that, if the first wins, he takes all the stakes; that is 64 pistoles; if the other wins it, then they have each won two games, and therefore, if they wish to stop playing, they must each take back their own stake, that is, 32 pistoles each.

Then consider, Sir, if the first man wins, he gets 64 pistoles; if he loses he gets 32. Thus if they do not wish to risk this last game but wish to separate without playing it, the first man must say: ‘I am certain to get 32 pistoles, even if I lost I still get them; but as for the other 32, perhaps I will get them, perhaps you will get them, the chances are equal. Let us then divide these 32 pistoles in half and give one half to me as well as my 32 which are mine for sure.’ He will then have 48 pistoles and the other 16…

Pascal’s argument produces the table illustrated in Figure $1.9$ for the amount due player A at any quitting point. Each entry in the table is the average of the numbers just above and to the right of the number.
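Pascal’s averaging rule is easy to turn into a short program; as the text notes below, his method “is easy to implement on a computer.” The following sketch is ours, not Pascal’s or the authors’; it assumes that each remaining game is equally likely to be won by either player and that the total stake is 64 pistoles, as in the letter (the names `share_of_A` and `STAKE` are illustrative).

```python
from functools import lru_cache

STAKE = 64  # total stake in pistoles, as in Pascal's example


@lru_cache(maxsize=None)
def share_of_A(a, b):
    """Fair share of the stake due to player A when A still needs `a` games
    and B still needs `b` games to win the tournament.

    Pascal's averaging rule: the value of the current position is the average
    of the value after A wins the next game and the value after B wins it."""
    if a == 0:       # A has already won the tournament
        return STAKE
    if b == 0:       # B has already won the tournament
        return 0
    return (share_of_A(a - 1, b) + share_of_A(a, b - 1)) / 2


print(share_of_A(1, 2))  # 48.0 -- the split Pascal derives in his letter
print(share_of_A(2, 3))  # 44.0 -- the value discussed below for Figure 1.9
```

Running it reproduces the 48/16 split from the letter, and the 44 pistoles due to player A in the position discussed next, where A needs two more games and B needs three.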
This averaging rule, together with the known values when the tournament is completed, determines all the values in this table. If player A wins the first game, then he needs two games to win and B needs three games to win; and so, if the tournament is called off, A should receive 44 pistoles.

The letter in which Fermat presented his solution has been lost; but fortunately, Pascal describes Fermat’s method in a letter dated Monday, 24th August, 1654. From Pascal’s letter:21

This is your procedure when there are two players: If two players, playing several games, find themselves in that position when the first man needs 2 games and the second needs 3, then to find the fair division of stakes, you say that one must know in how many games the play will be absolutely decided. It is easy to calculate that this will be in 4 games, from which you can conclude that it is necessary to see in how many ways four games can be arranged between two players, and one must see how many combinations would make the first man win and how many the second and to share out the stakes in this proportion. I would have found it difficult to understand this if I had not known it myself already; in fact you had explained it with this idea in mind.

Fermat realized that the number of ways that the game might be finished may not be equally likely. For example, if A needs two more games and B needs three to win, two possible ways that the tournament might go for A to win are WLW and LWLW. These two sequences do not have the same chance of occurring. To avoid this difficulty, Fermat extended the play, adding fictitious plays, so that all the ways that the games might go have the same length, namely four. He was shrewd enough to realize that this extension would not change the winner and that he now could simply count the number of sequences favorable to each player since he had made them all equally likely. If we list all possible ways that the extended game of four plays might go, we obtain the following 16 possible outcomes of the play:

WWWW WLWW LWWW LLWW
WWWL WLWL LWWL LLWL
WWLW WLLW LWLW LLLW
WWLL WLLL LWLL LLLL

Player A wins in the cases where there are at least two wins (11 of the cases listed), and B wins in the cases where there are at least three losses (the other 5 cases). Since A wins in 11 of the 16 possible cases, Fermat argued that the probability that A wins is 11/16 (a short check in terms of binomial coefficients is given just before the exercises). If the stakes are 64 pistoles, A should receive 44 pistoles, in agreement with Pascal’s result. Pascal and Fermat developed more systematic methods for counting the number of favorable outcomes for problems like this, and this will be one of our central problems. Such counting methods fall under the subject of combinatorics, which is the topic of Chapter 3.

We see that these two mathematicians arrived at two very different ways to solve the problem of points. Pascal’s method was to develop an algorithm and use it to calculate the fair division. This method is easy to implement on a computer and easy to generalize. Fermat’s method, on the other hand, was to change the problem into an equivalent problem for which he could use counting or combinatorial methods. We will see in Chapter 3 that, in fact, Fermat used what has become known as Pascal’s triangle! In our study of probability today we shall find that both the algorithmic approach and the combinatorial approach share equal billing, just as they did 300 years ago when probability got its start.
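As a quick check of Fermat’s count (an illustration added here, phrased in the language of binomial coefficients developed in Chapter 3): player A wins exactly when at least two of the four extended plays are wins, so the number of favorable sequences is

$\binom{4}{2} + \binom{4}{3} + \binom{4}{4} = 6 + 4 + 1 = 11\ ,$

giving $P(\mbox{A wins}) = 11/16$ and a fair share of $(11/16) \cdot 64 = 44$ pistoles, in agreement with Pascal’s table.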
Exercise $\PageIndex{1}$

Let $\Omega = \{a,b,c\}$ be a sample space. Let $m(a) = 1/2$, $m(b) = 1/3$, and $m(c) = 1/6$. Find the probabilities for all eight subsets of $\Omega$.

Exercise $\PageIndex{2}$

Give a possible sample space $\Omega$ for each of the following experiments:

1. An election decides between two candidates A and B.
2. A two-sided coin is tossed.
3. A student is asked for the month of the year and the day of the week on which her birthday falls.
4. A student is chosen at random from a class of ten students.
5. You receive a grade in this course.

Exercise $\PageIndex{3}$

For which of the cases in Exercise $2$ would it be reasonable to assign the uniform distribution function?

Exercise $\PageIndex{4}$

Describe in words the events specified by the following subsets of $\Omega = \{HHH,\ HHT,\ HTH,\ HTT,\ THH,\ THT,\ TTH,\ TTT\}$ (see Example $5$).

1. $E = \{\mbox{HHH,HHT,HTH,HTT}\}$.
2. $E = \{\mbox{HHH,TTT}\}$.
3. $E = \{\mbox{HHT,HTH,THH}\}$.
4. $E = \{\mbox{HHT,HTH,HTT,THH,THT,TTH,TTT}\}$.

Exercise $\PageIndex{5}$

What are the probabilities of the events described in Exercise $4$?

Exercise $\PageIndex{6}$

A die is loaded in such a way that the probability of each face turning up is proportional to the number of dots on that face. (For example, a six is three times as probable as a two.) What is the probability of getting an even number in one throw?

Exercise $\PageIndex{7}$

Let $A$ and $B$ be events such that $P(A \cap B) = 1/4$, $P(\tilde A) = 1/3$, and $P(B) = 1/2$. What is $P(A \cup B)$?

Exercise $\PageIndex{8}$

A student must choose one of the subjects, art, geology, or psychology, as an elective. She is equally likely to choose art or psychology and twice as likely to choose geology. What are the respective probabilities that she chooses art, geology, and psychology?

Exercise $\PageIndex{9}$

A student must choose exactly two out of three electives: art, French, and mathematics. He chooses art with probability 5/8, French with probability 5/8, and art and French together with probability 1/4. What is the probability that he chooses mathematics? What is the probability that he chooses either art or French?

Exercise $\PageIndex{10}$

For a bill to come before the president of the United States, it must be passed by both the House of Representatives and the Senate. Assume that, of the bills presented to these two bodies, 60 percent pass the House, 80 percent pass the Senate, and 90 percent pass at least one of the two. Calculate the probability that the next bill presented to the two groups will come before the president.

Exercise $\PageIndex{11}$

What odds should a person give in favor of the following events?

1. A card chosen at random from a 52-card deck is an ace.
2. Two heads will turn up when a coin is tossed twice.
3. Boxcars (two sixes) will turn up when two dice are rolled.

Exercise $\PageIndex{12}$

You offer $3 : 1$ odds that your friend Smith will be elected mayor of your city. What probability are you assigning to the event that Smith wins?

Exercise $\PageIndex{13.1}$

In a horse race, the odds that Romance will win are listed as $2 : 3$ and that Downhill will win are $1 : 2$. What odds should be given for the event that either Romance or Downhill wins?

Exercise $\PageIndex{13.2}$

Let $X$ be a random variable with distribution function $m_X(x)$ defined by $m_X(-1) = 1/5,\ \ m_X(0) = 1/5,\ \ m_X(1) = 2/5,\ \ m_X(2) = 1/5\ .$

1. Let $Y$ be the random variable defined by the equation $Y = X + 3$. Find the distribution function $m_Y(y)$ of $Y$.
2. Let $Z$ be the random variable defined by the equation $Z = X^2$. Find the distribution function $m_Z(z)$ of $Z$.
Exercise $\PageIndex{14}$

John and Mary are taking a mathematics course. The course has only three grades: A, B, and C. The probability that John gets a B is .3. The probability that Mary gets a B is .4. The probability that neither gets an A but at least one gets a B is .1. What is the probability that at least one gets a B but neither gets a C?

Exercise $\PageIndex{15}$

In a fierce battle, not less than 70 percent of the soldiers lost one eye, not less than 75 percent lost one ear, not less than 80 percent lost one hand, and not less than 85 percent lost one leg. What is the minimal possible percentage of those who simultaneously lost one ear, one eye, one hand, and one leg?22

Exercise $\PageIndex{16}$

Assume that the probability of a “success" on a single experiment with $n$ outcomes is $1/n$. Let $m$ be the number of experiments necessary to make it a favorable bet that at least one success will occur (see Exercise $5$).

1. Show that the probability that, in $m$ trials, there are no successes is $(1 - 1/n)^m$.
2. (de Moivre) Show that if $m = n \log 2$ then $\lim_{n \to \infty} \left(1 - \frac1n \right)^m = \frac12\ .$ Hint: $\lim_{n \to \infty} \left(1 - \frac1n \right)^n = e^{-1}\ .$ Hence for large $n$ we should choose $m$ to be about $n \log 2$.
3. Would de Moivre have been led to the correct answer for de Méré’s two bets if he had used his approximation?

Exercise $\PageIndex{17}$

1. For events $A_1$, …, $A_n$, prove that $P(A_1 \cup \cdots \cup A_n) \leq P(A_1) + \cdots + P(A_n)\ .$
2. For events $A$ and $B$, prove that $P(A \cap B) \geq P(A) + P(B) - 1.$

Exercise $\PageIndex{18}$

If $A$, $B$, and $C$ are any three events, show that $\begin{array}{ll} P(A \cup B \cup C) &= P(A) + P(B) + P(C) \\ &\ \ -\, P(A \cap B) - P(B \cap C) - P(C \cap A) \\ &\ \ +\, P(A \cap B \cap C)\ . \end{array}$

Exercise $\PageIndex{19}$

Explain why it is not possible to define a uniform distribution function (see Definition $3$) on a countably infinite sample space. Hint: Assume $m(\omega) = a$ for all $\omega$, where $0 \leq a \leq 1$. Does $m(\omega)$ have all the properties of a distribution function?

Exercise $\PageIndex{20}$

In Example $10$ find the probability that the coin turns up heads for the first time on the tenth, eleventh, or twelfth toss.

Exercise $\PageIndex{21}$

A die is rolled until the first time that a six turns up. We shall see that the probability that this occurs on the $n$th roll is $(5/6)^{n-1}\cdot(1/6)$. Using this fact, describe the appropriate infinite sample space and distribution function for the experiment of rolling a die until a six turns up for the first time. Verify that for your distribution function $\sum_{\omega} m(\omega) = 1$.

Exercise $\PageIndex{22}$

Let $\Omega$ be the sample space $\Omega = \{0,1,2,\dots\}\ ,$ and define a distribution function by $m(j) = (1 - r)^j r\ ,$ for some fixed $r$, $0 < r < 1$, and for $j = 0, 1, 2, \ldots$. Show that this is a distribution function for $\Omega$.

Exercise $\PageIndex{23}$

Our calendar has a 400-year cycle. B. H. Brown noticed that the number of times the thirteenth of the month falls on each of the days of the week in the 4800 months of a cycle is as follows:

Sunday 687
Monday 685
Tuesday 685
Wednesday 687
Thursday 684
Friday 688
Saturday 684

From this he deduced that the thirteenth was more likely to fall on Friday than on any other day. Explain what he meant by this.

Exercise $\PageIndex{24}$

Tversky and Kahneman23 asked a group of subjects to carry out the following task.
They are told that:

Linda is 31, single, outspoken, and very bright. She majored in philosophy in college. As a student, she was deeply concerned with racial discrimination and other social issues, and participated in anti-nuclear demonstrations.

The subjects are then asked to rank the likelihood of various alternatives, such as:

(1) Linda is active in the feminist movement.
(2) Linda is a bank teller.
(3) Linda is a bank teller and active in the feminist movement.

Tversky and Kahneman found that between 85 and 90 percent of the subjects rated alternative (1) most likely, but alternative (3) more likely than alternative (2). Is it? They call this phenomenon the conjunction fallacy and note that it appears to be unaffected by prior training in probability or statistics. Is this phenomenon a fallacy? If so, why? Can you give a possible explanation for the subjects’ choices?

Exercise $\PageIndex{25}$

Two cards are drawn successively from a deck of 52 cards. Find the probability that the second card is higher in rank than the first card. Hint: Show that $1 = P(\mbox{higher}) + P(\mbox{lower}) + P(\mbox{same})$ and use the fact that $P(\mbox{higher}) = P(\mbox{lower})$.

Exercise $\PageIndex{26}$

A life table is a table that lists for a given number of births the estimated number of people who will live to a given age. In Appendix C we give a life table based upon 100,000 births for ages from 0 to 85, both for women and for men. Show how from this table you can estimate the probability $m(x)$ that a person born in 1981 would live to age $x$. Write a program to plot $m(x)$ both for men and for women, and comment on the differences that you see in the two cases.

Exercise $\PageIndex{27}$

Here is an attempt to get around the fact that we cannot choose a “random integer."

1. What, intuitively, is the probability that a “randomly chosen" positive integer is a multiple of 3?
2. Let $P_3(N)$ be the probability that an integer, chosen at random between 1 and $N$, is a multiple of 3 (since the sample space is finite, this is a legitimate probability). Show that the limit $P_3 = \lim_{N \to \infty} P_3(N)$ exists and equals 1/3. This formalizes the intuition in (a), and gives us a way to assign “probabilities" to certain events that are infinite subsets of the positive integers.
3. If $A$ is any set of positive integers, let $A(N)$ mean the number of elements of $A$ which are less than or equal to $N$. Then define the “probability" of $A$ as $P(A) = \lim_{N \to \infty} A(N)/N\ ,$ provided this limit exists. Show that this definition would assign probability 0 to any finite set and probability 1 to the set of all positive integers. Thus, the probability of the set of all integers is not the sum of the probabilities of the individual integers in this set. This means that the definition of probability given here is not a completely satisfactory definition.
4. Let $A$ be the set of all positive integers with an odd number of digits. Show that $P(A)$ does not exist. This shows that under the above definition of probability, not all sets have probabilities.

Exercise $\PageIndex{28}$

(from Sholander24) In a standard clover-leaf interchange, there are four ramps for making right-hand turns, and inside these four ramps, there are four more ramps for making left-hand turns. Your car approaches the interchange from the south. A mechanism has been installed so that at each point where there exists a choice of directions, the car turns to the right with fixed probability $r$.

1. If $r = 1/2$, what is your chance of emerging from the interchange going west?
2. Find the value of $r$ that maximizes your chance of a westward departure from the interchange.

Exercise $\PageIndex{29}$

(from Benkoski25) Consider a “pure" cloverleaf interchange in which there are no ramps for right-hand turns, but only the two intersecting straight highways with cloverleaves for left-hand turns. (Thus, to turn right in such an interchange, one must make three left-hand turns.) As in the preceding problem, your car approaches the interchange from the south. What is the value of $r$ that maximizes your chances of an eastward departure from the interchange?

Exercise $\PageIndex{30}$

(from vos Savant26) A reader of Marilyn vos Savant’s column wrote in with the following question:

“My dad heard this story on the radio. At Duke University, two students had received A’s in chemistry all semester. But on the night before the final exam, they were partying in another state and didn’t get back to Duke until it was over. Their excuse to the professor was that they had a flat tire, and they asked if they could take a make-up test. The professor agreed, wrote out a test and sent the two to separate rooms to take it. The first question (on one side of the paper) was worth 5 points, and they answered it easily. Then they flipped the paper over and found the second question, worth 95 points: ‘Which tire was it?’ What was the probability that both students would say the same thing? My dad and I think it’s 1 in 16. Is that right?"

1. Is the answer 1/16?
2. The following question was asked of a class of students. “I was driving to school today, and one of my tires went flat. Which tire do you think it was?" The responses were as follows: right front, 58%; left front, 11%; right rear, 18%; left rear, 13%. Suppose that this distribution holds in the general population, and assume that the two test-takers are randomly chosen from the general population. What is the probability that they will give the same answer to the second question?

1.R: References

1. T. C. Fry, 2nd ed. (Princeton: Van Nostrand, 1965).
2. E. Czuber, 3rd ed. (Berlin: Teubner, 1914).
3. K. Pearson, “Science and Monte Carlo," vol. 55 (1894), p. 193; cited in S. M. Stigler, (Cambridge: Harvard University Press, 1986).
4. For a detailed discussion of random numbers, see D. E. Knuth, vol. II (Reading: Addison-Wesley, 1969).
5. D. D. McCracken, “The Monte Carlo Method," vol. 192 (May 1955), p. 90.
6. W. Feller, vol. 1, 3rd ed. (New York: John Wiley & Sons, 1968), p. xi.
7. G. Casanova, vol. IV, Chap. 7, trans. W. R. Trask (New York: Harcourt-Brace, 1968), p. 124.
8. Quoted in the ed. S. Putnam (New York: Viking, 1946), p. 113.
9. Le Marquis de Condorcet, (Paris: Imprimerie Royale, 1785).
10. W. M. Thackeray, (London: Bradbury and Evans, 1854–55).
11. See K. McKean, “Decisions, Decisions," June 1985, pp. 22–31. Kevin McKean, Discover Magazine, © 1987 Family Media, Inc. Reprinted with permission. This popular article reports on the work of Tversky et al. in (Cambridge: Cambridge University Press, 1982).
12. ibid.
13. G. Pólya, “Two Incidents," ed. T. Dalenius, G. Karlsson, and S. Malmquist (Uppsala: Almquist & Wiksells Boktryckeri AB, 1970).
14. I. Hacking, The Emergence of Probability (Cambridge: Cambridge University Press, 1975).
15. O. Ore, Cardano, the Gambling Scholar (Princeton: Princeton University Press, 1953).
16. J. d’Alembert, “Croix ou Pile," in ed. Diderot, vol. 4 (Paris, 1754).
17. O. Ore, op. cit., p. 189.
18. O. Ore, “Pascal and the Invention of Probability Theory,” American Mathematical Monthly, vol. 67 (1960), pp. 409–419.
19. ibid., p. 414.
20. F. N. David, Games, Gods and Gambling (London: G. Griffin, 1962), p. 230 ff.
21. ibid., p. 239 ff.
22. See Knot X, in Lewis Carroll, vol. 2 (Dover, 1958).
23. K. McKean, “Decisions, Decisions," pp. 22–31.
24. M. Sholander, Problem #1034, vol. 52, no. 3 (May 1979), p. 183.
25. S. Benkoski, Comment on Problem #1034, vol. 52, no. 3 (May 1979), pp. 183–184.
26. M. vos Savant, 3 March 1996, p. 14.