The sample mean, denoted $\overline{ x }$, is the average of a sample of a variable X. The sample mean is an estimate of the population mean µ. Every sample has a sample mean, and these sample means differ from sample to sample. Thus, before a sample is selected, $\overline{ x }$ is a variable; in fact, if the sample is a random sample, then $\overline{ x }$ is a random variable. For this reason, we can think of the “distribution of $\overline{ x }$,” called the “sampling distribution of $\overline{ x }$,” as the theoretical histogram constructed from the sample averages of all possible samples of size n.

Definition: Mean and Standard Deviation of a Sample Mean. Let $\overline{ x }$ be the mean of a random sample of size n from a population having mean μ and standard deviation σ. Then the mean of the sample means is $\mu_{\bar{x}}$ = µ, and the standard deviation (standard error) of the sample means is $\sigma_{\bar{x}}=\frac{\sigma}{\sqrt{n}}$. This says that the mean of the sample means is the same as the population mean, and the standard deviation of the sample means is the population standard deviation divided by the square root of the sample size. This is called the sampling distribution of the mean.

Let X be the height of 15-year-old boys in the United States. Studies show that the heights of 15-year-old boys in the United States are normally distributed with an average height of 67 inches and a standard deviation of 2.5 inches. A random experiment consists of choosing sixteen 15-year-old boys at random. Compute the mean and standard deviation of $\overline{ x }$, that is, the mean and standard deviation for the average height of a random sample of 16 boys.

Solution The mean of the sample means is the same as the population mean, $\mu_{\bar{x}}$ = 67. The standard deviation of the sample means is the population standard deviation divided by the square root of the sample size, $\sigma_{\bar{x}}=\frac{\sigma}{\sqrt{n}}=\frac{2.5}{\sqrt{16}}=0.625$. Notice that the mean of the sample means is always the same as the mean of the population, but the standard deviation is smaller. See Figure 6-30.

Sampling Distribution of a Sample Mean: If a population is normally distributed N(µ, σ), then the sample mean $\overline{ x }$ of n independent observations is normally distributed as $N\left(\mu, \frac{\sigma}{\sqrt{n}}\right)$.

Figure 6-31 shows three population distributions and the corresponding sampling distributions for sample sizes of 2, 5, 12 and 30. Notice that as the sample size gets larger, the sampling distribution gets closer to the dashed red line of the normal distribution. Video explanation of this process: https://youtu.be/lsCc_pS3O28. Retrieved from OpenIntroStatistics. Figure 6-31

The Central Limit Theorem establishes that in some situations the distribution of a sample statistic will take on a normal distribution, even when the population is not normally distributed. This allows us to use the normal distribution to make inferences from samples to populations. The Central Limit Theorem guarantees that the distribution of the sample mean will be approximately normal when the sample size is large (usually 30 or higher), no matter what shape the population distribution has.

Finding Probabilities Using the Central Limit Theorem (CLT): If we are finding the probability of a sample mean and have a sample size of 30 or more, or the population is normally distributed, then we can use the normal distribution and the CLT to find the probability that the sample mean is below, above or between two values.
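The relationship $\mu_{\bar{x}}$ = µ and $\sigma_{\bar{x}}=\frac{\sigma}{\sqrt{n}}$ can also be checked empirically. The short simulation below is a sketch that is not part of the original text; it assumes Python with NumPy and uses the heights example (µ = 67, σ = 2.5, n = 16).

# Simulation sketch (assumed, not from the text): draw many samples of size 16
# from N(67, 2.5) and look at the mean and standard deviation of the sample means.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 67, 2.5, 16

sample_means = rng.normal(mu, sigma, size=(100_000, n)).mean(axis=1)

print(sample_means.mean())        # close to mu = 67
print(sample_means.std(ddof=0))   # close to sigma/sqrt(n) = 2.5/4 = 0.625

Running this gives values very near 67 and 0.625, matching the formulas above.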
Watch this video on using this applet for the Central Limit Theorem, and then take some time to play with the applet to get a sense of the difference between the distribution of the population, the distribution of a sample, and the sampling distribution. Watch the video on how to use the applet: https://youtu.be/aIPvgiXyBMI. Try the applet on your own. Applet: http://onlinestatbook.com/stat_sim/sampling_dist/index.html.

The population of midterm scores for all students taking a PSU Business Statistics course has a known standard deviation of 5.27. The mean of the population is 18.07 and the median of the population is 19. A sample of 25 was taken, the sample mean was 18.07, and we want to know what the sampling distribution of the mean looks like. Figure 6-32 shows 3 graphs using the Sampling Distribution Applet. a) What are the mean and standard deviation of the sampling distribution? b) Would you expect midterm exam scores to be skewed or bell-shaped? c) Which of the graphs in Figure 6-32 correspond to the distribution of the population, the distribution of a single sample, and the sampling distribution of the mean? d) Compute the probability that next term’s class of 25 students has a sample mean of more than 20.

Solution a) By the Central Limit Theorem (CLT), the mean of the sampling distribution $\mu_{\bar{x}}$ equals the mean of the population, which was given as µ = 18.07. The standard deviation of the sampling distribution, by the CLT, is the population standard deviation divided by the square root of the sample size, $\sigma_{\bar{x}}=\frac{\sigma}{\sqrt{n}}=\frac{5.27}{\sqrt{25}}=1.054$.

b) The population mean = 18.07 is smaller than the median = 19; therefore the distribution is negatively skewed, since the mean is pulled in the direction of the outliers.

c) Using the Sampling Distribution Applet and the CLT, the sampling distribution will be bell-shaped; therefore, graph 3 has to be the sampling distribution. Graphs 1 & 2 in Figure 6-32 are both negatively skewed. A single sample of 25 should look similar to the entire population, but with only 25 items we would not expect every possible score to appear among the 25 students. Graph 1 in Figure 6-32 fits this description, so the graph of the distribution of a single sample (which is not the same thing as the sampling distribution) is graph 1. This leaves graph 2 as the distribution of the population. Figure 6-33 is a picture of the applet modeling the exam scores. Note the top picture is the population distribution, the second graph simulates a single sample drawn, and the bottom picture is a graph of the sample means from each sample. This last graph is the sampling distribution of the mean.

d) $\bar{X}$ is approximately normally distributed with mean $\mu_{\bar{x}}$ = 18.07 and standard deviation $\sigma_{\bar{x}}=\frac{\sigma}{\sqrt{n}}=\frac{5.27}{\sqrt{25}}=1.054$, and we want P($\bar{X}$ > 20). Draw and shade the sampling distribution curve. This calculator can be used to draw and shade the sampling distribution: http://homepage.divms.uiowa.edu/~mbognar/applets/normal.html; filling in the mean μ, standard deviation $\frac{\sigma}{\sqrt{n}}$ and x-value (in this case the sample mean) will find the probability. See Figure 6-34. TI Calculator: normalcdf(20,1E99,18.07,5.27/√25) = 0.0335. Excel: P($\bar{X}$ > 20) = 1-NORM.DIST(20,18.07,5.27/SQRT(25),TRUE) = 0.0335.

Let X be the height of 15-year-old boys in the United States.
Studies show that the heights of 15-year-old boys in the United States are normally distributed with an average height of 67 inches and a standard deviation of 2.5 inches. A random experiment consists of randomly choosing sixteen 15-year-old boys. Compute the probability that the mean height of those sampled is 69.5 inches or taller.

Solution The sample mean $\bar{X}$ is approximately $N(67, 0.625)$. $\mathrm{P}(\bar{X} \geq 69.5)=P\left(\frac{\bar{X}-67}{0.625} \geq \frac{69.5-67}{0.625}\right)=\mathrm{P}(Z \geq 4) \approx 0.00003$, using the calculator; be careful with the scientific notation. This is a very small probability. This should make sense because one would think that the likelihood of randomly selecting 16 boys with an average height of 5’9.5” would be slim. Figure 6-35 shows the density curves with the shaded areas of P(X ≥ 69.5) and P($\bar{X}$ ≥ 69.5). The sampling distribution has a much smaller spread (standard deviation) and hence less area to the right of 69.5.

In general, Central Limit Theorem questions use the same method as previous sections; however, you will use a standard deviation of $\frac{\sigma}{\sqrt{n}}$ and a z-score of $z=\frac{\bar{x}-\mu}{\left(\frac{\sigma}{\sqrt{n}}\right)}$.

The average teacher’s salary in Connecticut (ranked first among states) is \$57,337. Suppose that the distribution of salaries is normally distributed with a standard deviation of \$7,500. a) What is the probability that a randomly selected teacher makes less than \$55,000 per year? b) If we sample 10 teachers’ salaries, what is the probability that the sample mean is less than \$55,000? c) If we sample 100 teachers’ salaries, what is the probability that the sample mean is less than \$55,000?

Solution a) Find P(X < 55000); since we are only looking at one person, use $z=\frac{x-\mu}{\sigma}$. If we were asked to standardize the salary, $z=\frac{55000-57337}{7500}=-0.3116$; however, we can use technology and skip this step. Use normalcdf(-1E99,55000,57337,7500) (on the TI-89 use -∞ for the lower boundary instead of -1E99) and you get a probability of 0.3777. Thus P(X < 55000) = P(Z < –0.3116) = 0.3777. Note that we are not using the CLT here, since we are not finding the probability of an average for a group of people, just the probability for one person.

b) Find P($\bar{X}$ < 55000); since we are looking at the probability of a mean for 10 teachers, use $z=\frac{\bar{x}-\mu}{\left(\frac{\sigma}{\sqrt{n}}\right)}$. Standardize the salary, $z=\frac{55000-57337}{\left(\frac{7500}{\sqrt{10}}\right)}=-0.9854$, and use your calculator to get P($\bar{X}$ < 55000) = P(Z < –0.9854) = 0.1622. You do not need the extra step of finding the z-score first. Instead, you can use normalcdf(-1E99,55000,57337,7500/√10) = 0.1622.

c) Find P($\bar{X}$ < 55000); since we are looking at the probability of a mean for 100 teachers, use $z=\frac{\bar{x}-\mu}{\left(\frac{\sigma}{\sqrt{n}}\right)}$. Standardize the salary, $z=\frac{55000-57337}{\left(\frac{7500}{\sqrt{100}}\right)}=-3.116$, and use your calculator, normalcdf(-1E99,55000,57337,7500/√100), to get P($\bar{X}$ < 55000) = P(Z < –3.116) = 0.0009167.

As the sample size increases, the probability of seeing a sample mean of less than \$55,000 gets smaller. When you have a z-score that is less than –3 or greater than 3, we would call this a rare event or outlier. We will use this same process in inferential statistics in Chapter 8.
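For readers working outside of Excel or a TI calculator, the probabilities in the last three examples can be reproduced with any normal-distribution routine. The sketch below assumes Python with scipy.stats; it is an illustration, not the textbook’s own method.

# Sketch (assumed, not from the text): CLT probabilities via scipy.stats.norm.
from math import sqrt
from scipy.stats import norm

# Exam scores: P(x-bar > 20), mu = 18.07, sigma = 5.27, n = 25
print(norm.sf(20, loc=18.07, scale=5.27 / sqrt(25)))     # about 0.0335

# Boys' heights: P(x-bar >= 69.5), mu = 67, sigma = 2.5, n = 16
print(norm.sf(69.5, loc=67, scale=2.5 / sqrt(16)))       # about 0.00003

# Teacher salaries: P(x-bar < 55000) for n = 1, 10, 100
for n in (1, 10, 100):
    print(n, norm.cdf(55000, loc=57337, scale=7500 / sqrt(n)))
# about 0.3777, 0.1622, and 0.0009, matching the worked answers above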
Chapter 6 Exercises 1. The waiting time for a bus is uniformly distributed between 0 and 15 minutes. What is the probability that a person has to wait at most 8 minutes for a bus? 2. The distance a golf ball travels when hit by a driver club is uniformly distributed between 200 and 300 feet. What is the probability that the ball hit by a driver club will travel at least 280 feet? 3. The time it takes me to wash the dishes is uniformly distributed between 7 minutes and 16 minutes. What is the probability that washing dishes tonight will take me between 9 and 14 minutes? 4. Bus wait times are uniformly distributed between 5 minutes and 18 minutes. Compute the probability that a randomly selected bus wait time will be between 9 and 13 minutes. 5. The lengths of an instructor’s classes have a continuous uniform distribution between 60 and 90 minutes. If one such class is randomly selected, find the probability that the class length is less than 70.7 minutes. 6. The time it takes for students to finish an exam is uniformly distributed between 1 and 2 hours. What is the probability that a randomly selected student finishes their exam in at least 95 minutes? 7. Suppose the commuter trains of a public transit system have a waiting time during peak rush hour periods of fifteen minutes. Assume the waiting times are uniformly distributed. Find the probability of waiting between 7 and 8 minutes. 8. Suppose that elementary students' ages are uniformly distributed from 5 to 13 years old. Compute the probability that a randomly selected elementary student will be between ages 7.07 and 10.33 years old. 9. The amount of gasoline sold daily at a service station is uniformly distributed with a minimum of 2,000 gallons and a maximum of 5,000 gallons. a) Compute the probability that the service station will sell at least 4,000 gallons. b) Compute the probability that daily sales will fall between 2,500 and 3,000 gallons. c) What is the probability that the station will sell exactly 2,500 gallons? 10. The sojourn time (waiting time plus service time) for customers waiting at a movie theater box office is exponentially distributed with a mean of 4 minutes. Find the following. a) The probability that the sojourn time will be more than 5 minutes. b) The probability that the sojourn time will be less than 5 minutes. c) The probability that the sojourn time will be at most 5 minutes. 11. Suppose that the distance, in miles, that people are willing to commute to work is exponentially distributed with mean 24 miles. What is the probability that people are willing to commute at most 12 miles to work? 12. On average, a pair of running shoes lasts 13 months if used every day. The length of time running shoes last is exponentially distributed. What is the probability that a pair of running shoes lasts less than 6 months if used every day? 13. The lifetime of a light emitting diode (LED) lightbulb is exponentially distributed with an average lifetime of 5,000 hours. Find the following. a) The probability that the LED lightbulb will last more than 3,500 hours. b) The probability that the LED lightbulb will last less than 4,000 hours. c) The probability that the LED lightbulb will last between 3,500 and 4,500 hours. 14. The number of days ahead travelers purchase their airline tickets can be modeled by an exponential distribution with the average amount of time equal to 50 days. a) Compute the probability that a traveler purchases a ticket no more than 85 days ahead of time.
b) Compute the probability that a traveler purchases a ticket more than 90 days ahead of time. c) Compute the probability that a traveler purchases a ticket within 85 and 90 days ahead of the flight date. 15. The average time it takes a salesperson to finish a sale on the phone is 5 minutes and is exponentially distributed. a) Compute the probability that less than 10 minutes pass before a sale is completed. b) Compute the probability that more than 15 minutes pass before a sale is completed. c) Compute the probability that between 10 and 15 minutes pass before a sale is completed. 16. A brand of battery is advertised to last 3 years. Assume the advertised claim is true and that the time the battery lasts is exponentially distributed. What is the probability that one of these batteries will last within 15 months of the advertised average? 17. For a standard normal distribution, find the following probabilities. a) P(Z > -2.06) b) P(-2.83 < Z < 0.21) c) P(Z < 1.58) d) P(Z ≥ 1.69) e) P(Z < -2.82) f) P(Z > 2.14) g) P(1.97 ≤ Z ≤ 2.93) h) P(Z ≤ -0.51) 18. Compute the following probabilities where Z ~ N(0,1). a) P(Z < 1.57) b) P(Z > -1.24) c) P(-1.96 ≤ Z ≤ 1.96) d) P(Z ≤ 3) e) P(1.31 < Z < 2.15) f) P(Z ≥ 1.8) 19. Compute the following probabilities where Z ~ N(0,1). a) P(Z ≤ -2.03) b) P(Z > 1.58) c) P(-1.645 ≤ Z ≤ 1.645) d) P(Z < 2) e) P(-2.38 < Z < -1.12) f) P(Z ≥ -1.75) 20. Compute the area under the standard normal distribution to the left of z = -1.05. 21. Compute the area under the standard normal distribution to the left of z = -0.69. 22. Compute the area under the standard normal distribution to the right of z = 2.08. 23. Compute the area under the standard normal distribution to the right of z = 1.22. 24. Compute the area under the standard normal distribution between z = -0.29 and z = 0.14. 25. Compute the area under the standard normal distribution between z = -2.97 and z = -2.14. 26. Compute the area under the curve of the standard normal distribution that is within 1.328 standard deviations from either side of the mean. 27. Compute the area under the standard normal distribution to the left of z = 0.85. 28. Compute the z-score that has an area of 0.85 to the left of the z-score. 29. For the standard normal distribution, find the z-score that gives the 29th percentile. 30. For the standard normal distribution, find the z-score that gives the 75th percentile. 31. Compute the two z-scores that give the middle 99% of the standard normal distribution. 32. Compute the two z-scores that give the middle 95% of the standard normal distribution. 33. Find the IQR for the standard normal distribution. 34. The length of a human pregnancy is normally distributed with a mean of 272 days with a standard deviation of 9 days (Bhat & Kushtagi, 2006). a) Compute the probability that a pregnancy lasts longer than 281 days. b) Compute the probability that a pregnancy lasts less than 250 days. c) How many days would a pregnancy last for the shortest 20%? 35. Arm span is the physical measurement of the length of an individual's arms from fingertip to fingertip. A man’s arm span is approximately normally distributed with mean of 70 inches with a standard deviation of 4.5 inches. a) Compute the probability that a randomly selected man has an arm span below 65 inches. b) Compute the probability that a randomly selected man has an arm span between 60 and 72 inches. c) Compute the length in inches of the 99th percentile for a man’s arm span. 36. The size of fish is very important to commercial fishing. 
A study conducted in 2012 found the length of Atlantic cod caught in nets in Karlskrona to have a mean of 49.9 cm and a standard deviation of 3.74 cm (Ovegard, Berndt & Lunneryd, 2012). Assume the length of fish is normally distributed. a) Compute the probability that a cod is longer than 55 cm. b) What is the length in cm of the longest 15% of Atlantic cod in this area? 37. A dishwasher has a mean life of 12 years with an estimated standard deviation of 1.25 years ("Appliance life expectancy," 2013). Assume the life of a dishwasher is normally distributed. a) Compute the probability that a dishwasher will last less than 10 years. b) Compute the probability that a dishwasher will last between 8 and 10 years. c) Compute the number of years that the bottom 25% of dishwashers would last. 38. The price of a computer is normally distributed with a mean of \$1400 and a standard deviation of \$60. a) What is the probability that a buyer paid less than \$1220? b) What is the probability that a buyer paid between \$1400 and \$1580? c) What is the probability that a buyer paid more than \$1520? d) What is the probability that a buyer paid between \$1340 and \$1460? e) What is the probability that a buyer paid between \$1400 and \$1520? f) What is the probability that a buyer paid between \$1400 and \$1460? 39. Heights of 10-year-old children, regardless of sex, closely follow a normal distribution with mean 55.7 inches and standard deviation 6.8 inches. a) Compute the probability that a randomly chosen 10-year-old child is less than 50.4 inches. b) Compute the probability that a randomly chosen 10-year-old child is more than 59.2 inches. c) What proportion of 10-year-old children are between 50.4 and 61.5 inches tall? d) Compute the 85th percentile for 10-year-old children. 40. The mean yearly rainfall in Sydney, Australia, is about 137 mm and the standard deviation is about 69 mm ("Annual maximums of," 2013). Assume rainfall is normally distributed. How many mm of yearly rainfall would there be in the top 25%? 41. The mean daily milk production of a herd of cows is assumed to be normally distributed with a mean of 33 liters and a standard deviation of 10.3 liters. Compute the probability that daily production is more than 40.9 liters. 42. The amount of time to complete a physical activity in a PE class is normally distributed with a mean of 33.2 seconds and a standard deviation of 5.8 seconds. Round answers to 4 decimal places. a) What is the probability that a randomly chosen student completes the activity in less than 28.9 seconds? b) What is the probability that a randomly chosen student completes the activity in more than 37.2 seconds? c) What proportion of students take between 28.5 and 37.3 seconds to complete the activity? d) 70% of all students finish the activity in less than _____ seconds. 43. A study was conducted on students from a particular high school over the last 8 years. The following information was found regarding standardized tests used for college admittance. Scores on the SAT test are normally distributed with a mean of 1023 and a standard deviation of 204. Scores on the ACT test are normally distributed with a mean of 19.3 and a standard deviation of 5.2. It is assumed that the two tests measure the same aptitude, but use different scales. a) Compute the SAT score that is the 50th percentile. b) Compute the ACT score that is the 50th percentile. c) If a student gets an SAT score of 1288, find their equivalent ACT score. Go out at least 5 decimal places between steps. 44.
Delivery times for shipments from a central warehouse are exponentially distributed with a mean of 1.73 days (note that times are measured continuously, not just in number of days). The standard deviation of an exponential distribution is equal to the mean of 1.73 days. A random sample of 108 shipments are selected and their shipping times are observed. Use the Central Limit Theorem to find the probability that the mean shipping time for the 108 shipments is less than 1.53 days. 45. The MAX light rail in Portland, OR has a waiting time that is uniformly distributed with a mean waiting time of 5 minutes with a standard deviation of 2.9 minutes. A random sample of 40 wait times was selected. What is the probability the sample mean wait time is under 4 minutes? 46. The average credit card debt back in 2016 was \$16,061 with a standard deviation of \$4100. What is the probability that a sample of 35 people owe a mean of more than \$18,000? 47. A certain brand of electric bulbs has an average life of 300 hours with a standard deviation of 45. A random sample of 100 bulbs is tested. What is the probability that the sample mean will be less than 295? 48. Assume that the birth weights of babies are normally distributed with a mean of 3363 grams and a standard deviation of 563 grams. a) Compute the probability that a randomly selected baby weighs between 3200 grams and 3600 grams. b) Compute the probability that the average weight of 30 randomly selected babies is between 3200 grams and 3600 grams. c) Why did the probability increase? 49. If the Central Limit Theorem is applicable, this means that the sampling distribution of a __________ population can be treated as normal since the __________ is __________. a) symmetrical; variance; large b) positively skewed; sample size; small c) negatively skewed; standard deviation; large d) non-normal; mean; large e) negatively skewed; sample size; large 50. Match the following 3 graphs with the distribution of the population, the distribution of the sample, and the sampling distribution. a) Distribution of the Population b) Distribution of the Sample c) Sampling Distribution i. ii. iii. 51. Match the following 3 graphs with the distribution of the population, the distribution of the sample, and the sampling distribution. a) Distribution of the Population b) Distribution of the Sample c) Sampling Distribution i. ii. iii. 52. Match the following 3 graphs with the distribution of the population, the distribution of the sample, and the sampling distribution. a) Distribution of the Population b) Distribution of the Sample c) Sampling Distribution i. ii. iii. 
6.07: Chapter 6 Formulas

Uniform Distribution: $f(x)=\frac{1}{b-a}, \text { for } a \leq x \leq b$
• $\mathrm{P}(X \geq x)=\mathrm{P}(X>x)=\left(\frac{1}{b-a}\right) \cdot(b-x)$
• $\mathrm{P}(X \leq x)=\mathrm{P}(X<x)=\left(\frac{1}{b-a}\right) \cdot(x-a)$
• $\mathrm{P}\left(x_{1} \leq X \leq x_{2}\right)=\mathrm{P}\left(x_{1}<X<x_{2}\right)=\left(\frac{1}{b-a}\right) \cdot\left(x_{2}-x_{1}\right)$

Exponential Distribution: $f(x)=\frac{1}{\mu} e^{\left(-\frac{x}{\mu}\right)}, \text { for } x \geq 0$
• $\mathrm{P}(X \geq x)=\mathrm{P}(X>x)=e^{-x / \mu}$
• $\mathrm{P}(X \leq x)=\mathrm{P}(X<x)=1-e^{-x / \mu}$
• $\mathrm{P}\left(x_{1} \leq X \leq x_{2}\right)=\mathrm{P}\left(x_{1}<X<x_{2}\right)=e^{\left(-\frac{x_{1}}{\mu}\right)}-e^{\left(-\frac{x_{2}}{\mu}\right)}$

Standard Normal Distribution: $\mu=0, \sigma=1$; z-score: $z=\frac{x-\mu}{\sigma}$; $x=z \sigma+\mu$

Central Limit Theorem z-score: $z=\frac{\bar{x}-\mu}{\left(\frac{\sigma}{\sqrt{n}}\right)}$

Normal Distribution Probabilities:
P(X ≤ x) = P(X < x): Excel: =NORM.DIST(x,µ,σ,TRUE); TI-84: normalcdf(-1E99,x,µ,σ)
P(X ≥ x) = P(X > x): Excel: =1–NORM.DIST(x,µ,σ,TRUE); TI-84: normalcdf(x,1E99,µ,σ)
P(x1 ≤ X ≤ x2) = P(x1 < X < x2): Excel: =NORM.DIST(x2,µ,σ,TRUE)–NORM.DIST(x1,µ,σ,TRUE); TI-84: normalcdf(x1,x2,µ,σ)

Percentiles for the Normal Distribution:
P(X ≤ x) = P(X < x): Excel: =NORM.INV(area,µ,σ); TI-84: invNorm(area,µ,σ)
P(X ≥ x) = P(X > x): Excel: =NORM.INV(1–area,µ,σ); TI-84: invNorm(1–area,µ,σ)
P(x1 ≤ X ≤ x2) = P(x1 < X < x2): Excel: x1 =NORM.INV((1–area)/2,µ,σ), x2 =NORM.INV(1–((1–area)/2),µ,σ); TI-84: x1 = invNorm((1–area)/2,µ,σ), x2 = invNorm(1–((1–area)/2),µ,σ)
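As a supplement to the Excel and TI-84 commands above, the following sketch shows equivalent calls using Python’s scipy.stats. This is an assumption of this edition, not part of the original formula sheet; the example numbers are arbitrary illustrations.

# Sketch (assumed): scipy.stats equivalents of the formula-sheet calculations.
from scipy.stats import uniform, expon, norm

# Uniform on [a, b]: scipy uses loc = a and scale = b - a
a, b = 0, 15
print(uniform.cdf(8, loc=a, scale=b - a))     # P(X <= 8) = 8/15

# Exponential with mean mu: P(X > x) = e^(-x/mu), scale = mu
mu = 4
print(expon.sf(5, scale=mu))                  # e^(-5/4)

# Normal probabilities and percentiles, like NORM.DIST / NORM.INV
print(norm.cdf(1.58, loc=0, scale=1))         # area to the left of z = 1.58
print(norm.ppf(0.975, loc=0, scale=1))        # z-score with 0.975 area to the left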
Statistical inference is used to draw conclusions about a population based on a sample. We can use probability distributions and the Central Limit Theorem to understand what is going on in the population. The population can be difficult to measure, so we take a sample from that population and use descriptive statistics to measure the sample. We can then use those sample statistics to infer what is happening in the population. Although there are many types of statistical inference tools, we will only cover some of the more common techniques.

Distinguishing between a population and a sample is important in statistics. We frequently use a representative sample to make generalizations about a population.
• A statistic is any characteristic or measure from a sample. One example is the sample mean $\overline{ x }$.
• A parameter is any characteristic or measure from a population. One example is the population mean µ.
• A point estimate for a parameter (a characteristic from a population) is a statistic (a characteristic from a sample). For example, the point estimate for the population mean µ is the sample mean $\overline{ x }$. The point estimate for the population standard deviation σ is the sample standard deviation s, etc.
• A 100(1 – α)% confidence interval for a population parameter (μ, σ, etc.) is constructed so that, if the sampling process were repeated many times, 100(1 – α)% of the resulting intervals would contain the true value of the population parameter.
• The confidence level (or level of confidence) is 1 – α. The common percentages used for confidence levels are 90%, 95%, and 99%. The corresponding values of alpha are: for 90%, α = 0.10 = 10%; for 95%, α = 0.05 = 5%; and for 99%, α = 0.01 = 1%. In this context, α, “alpha,” represents the complement of the confidence level, and its definition will be explained in the next chapter.

When a symmetric distribution, such as a normal distribution, is used, confidence intervals are always of the form: point estimate ± margin of error. The margin of error defines the “radius” of the interval necessary to obtain the desired confidence level. The margin of error depends on the desired confidence level. Higher levels of confidence come at a cost, namely larger margins of error, which means our estimate will be less precise. The margin of error formula will usually include a value from a sampling distribution called the critical value. The critical value measures the number of standard errors to be added and subtracted in order to achieve your desired confidence level, based on the α level chosen.

For large sample sizes, the sampling distribution of a mean is normal. When α = 0.05 we can use the standard normal values that bound the middle 95% of the distribution, since 100(1 – 0.05)% = 95%. The two critical values are –zα/2 and +zα/2, as shown in Figure 7-1. Note: in the notation zα/2, the α/2 represents the area in each of the tails. Figure 7-1

Assumption: If the sample size is small (n < 30), the population we are sampling from must be normal. If the sample size is “large” (n ≥ 30), the Central Limit Theorem guarantees that the sampling distribution will be approximately normal no matter how the population distribution is distributed.
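A quick way to see where the common critical values come from is to compute zα/2 directly from the standard normal distribution. The snippet below is a sketch assuming Python with scipy.stats; it is not part of the original text.

# Sketch (assumed): critical values z_(alpha/2) for the common confidence levels.
from scipy.stats import norm

for conf in (0.90, 0.95, 0.99):
    alpha = 1 - conf
    z = norm.ppf(1 - alpha / 2)          # leaves area alpha/2 in each tail
    print(f"{conf:.0%}: z = {z:.3f}")    # 1.645, 1.960, 2.576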
Suppose you want to estimate the population proportion, p. As an example, an administrator may want to know what proportion of students at your school smoke. An insurance company may want to know what proportion of accidents are caused by teenage drivers who have not taken a driver’s education class. Every time we collect data from a new sample, we would expect the estimate of the proportion to change slightly. A range of values over an interval gives a better estimate of where the population proportion falls. This range of values that better predicts the true population parameter is called an interval estimate or confidence interval.

The sample proportion $\hat{p}$ is the point estimate for p, the standard error (the standard deviation of the sampling distribution) of $\hat{p}$ is $\sqrt{\left(\frac{\hat{p} \cdot \hat{q}}{n}\right)}$, zα/2 is the critical value from the standard normal distribution, and the margin of error is $\mathrm{E}=z_{\alpha / 2} \sqrt{\left(\frac{\hat{p} \cdot \hat{q}}{n}\right)}$. Some textbooks use $\pi$ instead of p for the population proportion, and $\bar{p}$ (pronounced “p-bar”) instead of $\hat{p}$ for the sample proportion.

Choose a simple random sample of size n from a population having unknown population proportion p. The 100(1 – $\alpha$)% confidence interval estimate for p is given by $\hat{p} \pm z_{\alpha / 2} \sqrt{\left(\frac{\hat{p} \hat{q}}{n}\right)}$, where $\hat{p}=\frac{x}{n}=\frac{\# \text { of successes }}{\# \text { of trials }}$ (read as “p hat”) is the sample proportion and $\hat{q}=1-\hat{p}$ is its complement. The confidence interval can be expressed as an inequality or an interval of values: $\hat{p}-z_{\alpha / 2} \sqrt{\left(\frac{\hat{p} \hat{q}}{n}\right)}<p<\hat{p}+z_{\alpha / 2} \sqrt{\left(\frac{\hat{p} \hat{q}}{n}\right)} \quad \text { or } \quad\left(\hat{p}-z_{\alpha / 2} \sqrt{\left(\frac{\hat{p} \hat{q}}{n}\right)}, \hat{p}+z_{\alpha / 2} \sqrt{\left(\frac{\hat{p} \hat{q}}{n}\right)}\right)$

Assumption: $n \cdot \hat{p} \geq 10$ and $n \cdot \hat{q} \geq 10$. *This assumption must be addressed before using these statistical inferences. This formula is derived from the normal approximation to the binomial distribution, so the conditions for a binomial must be met, namely a fixed number of independent trials, only two outcomes per trial, and the same probability of success on each trial.

Steps for Calculating a Confidence Interval
1. State the random variable and the parameter in words. x = number of successes; p = proportion of successes.
2. State and check the assumptions for the confidence interval. a. A simple random sample of size n is taken. b. The conditions for the binomial distribution are satisfied. c. To determine the sampling distribution of $\hat{p}$, you need to show that $n \cdot \hat{p} \geq 10$ and $n \cdot \hat{q} \geq 10$, where $\hat{q}$ = 1 − $\hat{p}$. If this requirement is true, then the sampling distribution of $\hat{p}$ is well approximated by a normal curve. (In reality, this is not quite the correct condition, since the assumption actually deals with p. However, in a confidence interval you do not know p, so you must use $\hat{p}$. In practice, this means you just need to show that x ≥ 10 and n – x ≥ 10.)
3. Compute the sample statistic $\hat{p}=\frac{x}{n}$ and the confidence interval $\hat{p} \pm z_\frac{\alpha}{2} \sqrt{\left(\frac{\hat{p} \hat{q}}{n}\right)}$.
4.
Statistical Interpretation: In general, this looks like: “We can be (1 – α)*100% confident that the interval $\hat{p}-z_\frac{\alpha}{2} \sqrt{\left(\frac{\hat{p} \hat{q}}{n}\right)}<p<\hat{p}+z_\frac{\alpha}{2} \sqrt{\left(\frac{\hat{p} \hat{q}}{n}\right)}$ contains the population proportion (described in the context of the problem).”
5. Real World Interpretation: This is where you state what interval contains the true proportion, in plain language.

A concern was raised in Australia that the percentage of deaths of indigenous Australian prisoners was higher than the percentage of deaths of nonindigenous Australian prisoners, which is 0.27%. A sample of six years (1990–1995) of data was collected, and it was found that out of 14,495 indigenous Australian prisoners, 51 died (“Indigenous deaths in,” 1996). Find a 95% confidence interval for the proportion of indigenous Australian prisoners who died.

Solution
1. State the random variable and the parameter in words. x = number of indigenous Australian prisoners who die; p = proportion of indigenous Australian prisoners who die.
2. State and check the assumptions for a confidence interval. a. A simple random sample of 14,495 indigenous Australian prisoners was taken. However, the sample was not a random sample, since it was data from six years. It is the numbers for all prisoners in these six years, but the six years were not picked at random. Unless there was something special about the six years that were chosen, the sample is probably a representative sample. This assumption is probably met. b. There are 14,495 prisoners in this case. The prisoners are all indigenous Australians, so you are not mixing indigenous Australian with nonindigenous Australian prisoners. There are only two outcomes: the prisoner either dies or does not. The chance that one prisoner dies over another may not be constant, but if you consider all prisoners the same, then it may be close to the same probability. Thus, the assumptions for the binomial distribution are satisfied. c. In this case, x = 51 and n – x = 14,495 – 51 = 14,444. Both are greater than or equal to 10. The sampling distribution for $\hat{p}$ is a normal distribution.
3. Compute the sample statistic and the confidence interval. Sample proportion: $\hat{p}=\frac{x}{n}=\frac{51}{14495}=0.003518$. Critical value: $z_{\alpha / 2}=1.96$, since we have a 95% confidence level. Margin of error: $\mathrm{E}=z_{\alpha / 2} \sqrt{\left(\frac{\hat{p} \cdot \hat{q}}{n}\right)}=1.96 \sqrt{\left(\frac{0.003518(1-0.003518)}{14495}\right)}=0.000964$. Confidence interval: $\hat{p}-\mathrm{E}<p<\hat{p}+\mathrm{E}$, so 0.003518 – 0.000964 < p < 0.003518 + 0.000964, which gives 0.002554 < p < 0.004482 or (0.002554, 0.004482).
4. Statistical Interpretation: We can be 95% confident that the interval 0.002554 < p < 0.004482 contains the proportion of all indigenous Australian prisoners who died.
5. Real World Interpretation: We can be 95% confident that the percentage of all indigenous Australian prisoners who died is between 0.26% and 0.45%.

Using Technology
Excel has no built-in shortcut for finding a confidence interval for a proportion, but if you type in the formulas shown in the screenshot you can make your own Excel calculator, where you just change the highlighted cells and all the numbers below them update with the relevant information. Type in the formulas, being cognizant of the cell reference numbers. The last two numbers produced are your confidence interval limits. Make sure to put your answer in interval notation, (0.002554, 0.004482), or as 0.26% < p < 0.45%.
You can also do the calculations for the confidence interval with the TI calculator.

TI-84: Press the [STAT] key, arrow over to the [TESTS] menu, arrow down to the [A:1-PropZInterval] option and press the [ENTER] key. Then type in the values for x, sample size and confidence level, arrow down to [Calculate] and press the [ENTER] key. The calculator returns the answer in interval notation. Note: Sometimes you are not given the x value but a percentage instead. To find the x to use in the calculator, multiply $\hat{p}$ by the sample size and round off to the nearest integer. The calculator will give you an error message if you put in a decimal for x or n. For example, if $\hat{p}$ = 0.22 and n = 124 then 0.22*124 = 27.28, so use x = 27.

TI-89: Go to the [Apps] Stat/List Editor, then press [2nd] then F7 [Ints], then select 5: 1-PropZInt. Type in the values for x, sample size and confidence level, and press the [ENTER] key. The calculator returns the answer in interval notation. Note: Sometimes you are not given the x value but a percentage instead. To find the x value to use in the calculator, multiply $\hat{p}$ by the sample size and round off to the nearest integer. The calculator will give you an error message if you put in a decimal for x or n. For example, if $\hat{p}$ = 0.22 and n = 124 then 0.22*124 = 27.28, so use x = 27.

A researcher studying the effects of income level on new mothers breastfeeding their infants hypothesizes that countries with lower income levels have a higher rate of breastfeeding than higher-income countries. It is known that in Germany, considered a high-income country by the World Bank, 22% of all babies are breastfed. In Tajikistan, considered a low-income country by the World Bank, researchers found that in a random sample of 500 new mothers, 125 were breastfeeding their infants. Find a 90% confidence interval for the proportion of mothers in low-income countries who breastfeed their infants.

Solution
1. State the random variable and the parameter in words. x = the number of new mothers who breastfeed in a low-income country; p = the proportion of new mothers who breastfeed in a low-income country.
2. State and check the assumptions for a confidence interval. a. A simple random sample of the breastfeeding habits of 500 new mothers in a low-income country was taken, as was stated in the problem. b. There were 500 women in the study. The women are considered identical, though they probably have some differences. There are only two outcomes: either the woman breastfeeds her baby or she does not. The probability of a woman breastfeeding her baby is probably not the same for each woman, but it is probably not that different for each woman. The assumptions for the binomial distribution are satisfied. c. x = 125 and n – x = 500 – 125 = 375, and both are greater than or equal to 10, so the sampling distribution of $\hat{p}$ is well approximated by a normal curve.
3. Compute the sample statistic and the confidence interval. On the TI-83/84: Go into the STAT menu. Move over to TESTS and choose 1-PropZInt, then press Calculate.
4. Statistical Interpretation: We are 90% confident that the interval 0.218 < p < 0.282 contains the population proportion of all women in low-income countries who breastfeed their infants.
5. Real World Interpretation: The proportion of women in low-income countries who breastfeed their infants is between 0.218 and 0.282, with 90% confidence.
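The two confidence intervals in this section can also be reproduced without a calculator. The sketch below assumes Python with scipy.stats; prop_ci is a hypothetical helper name, not a built-in function, and the assumption check (x ≥ 10 and n – x ≥ 10) is still the reader’s responsibility.

# Sketch (assumed): one-proportion z-interval, matching the formula in this section.
from math import sqrt
from scipy.stats import norm

def prop_ci(x, n, conf=0.95):
    p_hat = x / n
    z = norm.ppf(1 - (1 - conf) / 2)             # critical value z_(alpha/2)
    e = z * sqrt(p_hat * (1 - p_hat) / n)        # margin of error
    return p_hat - e, p_hat + e

print(prop_ci(51, 14495, 0.95))   # about (0.002554, 0.004482), the prisoner example
print(prop_ci(125, 500, 0.90))    # about (0.218, 0.282), the breastfeeding example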
The sample size needed to estimate a population proportion p (with q = 1 – p) to within a specified margin of error E is given by: $n=p^{*} \cdot q^{*}\left(\frac{z_{\alpha / 2}}{E}\right)^{2}$. Always round up to the next whole number. Note: Since the sample size is determined before the sample is selected, the p* and q* in the above equation are our best guesses. Often, statisticians will use p* = q* = 0.5; this takes the guesswork out of determining p* and provides the “worst case scenario” for n. In other words, if p* = 0.5 is used, then you are guaranteed that the margin of error will not exceed E, but you also will have to take the largest possible sample size. Some texts use p or π instead of p*.

A study found that 73% of prekindergarten children ages 3 to 5 whose mothers had a bachelor’s degree or higher were enrolled in early childhood care and education programs. a) How large a sample is needed to estimate the true proportion within 3% with 95% confidence? b) How large a sample is needed if you had no prior knowledge of the proportion?

Solution a) Use $n=p^{*} \cdot q^{*}\left(\frac{z_{\alpha / 2}}{E}\right)^{2}=0.73 \cdot 0.27\left(\frac{1.96}{0.03}\right)^{2}=841.3104$. Since we cannot have 0.3104 of a person, we need to round up to the next whole person and use n = 842. Do not round down, since we may not get within our margin of error with a smaller sample size.

b) Since no proportion is given, use the planning value p* = 0.5. $n=0.5 \cdot 0.5\left(\frac{1.96}{0.03}\right)^{2}=1067.1111$. Round up and use n = 1,068.

Note the sample sizes of 842 and 1,068. If you have prior knowledge about the sample proportion, then you may not have to sample as many people to get the same margin of error. The larger the sample size, the narrower the confidence interval.

7.04: Z-Interval for a Mean

Suppose you want to estimate the mean weight of newborn infants, or you want to estimate the mean salary of college graduates. A confidence interval for the mean would be the way to estimate these means.

A 100(1 – $\alpha$)% confidence interval for a population mean μ (σ known): Choose a simple random sample of size n from a population having unknown mean μ. The 100(1 – $\alpha$)% confidence interval estimate for μ is given by $\bar{x} \pm z_{\alpha / 2}\left(\frac{\sigma}{\sqrt{n}}\right)$. The point estimate for μ is $\overline{ x }$, and the margin of error is $z_{\alpha / 2}\left(\frac{\sigma}{\sqrt{n}}\right)$, where $z_\frac{\alpha}{2}$ is the value on the standard normal curve with area 1 – $\alpha$ between the critical values –z$\alpha$/2 and +z$\alpha$/2, as shown below in Figure 7-2. Note: in the notation z$\alpha$/2, the $\alpha$/2 represents the area in each of the tails; see Figure 7-2. The confidence interval can be expressed as an inequality or an interval of values: $\bar{x}-z_{\alpha / 2} \cdot \frac{\sigma}{\sqrt{n}}<\mu<\bar{x}+z_{\alpha / 2} \cdot \frac{\sigma}{\sqrt{n}} \quad \text { or } \quad\left(\bar{x}-z_{\alpha / 2} \cdot \frac{\sigma}{\sqrt{n}}, \bar{x}+z_{\alpha / 2} \cdot \frac{\sigma}{\sqrt{n}}\right)$

Assumptions: 1. If the sample size is small (n < 30), the population we are sampling from must be normally distributed. If the sample size is “large” (n ≥ 30), the Central Limit Theorem guarantees that the sampling distribution of the mean will be approximately normal no matter how the population distribution is distributed. 2. The population standard deviation σ must be known. Most of the time we are using a σ from a similar study or a prior year’s data.
If you have only a sample standard deviation, then we will use a different method, introduced in a later section. These assumptions must be addressed before using these statistical inferences. In most cases we do not know the population standard deviation, so we will not use the z-interval. Instead, we will use a different sampling distribution called the Student’s t-distribution, or t-distribution for short.

Suppose we select a random sample of 100 pennies in circulation in order to estimate the average age of all pennies that are still in circulation. The sample average age, in years, was found to be $\overline{ x }$ = 14.6. For the sake of this example, let us assume that the population standard deviation is 4 years. Find a 95% confidence interval for the true average age of pennies that are still in circulation.

Solution We can use the z-interval above because σ is known and, although the shape of the population distribution is unknown, the sample size is over 30. Use Excel or your calculator to find z$\alpha$/2 for a 95% confidence interval. In Excel use =NORM.INV(lower tail area, mean, standard deviation). It is easier to deal with the positive z-score, so use the z to the right of the mean, which has 1 – $\alpha$/2 = 0.975 of the area below it. In Excel use =NORM.INV(0.975,0,1), or on the calculator invNorm(0.975,0,1), which gives z$\alpha$/2 = 1.96.

$\bar{x} \pm z_\frac{\alpha}{2} \frac{\sigma}{\sqrt{n}} \quad \Rightarrow \quad 14.6 \pm 1.96\left(\frac{4}{\sqrt{100}}\right) \quad \Rightarrow \quad 14.6 \pm 0.784 \quad \Rightarrow \quad (13.816, 15.384)$

The point estimate for μ is 14.6 years, and the margin of error is 0.784 years. If we were to repeat this same sampling process, we would expect 95 out of 100 such intervals to contain the true population mean age of all pennies in circulation. A shorthand way to say this is that we are 95% confident that the population mean age of all pennies in circulation is between 13.816 and 15.384 years. The answer expressed as an inequality is 13.816 < µ < 15.384. You can also use interval notation, (13.816, 15.384), which is more common and matches the notation found on most calculators.

TI-84: Press the [STAT] key, arrow over to the [TESTS] menu, arrow down to the [7:ZInterval] option and press the [ENTER] key. Arrow over to the [Stats] menu and press the [ENTER] key. Then type in the population or sample standard deviation, sample mean, sample size and confidence level, arrow down to [Calculate] and press the [ENTER] key. The calculator returns the answer in interval notation.

TI-89: Go to the [Apps] Stat/List Editor, then press [2nd] then F7 [Ints], then select 1: ZInterval. Choose the input method: Data is when you have entered data into a list previously, and Stats is when you are given the mean and standard deviation already. Type in the population standard deviation, sample mean, sample size (or list name (list1) and Freq: 1) and confidence level, and press the [ENTER] key to calculate. The calculator returns the answer in interval notation.
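If a calculator or Excel is not available, the penny-age z-interval can be computed from the formula directly. The following sketch assumes Python with scipy.stats; zinterval is a hypothetical helper, not a library routine.

# Sketch (assumed): z-interval for a mean with sigma known.
from math import sqrt
from scipy.stats import norm

def zinterval(xbar, sigma, n, conf=0.95):
    z = norm.ppf(1 - (1 - conf) / 2)     # critical value z_(alpha/2)
    e = z * sigma / sqrt(n)              # margin of error
    return xbar - e, xbar + e

print(zinterval(14.6, 4, 100, 0.95))     # about (13.816, 15.384), the penny example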
There is always a chance that the confidence interval does not contain the true parameter we are looking for. Inferential statistics does not “prove” that the population parameter is within the boundaries of the confidence interval. If the sample we took happened to contain many outliers, the sample statistic could be far from the true population parameter, and then, when we subtract and add the margin of error to the point estimate, the population parameter may not be within the limits.

Both the sample size and the confidence level affect how wide the interval is. The following discussion demonstrates what happens to the width of the interval as you become more confident. Think about shooting an arrow at a target. Suppose you are really good at this and have a 90% chance of hitting the bull’s-eye. Now the bull’s-eye is very small. Since you hit the bull’s-eye approximately 90% of the time, you probably hit inside the next ring out 95% of the time. You have a better chance of doing this, but the circle is bigger. You probably have a 99% chance of hitting the target, but that is a much bigger circle to hit. As your confidence in hitting the target increases, the circle you hit gets bigger. The same is true for confidence intervals: a higher level of confidence makes a wider interval.

There is a tradeoff between width and confidence level. You can be really confident about your answer, but your answer will not be very precise. On the other hand, you can have a precise answer (small margin of error) but not be very confident about it. When we increase the confidence level, the confidence interval becomes wider in order to be more confident that the population parameter is within the lower and upper boundaries. A wider margin of error means less precision: when one is more confident, one has a harder time pinning down the true parameter within the larger range of values. See Figure 7-3. Figure 7-3

For instance, if we wanted to find the true mean grade for a statistics course using a 99% confidence critical value, we might get a very large margin of error, 75% ± 25%. This would say that we are 99% confident that the average grade for all students is between 50% and 100%. This is of little help, since that is anywhere in the grade range from an F to an A. There are two ways to narrow this margin of error. The best way to reduce the margin of error is to increase the sample size, which decreases the standard deviation of the sampling distribution. When you take a larger sample, you will get a narrower interval. The other way to decrease the margin of error is to decrease your confidence level; when you decrease the confidence level, the critical value will be smaller. With a smaller margin of error, one can more precisely predict the population parameter.

Now look at how the sample size affects the width of the interval. Suppose Figure 7-4 represents confidence intervals calculated with a 95% confidence level. Figure 7-4 A larger sample size from a representative sample makes the standard error smaller and hence the width of the interval narrower. Large samples are closer to the true population, so the point estimate is quite close to the true value.

The following website is an applet where you can simulate confidence intervals with different parameters, sample sizes and confidence levels; take a moment and play around with the applet: http://www.rossmanchance.com/applets/ConfSim.html.
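In the same spirit as the applet, one can simulate repeated sampling and count how often the interval captures the true mean. The sketch below is an assumption of this edition (Python with NumPy and scipy.stats, with an arbitrary σ = 10 chosen only for illustration); it is not taken from the text.

# Simulation sketch (assumed): coverage of a 95% z-interval over many samples.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
mu, sigma, n, conf, reps = 75, 10, 30, 0.95, 10_000
z = norm.ppf(1 - (1 - conf) / 2)

samples = rng.normal(mu, sigma, size=(reps, n))
xbars = samples.mean(axis=1)
e = z * sigma / np.sqrt(n)                       # margin of error
covered = (xbars - e <= mu) & (mu <= xbars + e)  # did the interval capture mu?
print(covered.mean())                            # close to 0.95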
The vertical bar in Figure 7-5 represents the true population mean test score of 75 (which would be unknown in real life). If you were to compute 100 confidence intervals using a 95% confidence level, then approximately 95/100 = 95% of them would contain the true population mean. The figure shows the confidence intervals as horizontal lines. There are 95 confidence intervals that contain the population mean, shown in green. There are 5 confidence intervals that did not capture the population mean within the interval endpoints, shown in red. The probability that any one confidence interval contains the mean is either zero or one. However, if we were to repeat the same sampling process, the proportion of times that the confidence intervals would capture the population parameter is (1 – $\alpha$), where α is the complement of the confidence level.

As an example, if you have a 95% confidence interval of 0.65 < p < 0.73, then you would say, “If we were to repeat this process, then 95% of the time the interval from 0.65 to 0.73 would contain the true population proportion.” This means that if you have 100 intervals, 95 of them will contain the true proportion, and 5% will not. The incorrect interpretation is that there is a 95% probability that the true value of p falls between 0.65 and 0.73. The reason this interpretation is incorrect is that the true value is fixed out there somewhere, and you are trying to capture it with this interval. The confidence level is the chance that your interval captures the true parameter, not the chance that the true value falls in the interval. In addition, a real-world interpretation depends on the situation: it is where you tell people between what values you found the parameter to lie. There is no probability attached to this statement; that probability belongs to the statistical interpretation. Figure 7-5

In Figure 7-6, confidence intervals were simulated using a 90% confidence level and then again using a 99% confidence level. Each confidence level was run 100 times with a sample size of n = 30, then again using a sample size of n = 100, holding all other variables constant. Figure 7-6

Compare columns 1 & 2 with columns 3 & 4 in Figure 7-6. For columns 1 & 2, 90/100 = 90% of the confidence intervals contain the mean. For columns 3 & 4, 99/100 = 99% of the confidence intervals contain the mean. Note that the higher confidence level gives wider intervals for the same sample size. Compare columns 1 & 3 in Figure 7-6 and you can see that the width of the confidence interval is wider for the 99% confidence level than for the 90% confidence level. Holding all other variables constant, the confidence intervals captured the population mean 99% of the time. Then compare columns 2 & 4 to see similar results. The wider confidence intervals are more likely to capture the true population mean; however, you will have less precision in predicting what the true mean is.

State the statistical and real-world interpretations of the following confidence intervals. a) Suppose a 95% confidence interval for the mean age at which a woman got married in 2013 is 26 < μ < 28. b) Suppose a 99% confidence interval for the proportion of Americans who have tried cannabis as of 2019 is 0.55 < p < 0.61.

Solution a) • Statistical Interpretation: We are 95% confident that the interval 26 < μ < 28 contains the population mean age of all women who got married in 2013.
• Real World Interpretation: We are 95% confident that the mean age of women who got married in 2013 is between 26 and 28 years of age.

b) • Statistical Interpretation: We are 99% confident that the interval 0.55 < p < 0.61 contains the population proportion of all Americans who have tried cannabis as of 2019. • Real World Interpretation: We are 99% confident that the proportion of all Americans who have tried cannabis as of 2019 is between 55% and 61%.

“I'm not trying to prove anything, by the way. I'm a scientist and I know what constitutes proof. But the reason I call myself by my childhood name is to remind myself that a scientist must also be absolutely like a child. If he sees a thing, he must say that he sees it, whether it was what he thought he was going to see or not. See first, think later, then test. But always see first. Otherwise you will only see what you were expecting. Most scientists forget that.” (Adams, 2002)

7.06: Sample Size for a Mean

Often we need a specific confidence level, but we also need our margin of error to be within a set range. We can accomplish this by increasing the sample size. However, taking large samples is often difficult or costly. Thus, it is useful to be able to determine the minimum sample size necessary to achieve our confidence interval. The sample size needed to estimate a population mean µ to within a specified margin of error E, with known population standard deviation σ, is given by $n=\left(\frac{z_{\alpha / 2} \cdot \sigma}{E}\right)^{2}$. Always round up to the next whole number. Keep in mind that we rarely know the value of the population standard deviation. We can estimate σ by using a previous year’s standard deviation, a standard deviation from a similar study, a pilot sample, or by dividing the range by 4.

A researcher is interested in estimating the average salary of teachers. She wants to be 95% confident that her estimate is correct. In a previous study, she found the population standard deviation was \$1,175. How large a sample is needed to be accurate to within \$100?

Solution First find z$\alpha$/2 for 95% confidence using Excel or your calculator, so z$\alpha$/2 = 1.96. Most of the time the margin of error E follows the word “within” in the question, so E = 100. The standard deviation is σ = 1175. Substitute each number into the formula: $n=\left(\frac{1.96 \cdot 1175}{100}\right)^{2} = 530.38$. If we round down, we would not get “within” the \$100 margin of error. Always round sample sizes up to the next whole number so that your margin of error will be within the specified amount. The larger the sample size, the narrower the confidence interval. The answer is n = 531.
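Both sample-size formulas (for a mean here, and for a proportion earlier in the chapter) are easy to automate. The sketch below assumes Python; n_for_mean and n_for_prop are hypothetical helper names, and the rounding up required by the text is done with math.ceil.

# Sketch (assumed): minimum sample sizes, always rounding up.
from math import ceil
from scipy.stats import norm

def n_for_mean(sigma, E, conf=0.95):
    z = norm.ppf(1 - (1 - conf) / 2)
    return ceil((z * sigma / E) ** 2)

def n_for_prop(E, conf=0.95, p_star=0.5):
    z = norm.ppf(1 - (1 - conf) / 2)
    return ceil(p_star * (1 - p_star) * (z / E) ** 2)

print(n_for_mean(1175, 100, 0.95))        # 531, the teacher salary example
print(n_for_prop(0.03, 0.95, 0.73))       # 842, the prekindergarten example
print(n_for_prop(0.03, 0.95))             # 1068, using the planning value p* = 0.5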
7.7.1 Student’s T-Distribution

A t-distribution is another symmetric distribution for a continuous random variable. William Gosset was a statistician employed at Guinness who used statistics to find the best yield of barley for their beer. Guinness prohibited its employees from publishing papers, so Gosset published under the name “Student.” Gosset’s distribution is called the Student’s t-distribution.

Properties of the t-distribution density curve:
1. Symmetric, unimodal (one mode), and bell-shaped.
2. Centered at the mean μ = median = mode = 0.
3. The spread of a t-distribution is determined by the degrees of freedom, which are determined by the sample size.
4. As the degrees of freedom increase, the t-distribution approaches the standard normal curve.
5. The total area under the curve is equal to 1, or 100%.

Figure 7-7 shows examples of three different t-distributions with degrees of freedom of 1, 5 and 30. Note that as the degrees of freedom increase, the distribution has a smaller standard deviation and gets closer in shape to the normal distribution. Figure 7-7

Find the t critical value that has 5% of the area in the upper tail for n = 13.

Solution Use a t-distribution with degrees of freedom df = n – 1 = 13 – 1 = 12. Draw and shade the upper tail area as in Figure 7-8. Use the DISTR menu invT option. Note that if you have an older TI-84 or a TI-83 calculator, you need to have the program INVT installed. For this function, you always use the area to the left of the point. If you want 5% in the upper tail, then there is 95% in the bottom tail area. tα = invT(area below t-score, df) = invT(0.95,12) = 1.782. You can download the INVT program to your calculator from http://MostlyHarmlessStatistics.com or use Excel =T.INV(0.95,12) = 1.7823.

Compute the probability of getting a t-score larger than 1.8399 with a sample size of 13.

Solution To find P(t > 1.8399) on the TI calculator, go to DISTR and use tcdf(lower,upper,df). For this example, we would have tcdf(1.8399,∞,12). In Excel use =1-T.DIST(1.8399,12,TRUE) = 0.0453. So P(t > 1.8399) = 0.0453. Figure 7-9

7.7.2 T-Confidence Interval

Note that we rarely know the population standard deviation, so in most cases we need to use the sample standard deviation as an estimate of the population standard deviation. If we sample from a normally distributed population with an unknown population standard deviation, then the standardized sample mean $\frac{\bar{x}-\mu}{s / \sqrt{n}}$ follows a t-distribution. Figure 7-10

A 100(1 – $\alpha$)% Confidence Interval for a Population Mean μ (σ unknown): Choose a simple random sample of size n from a population having unknown mean μ. The 100(1 – $\alpha$)% confidence interval estimate for μ is given by $\bar{x} \pm t_{\alpha / 2, n-1}\left(\frac{s}{\sqrt{n}}\right)$. The df = degrees of freedom* are n – 1. The degrees of freedom are the number of values that are free to vary after a sample statistic has been computed. For example, if you know the mean was 50 for a sample size of 4, you could pick any 3 numbers you like, but the 4th value would have to be fixed to make the mean come out to 50. For this class, we just need to know that the degrees of freedom are based on the sample size. The sample mean $\bar{x}$ is the point estimate for μ, and the margin of error is $t_{\alpha / 2}\left(\frac{s}{\sqrt{n}}\right)$, where t$\alpha$/2 is the positive critical value on the t-distribution curve with df = n – 1 and area 1 – $\alpha$ between the critical values –t$\alpha$/2 and +t$\alpha$/2, as shown in Figure 7-11. Figure 7-11
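The invT and tcdf calculations shown above have direct analogues in most statistical software. The lines below are a sketch assuming Python’s scipy.stats; they are not part of the original text.

# Sketch (assumed): t critical value and upper-tail probability for df = 12.
from scipy.stats import t

df = 12                        # n = 13, so df = n - 1 = 12
print(t.ppf(0.95, df))         # about 1.782, leaving 5% in the upper tail (like invT)
print(t.sf(1.8399, df))        # about 0.0453, P(t > 1.8399) (like tcdf)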
Where t$\alpha$/2 is the positive critical value on the t-distribution curve with df = n – 1 and area 1 – $\alpha$ between the critical values –t$\alpha$/2 and +t$\alpha$/2, as shown in Figure 7-11. Figure 7-11

Before we compute a t-interval we will practice getting t critical values using Excel and the TI calculator’s built-in t-distribution.

Compute the critical values –t$\alpha$/2 and +t$\alpha$/2 for a 90% confidence interval with a sample size of 10.

Solution
Draw a t-distribution with df = n – 1 = 9, see Figure 7-12. In Excel use =T.INV(lower tail area, df) =T.INV(0.95,9) or in the TI calculator use invT(lower tail area, df) = invT(0.95,9). The critical values are t = ±1.833. Figure 7-12

We can use Excel to find the margin of error when raw data is given in a problem. The following example is first done longhand and then done using Excel’s Data Analysis Tool and the T-Interval shortcut key on the TI calculator.

The yearly salaries for mathematics assistant professors are normally distributed. A random sample of 8 math assistant professors’ salaries is listed below in thousands of dollars. Estimate the population mean salary with a 99% confidence interval. 66.0 75.8 70.9 73.9 63.4 68.5 73.3 65.9

Solution
First find the t critical value using df = n – 1 = 7 and 99% confidence, t$\alpha$/2 = 3.4995. Then use technology to find the sample mean and sample standard deviation and substitute the numbers into the formula.

$\bar{x} \pm t_{\alpha / 2, n-1}\left(\frac{s}{\sqrt{n}}\right) \Rightarrow 69.7125 \pm 3.4995\left(\frac{4.4483}{\sqrt{8}}\right) \Rightarrow 69.7125 \pm 5.5037 \Rightarrow(64.2088,75.2162)$

The answer can be given as an inequality 64.2088 < µ < 75.2162 or in interval notation (64.2088, 75.2162). We are 99% confident that the interval between 64.2 and 75.2 contains the true population mean salary for all mathematics assistant professors. That is, we are 99% confident that the mean salary for mathematics assistant professors is between $64,208.80 and $75,216.20.

Assumption: The population we are sampling from must be normal* or approximately normal, and the population standard deviation σ is unknown. *This assumption must be addressed before using statistical inference for sample sizes of under 30.

TI-84: Press the [STAT] key, arrow over to the [TESTS] menu, arrow down to the [8:TInterval] option and press the [ENTER] key. Arrow over to the [Stats] menu and press the [ENTER] key. Then type in the mean, sample standard deviation, sample size and confidence level, arrow down to [Calculate] and press the [ENTER] key. The calculator returns the answer in interval notation. Be careful: if you accidentally use the [7:ZInterval] option you will get the wrong answer. Alternatively (if you have raw data in list one), arrow over to the [Data] menu and press the [ENTER] key. Then type in the list name, L1, leave Freq:1 alone, enter the confidence level, arrow down to [Calculate] and press the [ENTER] key.

TI-89: Go to the [Apps] Stat/List Editor, then press [2nd] then F7 [Ints], then select 2:TInterval. Choose the input method: Data is when you have entered data into a list previously, Stats is when you are given the mean and standard deviation already. Type in the mean, standard deviation, sample size (or list name (list1), and Freq: 1) and confidence level, and press the [ENTER] key. The calculator returns the answer in interval notation. Be careful: if you accidentally use the [1:ZInterval] option you will get the wrong answer.

Excel Directions: Type the data into Excel.
Select the Data Analysis Tool under the Data tab. Select Descriptive Statistics. Select OK. Use your mouse and click into the Input Range box, then select the cells containing the data. If you highlighted the label then check the box next to Labels in first row. In this case no label was typed in so the box is left blank. (Be very careful with this step. If you check the box and do not have a label then the first data point will become the label and all your descriptive statistics will be incorrect.) Check the boxes next to Summary statistics and Confidence Level for Mean. Then change the confidence level to fit the question. Select OK.

The table output does not show the confidence interval itself. However, the output does give you the sample mean and the margin of error. The margin of error is the last entry, labeled Confidence Level. To find the confidence interval, subtract and add the margin of error to the sample mean to get the lower and upper limits of the interval in two separate cells. The following screenshot shows the cell references to find the lower limit as =D3-D16 and the upper limit as =D3+D16. Make sure to put your answer in interval notation. The answer is given as an inequality 64.2088 < µ < 75.2162 or in interval notation (64.2088, 75.2162). We are 99% confident that the interval between 64.2 and 75.2 contains the true population mean salary for all mathematics assistant professors.

Summary
A t-confidence interval is used to estimate an unknown value of the population mean for a single sample. We need to make sure that the population is normally distributed or the sample size is 30 or larger. Once this is verified, we use the interval $\bar{x}-t_{\alpha / 2, n-1}\left(\frac{s}{\sqrt{n}}\right)<\mu<\bar{x}+t_{\alpha / 2, n-1}\left(\frac{s}{\sqrt{n}}\right)$ to estimate the true population mean. Most of the time we will be using the t-interval, not the z-interval, when estimating a mean since we rarely know the population standard deviation. It is important to interpret the confidence interval correctly. A general interpretation, where you would change what is in the parentheses to fit the context of the problem, is: “One can be 100(1 – $\alpha$)% confident that the interval between (lower boundary) and (upper boundary) contains the population mean of (random variable in words using context and units from problem).”
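The same critical value, margin of error, and interval can be reproduced outside of Excel and the TI calculators. Below is a minimal Python sketch, assuming the numpy and scipy packages are available (the choice of Python and these packages is ours for illustration, not part of the text); it uses the eight salaries from the example above and also repeats the earlier critical-value and tail-probability look-ups.

import numpy as np
from scipy import stats

# t critical value and upper-tail probability from the earlier examples
print(stats.t.ppf(0.95, 12))        # about 1.7823, matches invT(0.95,12) and =T.INV(0.95,12)
print(stats.t.sf(1.8399, 12))       # about 0.0453, matches P(t > 1.8399) with df = 12

# 99% t-confidence interval for the professor salary data (in thousands of dollars)
salaries = np.array([66.0, 75.8, 70.9, 73.9, 63.4, 68.5, 73.3, 65.9])
n = len(salaries)
xbar = salaries.mean()                          # sample mean, about 69.7125
s = salaries.std(ddof=1)                        # sample standard deviation, about 4.4483
t_crit = stats.t.ppf(0.995, n - 1)              # critical value for 99% confidence, about 3.4995
moe = t_crit * s / np.sqrt(n)                   # margin of error, about 5.5037
print(xbar - moe, xbar + moe)                   # about (64.2088, 75.2162)

# The same interval in one call
print(stats.t.interval(0.99, n - 1, loc=xbar, scale=stats.sem(salaries)))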
Chapter 7 Exercises

1. Which confidence level would give the narrowest margin of error? a) 80% b) 90% c) 95% d) 99%

2. Suppose you compute a confidence interval with a sample size of 25. What will happen to the width of the confidence interval if the sample size increases to 50, assuming everything else stays the same? Choose the correct answer below. a) Gets smaller b) Stays the same c) Gets larger

3. For a confidence level of 90% with a sample size of 35, find the critical z values.

4. For a confidence level of 99% with a sample size of 18, find the critical z values.

5. A researcher would like to estimate the proportion of all children that have been diagnosed with autism spectrum disorder (ASD) in their county. They are using a 95% confidence level and the Centers for Disease Control and Prevention (CDC) 2018 national estimate that 1 in 68 ≈ 0.0147 children are diagnosed with ASD. What sample size should the researcher use so that the margin of error is within 2%?

6. A political candidate has asked you to conduct a poll to determine what percentage of people support her. If the candidate only wants a 9% margin of error at a 99% confidence level, what size of sample is needed?

7. A pilot study found that 72% of adult Americans would like an Internet connection in their car. a) Use the given preliminary estimate to determine the sample size required to estimate the proportion of adult Americans who would like an Internet connection in their car to within 0.02 with 95% confidence. b) Use the given preliminary estimate to determine the sample size required to estimate the proportion of adult Americans who would like an Internet connection in their car to within 0.02 with 99% confidence. c) If the information in the pilot study was not given, determine the sample size required to estimate the proportion of adult Americans who would like an Internet connection in their car to within 0.02 with 99% confidence.

8. Out of a sample of 200 adults ages 18 to 30, 54 still lived with their parents. Based on this, construct a 95% confidence interval for the true population proportion of adults ages 18 to 30 that still live with their parents.

9. In a random sample of 200 people, 135 said that they watched educational TV. Find and interpret the 95% confidence interval of the true proportion of people who watched educational TV.

10. In a certain state, a survey of 600 workers showed that 35% belonged to a union. Find and interpret the 95% confidence interval of the true proportion of workers who belong to a union.

11. A teacher wanted to estimate the proportion of students who take notes in her class. She used data from a random sample of size 82 and found that 50 of them took notes. The 99% confidence interval for the proportion of students who take notes is _______ < p < _________.

12. A random sample of 150 people was selected and 12% of them were left-handed. Find and interpret the 90% confidence interval for the proportion of left-handed people.

13. A survey asked people if they were aware that maintaining a healthy weight could reduce the risk of stroke. A 95% confidence interval was found using the survey results to be (0.54, 0.62). Which of the following is the correct interpretation of this interval? a) We are 95% confident that the interval 0.54 < p < 0.62 contains the population proportion of people who are aware that maintaining a healthy weight could reduce the risk of stroke.
b) There is a 95% chance that the sample proportion of people who are aware that maintaining a healthy weight could reduce the risk of stroke is between 0.54 < p < 0.62. c) There is a 95% chance of having a stroke if you do not maintain a healthy weight. d) There is a 95% chance that the proportion of people who will have a stroke is between 54% and 62%.

14. Gallup tracks daily the percentage of Americans who approve or disapprove of the job Donald Trump is doing as president. Daily results are based on telephone interviews with approximately 1,500 national adults. The margin of error is ±3 percentage points. On December 15, 2017, the Gallup poll, using a 95% confidence level, showed that 34% approved of the job Donald Trump was doing. Which of the following is the correct statistical interpretation of the confidence interval? a) As of December 15, 2017, 34% of American adults approve of the job Donald Trump is doing as president. b) We are 95% confident that the interval 0.31 < p < 0.37 contains the proportion of American adults who approve of the job Donald Trump is doing as president as of December 15, 2017. c) As of December 15, 2017, 95% of American adults approve of the job Donald Trump is doing as president. d) We are 95% confident that the proportion of adult Americans who approve of the job Donald Trump is doing as president is 0.34 as of December 15, 2017.

15. A laboratory in Florida is interested in finding the mean chloride level for a healthy resident in the state. A random sample of 25 healthy residents has a mean chloride level of 80 mEq/L. If it is known that the chloride levels in healthy individuals residing in Florida are normally distributed with a population standard deviation of 27 mEq/L, find and interpret the 95% confidence interval for the true mean chloride level of all healthy Florida residents.

16. Out of 500 people sampled in early October 2020, 315 preferred Biden. Based on this, compute the 95% confidence interval for the proportion of the voting population that preferred Biden.

17. From previous studies, the age when smokers first start is normally distributed with a mean of 13 years old and a population standard deviation of 2.1 years old. A survey of smokers of this generation was done to estimate if the mean age has changed. The sample of 33 smokers found that their mean starting age was 13.7 years old. Find the 99% confidence interval of the mean.

18. The scores on an examination in biology are approximately normally distributed with a known standard deviation of 20 points. The following is a random sample of scores from this year’s examination: 403, 418, 460, 482, 511, 543, 576, 421. Find and interpret the 99% confidence interval for the population mean scores.

19. The undergraduate grade point average (GPA) for students admitted to the top graduate business schools was 3.53. Assume this estimate was based on a sample of 8 students admitted to the top schools. Assume that the population is normally distributed with a standard deviation of 0.18. Find and interpret the 99% confidence interval estimate of the mean undergraduate GPA for all students admitted to the top graduate business schools.

20. The Food & Drug Administration (FDA) regulates that fresh albacore tuna fish that is consumed is allowed to contain 0.82 ppm of mercury or less. A laboratory is estimating the amount of mercury in tuna fish for a new company and needs to have a margin of error within 0.03 ppm of mercury with 95% confidence.
Assume the population standard deviation is 0.138 ppm of mercury. What sample size is needed?

21. You want to obtain a sample to estimate the population mean age of the incoming fall term transfer students. Based on previous evidence, you believe the population standard deviation is approximately 5.3. You would like to be 90% confident that your estimate is within 1.9 of the true population mean. How large of a sample size is required?

22. SAT scores are distributed with a mean of 1,500 and a standard deviation of 300. You are interested in estimating the average SAT score of first year students at your college. If you would like to limit the margin of error of your 95% confidence interval to 25 points, how many students should you sample?

23. An engineer wishes to determine the width of a particular electronic component. If she knows that the standard deviation is 1.2 mm, how many of these components should she consider to be 99% sure of knowing the mean will be within 0.5 mm?

24. For a confidence level of 90% with a sample size of 30, find the critical t values.

25. For a confidence level of 99% with a sample size of 24, find the critical t values.

26. For a confidence level of 95% with a sample size of 40, find the critical t values.

27. The amount of money in the money market accounts of 26 customers is found to be approximately normally distributed with a mean of $18,240 and a sample standard deviation of $1,100. Find and interpret the 95% confidence interval for the mean amount of money in the money market accounts at this bank.

28. A professor wants to estimate how long students stay connected during two-hour online lectures. From a random sample of 25 students, the mean stay time was 93 minutes with a standard deviation of 10 minutes. Assuming the population has a normal distribution, compute a 95% confidence interval estimate for the population mean.

29. A random sample of stock prices per share (in dollars) is shown. Find and interpret the 90% confidence interval for the mean stock price. Assume the population of stock prices is normally distributed. 26.60 75.37 3.81 28.37 40.25 13.88 53.80 28.25 10.87 12.25

30. In a certain city, a random sample of executives have the following monthly personal incomes (in thousands): 35, 43, 29, 55, 63, 72, 28, 33, 36, 41, 42, 57, 38, 30. Assume the population of incomes is normally distributed. Find and interpret the 95% confidence interval for the mean income.

31. A tire manufacturer wants to estimate the average number of miles that may be driven on a tire of a certain type before the tire wears out. Assume the population is normally distributed. A random sample of tires is chosen and driven until they wear out, and the number of thousands of miles is recorded. Find and interpret the 99% confidence interval for the mean using the sample data 32, 33, 28, 37, 29, 30, 22, 35, 23, 28, 30, 36.

32. Recorded here are the germination times (in days) for ten randomly chosen seeds of a new type of bean: 18, 12, 20, 17, 14, 15, 13, 11, 21, 17. Assume that the population germination time is normally distributed. Find and interpret the 99% confidence interval for the mean germination time.

33. A sample of lengths in inches for newborns is given below. Assume that lengths are normally distributed. Find the 95% confidence interval of the mean length. Length: 20.8 16.9 21.9 18 15 20.8 15.2 22.4 19.4 20.5

34. Suppose you are a researcher in a hospital. You are experimenting with a new tranquilizer. You collect data from a random sample of 10 patients.
The period of effectiveness of the tranquilizer for each patient (in hours) is as follows: Hours: 2 2.9 2.6 2.9 3 3 2 2.1 2.9 2.1
a) What is a point estimate for the population mean length of time?
b) What must be true in order to construct a confidence interval for the population mean length of time in this situation? Choose the correct answer below. i. The sample size must be greater than 30. ii. The population must be normally distributed. iii. The population standard deviation must be known. iv. The population mean must be known.
c) Construct a 99% confidence interval for the population mean length of time.
d) What does it mean to be "99% confident" in this problem? Choose the correct answer below. i. 99% of all confidence intervals found using this same sampling technique will contain the population mean time. ii. There is a 99% chance that the confidence interval contains the sample mean time. iii. The confidence interval contains 99% of all sample times. iv. 99% of all times will fall within this interval.
e) Suppose that the company releases a statement that the mean time for all patients is 2 hours. Is this possible? Is it likely?

35. Which of the following would result in the widest confidence interval? a) A sample size of 100 with 99% confidence. b) A sample size of 100 with 95% confidence. c) A sample size of 30 with 95% confidence. d) A sample size of 30 with 99% confidence.

36. The world’s smallest mammal is the bumblebee bat (also known as Kitti’s hog-nosed bat or Craseonycteris thonglongyai). Such bats are roughly the size of a large bumblebee. A sample of bats, weighed in grams, is given below. Assume that bat weights are normally distributed. Find the 99% confidence interval of the mean. Weight: 2.11 1.53 2.27 1.98 2.27 2.11 1.75 2.06 1.92 2.01

37. The total of individual weights of garbage discarded by 20 households in one week is normally distributed with a mean of 30.2 lbs. and a sample standard deviation of 8.9 lbs. Find the 90% confidence interval of the mean.

38. A student was asked to find a 90% confidence interval for widget width using data from a random sample of size n = 29. Which of the following is a correct interpretation of the interval 14.3 < μ < 26.8? Assume the population is normally distributed. a) There is a 90% chance that the sample mean widget width will be between 14.3 and 26.8. b) There is a 90% chance that the widget width is between 14.3 and 26.8. c) With 90% confidence, the width of a widget will be between 14.3 and 26.8. d) With 90% confidence, the mean width of all widgets is between 14.3 and 26.8. e) The sample mean width of all widgets is between 14.3 and 26.8, 90% of the time.

39. A researcher finds a 95% confidence interval for the average commute time in minutes using public transit is (15.75, 28.25). Which of the following is the correct interpretation of this interval? a) We are 95% confident that all commute time in minutes for the population using public transit is between 15.75 and 28.25 minutes. b) There is a 95% chance commute time in minutes using public transit is between 15.75 and 28.25 minutes. c) We are 95% confident that the interval 15.75 < μ < 28.25 contains the sample mean commute time in minutes using public transportation.
d) We are 95% confident that the interval 15.75 < μ < 28.25 contains the population mean commute time in minutes using public transportation.

7.09: Chapter 7 Formulas

Confidence Interval for One Proportion: $\hat{p} \pm z_{\alpha / 2} \sqrt{\left(\frac{\hat{p} \hat{q}}{n}\right)}$, where $\hat{p}=\frac{x}{n}$ and $\hat{q}=1-\hat{p}$. TI-84: 1-PropZInt

Sample Size for Proportion: $n=p^{*} \cdot q^{*}\left(\frac{z_{\alpha / 2}}{E}\right)^{2}$. Always round up to a whole number. If p is not given use p* = 0.5. E = Margin of Error.

Confidence Interval for One Mean: Use the z-interval when σ is given. Use the t-interval when s is given. If n < 30, the population needs to be normal.

Z-Confidence Interval: $\bar{x} \pm z_{\alpha / 2}\left(\frac{\sigma}{\sqrt{n}}\right)$. TI-84: ZInterval

Z-Critical Values: Excel: z$\alpha$/2 =NORM.INV(1–area/2,0,1); TI-84: z$\alpha$/2 = invNorm(1–area/2,0,1)

t-Critical Values: Excel: t$\alpha$/2 =T.INV(1–area/2,df); TI-84: t$\alpha$/2 = invT(1–area/2,df)

t-Confidence Interval: $\bar{x} \pm t_{\alpha / 2}\left(\frac{s}{\sqrt{n}}\right)$, df = n – 1. TI-84: TInterval

Sample Size for Mean: $n=\left(\frac{z_{\alpha / 2} \cdot \sigma}{E}\right)^{2}$. Always round up to a whole number. E = Margin of Error.
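The two sample-size formulas above are simple enough to script. Here is a minimal Python sketch (our illustration only; it assumes the scipy package is available for the z critical value and rounds up with math.ceil, mirroring the "always round up" rule):

import math
from scipy import stats

def z_crit(confidence):
    # two-sided critical value z_(alpha/2) for a given confidence level
    alpha = 1 - confidence
    return stats.norm.ppf(1 - alpha / 2)

def sample_size_proportion(E, confidence, p_star=0.5):
    # n = p* q* (z_(alpha/2) / E)^2, rounded up; use p* = 0.5 when no estimate is given
    z = z_crit(confidence)
    return math.ceil(p_star * (1 - p_star) * (z / E) ** 2)

def sample_size_mean(E, confidence, sigma):
    # n = (z_(alpha/2) * sigma / E)^2, rounded up
    z = z_crit(confidence)
    return math.ceil((z * sigma / E) ** 2)

# Example: estimating a proportion to within 0.02 at 95% confidence with no prior estimate
print(sample_size_proportion(0.02, 0.95))   # 2401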
A statistic is a characteristic or measure from a sample. A parameter is a characteristic or measure from a population. We use statistics to generalize about parameters; this is known as estimation. Every time we take a sample, we expect the sample statistic to be close to the parameter, but not necessarily exactly equal to the unknown population parameter. How close depends on how large a sample we took, who was sampled, how they were sampled and other factors.

Hypothesis testing is a scientific method used to evaluate claims about population parameters. A statistical hypothesis is an educated conjecture about a population parameter. This conjecture may or may not be true. We will take sample data and infer from the sample whether there is evidence to support our claim about the unknown population parameter.

The null hypothesis (H0, pronounced “H-naught” or “H-zero”) is a statistical hypothesis that states that there is no difference between a parameter and a specific value, or that there is no difference between two parameters. The null hypothesis is assumed true until there is sufficient evidence otherwise.

The alternative hypothesis (H1 or Ha, pronounced “H-one” or “H-ā”) is a statistical hypothesis that states that there is a difference between a parameter and a specific value, or that there is a difference between two parameters. H1 is always the complement of H0.

The researcher decides how much evidence is required to reject the null hypothesis by setting the level of significance, also called the significance level. We use the Greek letter α, pronounced “alpha,” to represent the significance level. The level of significance is the probability that the null hypothesis is rejected when it is actually true. Note: as in the previous chapter, 1 – $\alpha$ is the confidence level. When doing your own research, you should set up your hypotheses and choose the significance level before analyzing the sample data.

When reading a word problem, your first step is to identify the parameter(s), for example μ, you are testing and which direction (left, right, or two-tail) test you are being asked to perform. For this course, the homework problems will state the researcher’s claim; usually this is the alternative hypothesis. The null hypothesis is always set up as a parameter equal to some value (called the test value) or equal to another parameter. The null hypothesis is assumed true unless there is strong evidence from the sample to suggest otherwise. This is similar to our judicial system, in which a person is presumed innocent until the prosecutor shows enough evidence that they are not innocent.

For example, an investment company wants to build a new food cart. They know from experience that food carts are successful if they have on average more than 100 people a day walk by the location. They have a potential site to build on, but before they begin, they want to see if they have enough foot traffic. They observe how many people walk by the site every day over a month. The investors want to be very careful about setting up in a bad location where the food cart will fail; they consider that worse than the missed opportunity of not building in a prime location.

We have two hypotheses. For an average of more than 100 people, we would write this in symbols as μ > 100. This claim needs to go into the alternative hypothesis since there is no equality, just strictly greater than 100. The complement of greater than is μ ≤ 100. This has a form of equality (≤) so needs to go in the null hypothesis.
We then would set up the hypotheses as:
• H0: μ ≤ 100 (Do not build)
• H1: μ > 100 (Build).
When performing the hypothesis test, the test statistic assumes that the parameter in the null hypothesis is equal to some value. This still implies that the parameter could be any value less than or equal to 100, but our hypothesis test should be written as:
• H0: μ = 100
• H1: μ > 100
Either notation is fine, but most textbooks will always have the = sign in the null hypothesis. The null hypothesis is based on a historical value, a claim or a product specification.

Signs are Important
When there is a greater than sign (>) in the alternative hypothesis, we call this a right-tailed test. If we had a less than sign (<) in the alternative hypothesis, then we would have a left-tailed test. If there were a not equal sign (≠) in the alternative hypothesis, we would have a two-tailed test. The tails determine which side of the sampling distribution the critical region falls on. Note that you should never have an =, ≤ or ≥ sign appear in the alternative hypothesis.

There are three ways to set up the hypotheses for a population mean μ:

Two-tailed test Right-tailed test Left-tailed test
$\begin{array}{lll} \mathrm{H}_{0}: \mu=\mu_{0} & \mathrm{H}_{0}: \mu=\mu_{0} & \mathrm{H}_{0}: \mu=\mu_{0} \\ \mathrm{H}_{1}: \mu \neq \mu_{0} & \mathrm{H}_{1}: \mu>\mu_{0} & \mathrm{H}_{1}: \mu<\mu_{0} \end{array}$

or

Two-tailed test Right-tailed test Left-tailed test
$\begin{array}{lll} \mathrm{H}_{0}: \mu=\mu_{0} & \mathrm{H}_{0}: \mu \leq \mu_{0} & \mathrm{H}_{0}: \mu \geq \mu_{0} \\ \mathrm{H}_{1}: \mu \neq \mu_{0} & \mathrm{H}_{1}: \mu>\mu_{0} & \mathrm{H}_{1}: \mu<\mu_{0} \end{array}$

where μ0 is a placeholder for the numeric test value.
• The null hypothesis of a two-tailed test states that the mean μ is equal to some value μ0.
• The null hypothesis of a right-tailed test implies that the mean μ is less than or equal to some value μ0.
• The null hypothesis of a left-tailed test implies that the mean μ is greater than or equal to some value μ0.

Look for key phrases in the research question to help you set up the hypotheses. Make sure that the =, ≤ and ≥ signs always go in the null hypothesis. The ≠, > and < signs always go in the alternative hypothesis. Look for the phrases in Figure 8-1 to help you decide if you are setting up a two-tailed test (first column), a right-tailed test (second column), or a left-tailed test (third column).

When you read a question, it is essential that you identify the parameter of interest. The parameter determines which distribution to use. Make sure that you can recognize and distinguish which parameter you are making a conjecture about: mean = µ, proportion = p, variance = σ², standard deviation = σ. There will be more parameters in later chapters. Do not use the sample statistics, like $\overline{ x }$ or $\hat{p}$, in the hypotheses. We are not making any inference about the sample statistics. We know the value of the sample statistic. We use the sample statistics to infer whether a change has occurred in the population. For example, if we were making a conjecture about the percent or proportion in a population we would have the hypotheses: H0: p = p0, H1: p ≠ p0.

Setting up the hypotheses correctly is the most important step in hypothesis testing. Here are some example research questions and how to set up the null and alternative hypotheses correctly; in a later section, we will perform the entire hypothesis test. Use Figure 8-1 as a guide in setting up your hypotheses.
The first column shows the hypotheses and how to shade in the distribution for a two-tailed test, with common phrases in the claim. The two-tailed test will always have a not equal ≠ sign in the alternative hypothesis and both tails shaded. The second column is for a right-tailed test. Note that the greater than > sign always will be in the alternative hypothesis and the right tail is shaded. The third column is for a left-tailed test. The left-tailed test will always have a less than < sign in the alternative hypothesis and the left tail shaded in. Hypothesis Testing Common Phrases Figure 8-1

State the hypotheses in both words and symbols for the following claims.
1. The national mean salary for high school teachers is $61,420. A random sample of 30 teachers’ salaries had a mean of $49,850. A new director for a graduate teacher education program (GTEP) believes that the average salary of a teacher in Oregon is significantly less than the national average.
2. A high school principal is looking into assigning parking spaces at their school if the proportion of students who own their own car is more than 30%. The principal does not have the time to ask all 1,200 students at their school, so instead takes a random sample of 70 students and finds that 33% owned their own car.
3. A teacher would like to know if the average age of students taking evening classes is different from the university’s average age of 26. They sample 40 students from a random sample of evening classes and found the average age to be 27.

Solution
a) The key phrase in the claim is “less than.” The less than sign < is only allowed in the alternative hypothesis and we are testing against the national average.
H0: The national mean salary is $61,420. H1: The GTEP director believes the mean salary in Oregon is less than $61,420.
H0: μ = 61420
H1: μ < 61420
b) The key phrase in the claim is “more than.” The greater than sign > is only allowed in the alternative hypothesis. This is about a proportion, not a mean, so use the parameter p.
H0: The principal will not assign parking spaces if 30% or less of students own a car. H1: The principal will assign parking spaces if more than 30% of students own a car.
H0: p = 0.3
H1: p > 0.3
c) The key word in the claim is “different.” The not equal sign ≠ is only allowed in the alternative hypothesis.
H0: The population mean age is 26 years old. H1: The evening students’ mean age is believed to be different from 26 years old.
H0: μ = 26
H1: μ ≠ 26

Once we collect sample data we need to find out how far away the sample statistic can be from the hypothesized parameter to say that a statistically significant change has occurred.

Suppose a manufacturer of a new laptop battery claims the mean life of the battery is 900 days with a standard deviation of 40 days. You are the buyer of this battery and you think this claim is inflated. You would like to test your belief because without a good reason you cannot get out of your contract. You take a random sample of 35 batteries and find that the mean battery life is 890 days. What are the hypotheses for this question?

Solution
You have a guess that the mean life of a battery is less than 900 days. This is opposed to what the manufacturer claims. There really are two hypotheses, which are just guesses here – the one that the manufacturer claims and the one that you believe. For this problem:
H0: μ = 900, since the manufacturer says the mean life of a battery is 900 days.
H1: μ < 900, since you believe the mean life of the battery is less than 900 days.
Note that we do not put the sample mean of 890 in our hypotheses. Is the sample mean of 890 days small enough to believe that you are right and the manufacturer is wrong? We would expect variation in our sample data, and every time we take a new sample, the sample mean will most likely be different. How far away does the sample mean have to be from the product specification to verify that our claim was correct? These questions will be answered once we run the hypothesis test.

If you calculated a sample mean of 435, you would definitely believe the population mean is less than 900. However, even if you had a sample mean of 835 you would probably believe that the true mean was less than 900. What about 875? Or 893? There is some point where you would stop being so sure that the population mean is less than 900. That point separates the values where you are sure or pretty sure that the mean is less than 900 from the area where you are not so sure. How do you find that point where the sample mean is close enough to the hypothesized population mean? How close depends on how much error you want to make. Of course, you do not want to make any errors, but unfortunately, that is unavoidable in statistics since we are not measuring the entire population. You need to figure out how much error you made with your sample.

Take the sample mean, and find the probability of getting another sample mean less than it, assuming for the moment that the manufacturer is right. The idea behind this is that you want to know the chance that you could have come up with your sample mean even if the population mean really is 900 days. You want to find P($\bar{X}$ < 890 | H0 is true) = P($\bar{X}$ < 890 | μ = 900). For short, we will call this probability the p-value or simply p.

To compute this p-value, you need to know how the sample mean is distributed. Since the sample size is at least 30 you know the sample mean is approximately normally distributed, by the Central Limit Theorem (CLT). Remember $\mu_{\bar{x}}=\mu$ and $\sigma_{\bar{x}}=\frac{\sigma}{\sqrt{n}}$. Before calculating the probability, it is useful to see how many standard deviations away from the mean the sample mean is. Using the formula for the z-score from the CLT, $z=\frac{\bar{x}-\mu}{\left(\frac{\sigma}{\sqrt{n}}\right)}$, we can compare this z-score to a z-score based on how sure we want to be of not making a mistake. Using our sample mean we compute the z-score: $z=\frac{\bar{x}-\mu_{0}}{\left(\frac{\sigma}{\sqrt{n}}\right)}=\frac{890-900}{\left(\frac{40}{\sqrt{35}}\right)}=-1.479$.

This sample mean is more than one standard deviation away from the mean. Is that far enough? Look at the probability P($\bar{X}$ < 890 | H0 is true) = P($\bar{X}$ < 890 | μ = 900) = P(Z < –1.479). Using the TI Calculator, normalcdf(-1E99,890,900,40/$\sqrt{35}$) $\approx$ 0.0696. Alternatively, in Excel use =NORM.DIST(890,900,40/SQRT(35),TRUE) $\approx$ 0.0696. Hence the p-value = 0.0696. A picture is always useful: Figure 8-2 shows the population distribution and Figure 8-3 shows the sampling distribution of the mean. Figure 8-2 Figure 8-3

To understand the process of a hypothesis test, you need to first understand what a hypothesis is, which is an educated guess about a parameter. Once you have the alternative hypothesis, you collect data and use the data to decide whether there is enough evidence to show that the alternative hypothesis is true.
However, in hypothesis testing you actually assume something else is true, the null hypothesis, and then you look at your data to see how likely it is to get data like yours under that assumption. If such data would be very unusual, then you might think that your assumption is actually false. If you are able to say this assumption is false, then your alternative hypothesis could be true. You assume the opposite of your alternative hypothesis is true and show that it cannot be true. If this happens, then your alternative hypothesis is probably true. All hypothesis tests go through the same process. Once you have the process down, the concept is much easier. When setting up your hypotheses, make sure the parameter, not the statistic, is used in the hypotheses. The equality always goes in the null hypothesis H0, and the alternative hypothesis Ha will be a left-tailed test with a less than sign <, a two-tailed test with a not equal sign $\neq$, or a right-tailed test with a greater than sign >.

“‘But alright,’ went on the rumblings, ‘so what's the alternative?’ ‘Well,’ said Ford, brightly but slowly, ‘stop doing it of course!’” (Adams, 2002)
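The p-value calculation for the battery example can also be reproduced with a few lines of software. The following is a minimal Python sketch (assuming the scipy package is available; this is our illustration of the normalcdf / NORM.DIST computation above, not part of the original text):

import math
from scipy import stats

mu0, sigma, n, xbar = 900, 40, 35, 890      # null value, population sd, sample size, sample mean
se = sigma / math.sqrt(n)                   # standard error of the sample mean
z = (xbar - mu0) / se                       # test statistic, about -1.479
p_value = stats.norm.cdf(z)                 # lower-tail probability, about 0.0696
print(z, p_value)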
How do you quantify really small? Is 5% or 10% or 15% really small? How do you decide? That depends on your field of study and the importance of the situation. Is this a pilot study? Is someone’s life at risk? Would you lose your job? Most industry standards use 5% as the cutoff point for how small is small enough, but 1%, 5% and 10% are frequently used depending on what the situation calls for.

Now, how small is small enough? To answer that, you really want to know the types of errors you can make in hypothesis testing. The first error is if you say that H0 is false, when in fact it is true. This means you reject H0 when H0 was true. The second error is if you say that H0 is true, when in fact it is false. This means you fail to reject H0 when H0 is false.

Figure 8-4 shows that if we “Reject H0” when H0 is actually true, we are committing a type I error. The probability of committing a type I error is denoted by the Greek letter $\alpha$, pronounced alpha. This can be controlled by the researcher by choosing a specific level of significance $\alpha$. Figure 8-4

Figure 8-4 also shows that if we “Do Not Reject H0” when H0 is actually false, we are committing a type II error. The probability of committing a type II error is denoted by the Greek letter β, pronounced beta. When we increase the sample size this will reduce β. The power of a test is 1 – β.

A jury trial is about to take place to decide if a person is guilty of committing murder. The hypotheses for this situation would be:
• $H_0$: The defendant is innocent
• $H_1$: The defendant is not innocent
The jury has two possible decisions to make, either acquit or convict the person on trial, based on the evidence that is presented. There are two possible ways that the jury could make a mistake. They could convict an innocent person or they could let a guilty person go free. Both are bad news, but if the convicted person is sentenced to death, the justice system could be killing an innocent person. If a murderer is let go without enough evidence to convict them, then they could possibly murder again. In statistics we call these two types of mistakes type I and II errors. Figure 8-5 is a diagram of the four possible jury decisions and the two types of errors. Figure 8-5

Type I Error is rejecting H0 when H0 is true, and Type II Error is failing to reject H0 when H0 is false. Since these are the only two possible errors, one can define the probabilities attached to each error.
$\alpha$ = P(Type I Error) = P(Rejecting H0 | H0 is true)
β = P(Type II Error) = P(Failing to reject H0 | H0 is false)

An investment company wants to build a new food cart. They know from experience that food carts are successful if they have on average more than 100 people a day walk by the location. They have a potential site to build on, but before they begin, they want to see if they have enough foot traffic. They observe how many people walk by the site every day over a month. They will build if there is more than an average of 100 people who walk by the site each day. In simple terms, explain what the type I and II errors would be using context from the problem.

Solution
The hypotheses are: H0: μ = 100 and H1: μ > 100. Sometimes it is helpful to use words next to your hypotheses instead of the formal symbols:
• H0: μ ≤ 100 (Do not build)
• H1: μ > 100 (Build).
A type I error would be to reject the null when in fact it is true. Take your finger and cover up the null hypothesis (our decision is to reject the null); then what is showing?
The alternative hypothesis is the action we take. If we reject H0 then we would build the new food cart. However, H0 was actually true, which means that the mean was less than or equal to 100 people walking by. In simpler terms, our evidence showed that we have enough foot traffic to support the food cart; once we build, though, there are not on average more than 100 people that walk by and the food cart may fail.

A type II error would be to fail to reject the null when in fact the null is false. Evidence shows that we should not build on the site, but this actually would have been a prime location to build on. The missed opportunity of a type II error is not as bad as possibly losing thousands of dollars on a bad investment.

Which error is more severe depends on what side of the desk you are sitting on. For instance, if a hypothesis is about miles per gallon for a new car, the hypotheses may be set up differently depending on whether you are buying the car or selling the car. For this course, the claim will be stated in the problem; always set up the hypotheses to match the stated claim. In general, the research question should be set up as some type of change in the alternative hypothesis.

Controlling for Type I Error
The significance level used by the researcher should be picked prior to collecting and analyzing data. This is called “a priori,” versus picking α after you have done your analysis, which is called “post hoc.” When deciding on what significance level to pick, one needs to look at the severity of the consequences of the type I and type II errors. For example, if a type I error may cause the loss of life or large amounts of money, the researcher would want to set $\alpha$ low.

Controlling for Type II Error
The power of a test is the complement of a type II error: the probability of correctly rejecting a false null hypothesis. You can increase the power of the test, and hence decrease the probability of a type II error, by increasing the sample size. This is similar to confidence intervals, where we can reduce our margin of error when we increase the sample size. In general, we would like to have a high confidence level and a high power for our hypothesis tests. When you increase your confidence level, then in turn the power of the test will decrease. Calculating the probability of a type II error is a little more difficult; it is a conditional probability based on the researcher’s hypotheses and is not discussed in this course.

“‘That's right!’ shouted Vroomfondel, ‘we demand rigidly defined areas of doubt and uncertainty!’” (Adams, 2002)

Visualizing $\alpha$ and β
If $\alpha$ increases, that means the chances of making a type I error will increase. It is more likely that a type I error will occur. It makes sense that you are less likely to make type II errors, only because you will be rejecting H0 more often. You will be failing to reject H0 less, and therefore, the chance of making a type II error will decrease. Thus, as α increases, β will decrease, and vice versa. That makes them seem like complements, but they are not complements.

Consider one more factor – sample size. If you have a larger sample that is representative of the population, then it makes sense that you have more accuracy than with a smaller sample. Think of it this way: which would you trust more, a sample mean of 890 with a sample size of 35 or with a sample size of 350 (assuming a representative sample)? Of course, the 350, because there are more data points and so more accuracy.
If you are more accurate, then there is less chance that you will make any error. By increasing the sample size of a representative sample, you decrease β.
• For a constant sample size, n, if $\alpha$ increases, β decreases.
• For a constant significance level, $\alpha$, if n increases, β decreases.

When the sample size becomes large, point estimates become more precise and any real differences between the mean and the null value become easier to detect and recognize. Even a very small difference would likely be detected if we took a large enough sample size. Sometimes researchers will take such a large sample size that even the slightest difference is detected. While we still say that difference is statistically significant, it might not be practically significant. Statistically significant differences are sometimes so minor that they are not practically relevant. This is especially important to research: if we conduct a study, we want to focus on finding a meaningful result. We do not want to spend lots of money finding results that hold no practical value.

The role of a statistician in conducting a study often includes planning the size of the study. The statistician might first consult experts or scientific literature to learn what would be the smallest meaningful difference from the null value. They also would obtain some reasonable estimate for the standard deviation. With these important pieces of information, they would choose a sufficiently large sample size so that the power for the meaningful difference is perhaps 80% or 90%. While larger sample sizes may still be used, the statistician might advise against using them in some cases, especially in sensitive areas of research.

If we look at the two sampling distributions in Figure 8-6, the one on the left represents the sampling distribution for the true unknown mean. The curve on the right represents the sampling distribution based on the hypotheses the researcher is making. Do you remember the difference between a sampling distribution, the distribution of a sample, and the distribution of the population? Revisit the Central Limit Theorem in Chapter 6 if needed. If we start with $\alpha$ = 0.05, the critical value is represented by the vertical green line at $z_{\alpha / 2}$ = 1.96. Then the blue shaded area to the right of this line represents $\alpha$. The area under the curve to the left of $z_{\alpha / 2}$ = 1.96, based on the researcher’s claim, would represent β. Figure 8-6 Figure 8-7

If we were to change $\alpha$ from 0.05 to 0.01 then we get a critical value of $z_{\alpha / 2}$ = 2.576. Note that when $\alpha$ decreases, β increases, which means your power 1 – β decreases. See Figure 8-7.

This text does not go over how to calculate β. You will need to be able to write out a sentence interpreting either the type I or II errors given a set of hypotheses. You also need to know the relationship between $\alpha$, β, confidence level, and power.

Hypothesis tests are not flawless, since we can make a wrong decision in statistical hypothesis tests based on the data. For example, in the court system, innocent people are sometimes wrongly convicted and the guilty sometimes walk free, and diagnostic tests can give false negatives or false positives. However, the difference is that in statistical hypothesis tests, we have the tools necessary to quantify how often we make such errors. A type I Error is rejecting the null hypothesis when H0 is actually true.
A type II Error is failing to reject the null hypothesis when the alternative is actually true (H0 is false). We use the symbols $\alpha$ = P(Type I Error) and β = P(Type II Error).

The critical value is a cutoff point on the horizontal axis of the sampling distribution to which you compare your test statistic to see if you should reject the null hypothesis. For a left-tailed test the critical value will always be on the left side of the sampling distribution, for a right-tailed test it will always be on the right side, and for a two-tailed test there will be one in each tail. Use technology to find the critical values. Most of the time in this course the shortcut menus that we use will give you the critical values as part of the output.

8.2.1 Finding Critical Values
A researcher decides they want to have a 5% chance of making a type I error, so they set α = 0.05. What z-score would represent that 5% area? It depends on whether the hypotheses form a left-tailed, two-tailed or right-tailed test. This z-score is called a critical value. Figure 8-8 shows examples of critical values for the three possible sets of hypotheses. Figure 8-8

Two-tailed Test
If we are doing a two-tailed test then the $\alpha$ = 5% area gets divided into both tails. We denote these critical values $z_{\alpha / 2}$ and $z_{1-\alpha / 2}$. When the sample data gives a z-score (test statistic) that is either less than or equal to $z_{\alpha / 2}$ or greater than or equal to $z_{1-\alpha / 2}$, then we would reject H0. The area to the left of the critical value $z_{\alpha / 2}$ and to the right of the critical value $z_{1-\alpha / 2}$ is called the critical or rejection region. See Figure 8-9. Figure 8-9

When $\alpha$ = 0.05 the critical values $z_{\alpha / 2}$ and $z_{1-\alpha / 2}$ are found using the following technology.
Excel: $z_{\alpha / 2}$ =NORM.S.INV(0.025) = –1.96 and $z_{1-\alpha / 2}$ =NORM.S.INV(0.975) = 1.96
TI-Calculator: $z_{\alpha / 2}$ = invNorm(0.025,0,1) = –1.96 and $z_{1-\alpha / 2}$ = invNorm(0.975,0,1) = 1.96
Since the normal distribution is symmetric, you only need to find one side’s z-score and we usually represent the critical values as ±$z_{\alpha / 2}$. Most of the time we will be finding a probability (p-value) instead of the critical values. The p-value and critical values are related and tell the same information, so it is important to know what a critical value represents.

Right-tailed Test
If we are doing a right-tailed test then the $\alpha$ = 5% area goes into the right tail. We denote this critical value $z_{1-\alpha}$. When the sample data gives a z-score greater than $z_{1-\alpha}$ then we would reject H0; that is, reject H0 if the test statistic is ≥ $z_{1-\alpha}$. The area to the right of the critical value $z_{1-\alpha}$ is called the critical region. See Figure 8-10. Figure 8-10

When $\alpha$ = 0.05 the critical value $z_{1-\alpha}$ is found using the following technology.
Excel: $z_{1-\alpha}$ =NORM.S.INV(0.95) = 1.645
TI-Calculator: $z_{1-\alpha}$ = invNorm(0.95,0,1) = 1.645

Left-tailed Test
If we are doing a left-tailed test then the $\alpha$ = 5% area goes into the left tail. If the sampling distribution is a normal distribution then we can use the inverse normal function in Excel or the calculator to find the corresponding z-score. We denote this critical value $z_{\alpha}$. When the sample data gives a z-score less than $z_{\alpha}$ then we would reject H0; that is, reject H0 if the test statistic is ≤ $z_{\alpha}$.
The area to the left of the critical value $z_{\alpha}$ is called the critical region. See Figure 8-11. Figure 8-11
When $\alpha$ = 0.05 the critical value $z_{\alpha}$ is found using the following technology.
Excel: $z_{\alpha}$ =NORM.S.INV(0.05) = –1.645
TI-Calculator: $z_{\alpha}$ = invNorm(0.05,0,1) = –1.645

The Claim and Summary
The wording of the summary statement changes depending on which hypothesis the researcher claims to be true. We really should always be setting up the claim in the alternative hypothesis, since most of the time we are collecting evidence to show that a change has occurred, but occasionally a textbook will have the claim in the null hypothesis. Do not use the phrase “accept H0,” since this implies that H0 is true; a lack of evidence against H0 is not evidence that H0 is correct. There are only two possible correct answers for the decision step.
i. Reject H0
ii. Fail to reject H0

Caution! If we fail to reject the null, this does not mean that there was no change; we just do not have any evidence that change has occurred. The absence of evidence is not evidence of absence. On the other hand, we need to be careful when we reject the null hypothesis: we have not proved that there is change. When we reject the null hypothesis, there is only evidence that a change has occurred. Our evidence could have been false and led to an incorrect decision. If we use the phrase “accept H0,” this implies that H0 was true, but we just do not have evidence that it is false. Hence you will be marked incorrect for your decision if you use “accept H0”; use instead “fail to reject H0” or “do not reject H0.”
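The critical values shown above with Excel and the TI calculators can be looked up in most statistical software. For example, a minimal Python sketch (assuming the scipy package is available; the package choice is ours, not part of the text) for α = 0.05:

from scipy import stats

alpha = 0.05
# Two-tailed test: critical values -z_(alpha/2) and +z_(alpha/2)
z_lower = stats.norm.ppf(alpha / 2)         # about -1.96
z_upper = stats.norm.ppf(1 - alpha / 2)     # about  1.96
# Right-tailed test: critical value z_(1-alpha)
z_right = stats.norm.ppf(1 - alpha)         # about 1.645
# Left-tailed test: critical value z_alpha
z_left = stats.norm.ppf(alpha)              # about -1.645
print(z_lower, z_upper, z_right, z_left)

Replacing 0.05 with any other significance level reproduces the corresponding critical values.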
There are three methods used to test hypotheses:

The Traditional Method (Critical Value Method)
There are five steps in hypothesis testing when using the traditional method:
1. Identify the claim and formulate the hypotheses.
2. Compute the test statistic.
3. Compute the critical value(s) and state the rejection rule (the rule by which you will reject the null hypothesis H0).
4. Make the decision to reject or not reject the null hypothesis by comparing the test statistic to the critical value(s). Reject H0 when the test statistic is in the critical tail(s).
5. Summarize the results and address the claim using context and units from the research question.

Steps 2 and 3 do not have to be done in that order, so make sure you know the difference between the critical value, which comes from the stated significance level $\alpha$, and the test statistic, which is calculated from the sample data. Note: the test statistic and the critical value(s) come from the same distribution and will usually have the same letter, such as z, t, or F. The critical value(s) will have a subscript with the lower tail area $(z_{\alpha}, z_{1–\alpha}, z_{\alpha / 2})$ or an asterisk next to it (z*) to distinguish it from the test statistic. You can find the critical value(s) or test statistic in any order, but make sure you know the difference when you compare the two.

The critical value is found from α and is the start of the shaded area called the critical region (also called the rejection region or area). The test statistic is computed using sample data and may or may not be in the critical region. The critical value(s) is set before you begin (a priori) by the level of significance you are using for your test. This critical value(s) defines the shaded area known as the rejection area. For a z-test, the test statistic is the z-score found from the sample data, which is then compared to the shaded tail(s). When the test statistic is in the shaded rejection area, you reject the null hypothesis. When your test statistic is not in the shaded rejection area, then you fail to reject the null hypothesis. Depending on whether your claim is in the null or the alternative, the sample data may or may not support your claim.

The P-value Method
Most modern statistics and research methods utilize this method with the advent of computers and graphing calculators. There are five steps in hypothesis testing when using the p-value method:
1. Identify the claim and formulate the hypotheses.
2. Compute the test statistic.
3. Compute the p-value.
4. Make the decision to reject or not reject the null hypothesis by comparing the p-value with $\alpha$. Reject H0 when the p-value ≤ $\alpha$.
5. Summarize the results and address the claim.

The ideas below review the process of evaluating hypothesis tests with p-values:
• The null hypothesis represents a skeptic’s position or a position of no difference. We reject this position only if the evidence strongly favors the alternative hypothesis.
• A small p-value means that if the null hypothesis is true, there is a low probability of seeing a point estimate at least as extreme as the one we saw. We interpret this as strong evidence in favor of the alternative hypothesis.
• The p-value is constructed in such a way that we can directly compare it to the significance level ($\alpha$) to determine whether to reject H0. We reject the null hypothesis if the p-value is smaller than the significance level, $\alpha$, which is usually 0.05. Otherwise, we fail to reject H0.
• We should always state the conclusion of the hypothesis test in plain language, using context and units, so non-statisticians can also understand the results.

The Confidence Interval Method (results are in the same units as the data)
There are four steps in hypothesis testing when using the confidence interval method:
1. Identify the claim and formulate the hypotheses.
2. Compute the confidence interval.
3. Make the decision to reject or not reject the null hypothesis by comparing the hypothesized value in H0 with the bounds of the confidence interval. Reject H0 when the hypothesized value found in H0 is outside the bounds of the confidence interval. We will only be doing a two-tailed version of this.
4. Summarize the results and address the claim.

For all 3 methods, Step 1 is the most important step. If you do not correctly set up your hypotheses then the next steps will be incorrect. The decision and summary would be the same no matter which method you use. Figure 8-12 is a flow chart that may help with starting your summaries, but make sure you finish the sentence with context and units from the question. Figure 8-12

The hypothesis-testing framework is a very general tool, and we often use it without a second thought. If a person makes a somewhat unbelievable claim, we are initially skeptical. However, if there is sufficient evidence that supports the claim, we set aside our skepticism and reject the null hypothesis in favor of the alternative.

8.3.1 Z-Test
When the population standard deviation is known and stated in the problem, we will use the z-test. The z-test is a statistical test for the mean of a population. It can be used when σ is known. The population should be approximately normally distributed when n < 30. When using this model, the test statistic is $Z=\frac{\bar{x}-\mu_{0}}{\left(\frac{\sigma}{\sqrt{n}}\right)}$ where µ0 is the test value from H0.

M&M candies advertise a mean weight of 0.8535 grams. A sample of 50 M&M candies is randomly selected from a bag of M&Ms and the mean is found to be $\overline{ x }$ = 0.8472 grams. The standard deviation of the weights of all M&Ms is (somehow) known to be σ = 0.06 grams. A skeptical M&M consumer claims that the mean weight is less than what is advertised. Test this claim using the traditional method of hypothesis testing. Use a 5% level of significance.

Solution
By letting $\alpha$ = 0.05, we are allowing a 5% chance that the null hypothesis (an average weight of at least 0.8535 grams) is rejected when in actuality it is true.
1. Identify the Claim: The claim is “M&M candies have a mean weight that is less than 0.8535 grams.” This translates mathematically to µ < 0.8535 grams. Therefore, the null and alternative hypotheses are:
H0: µ = 0.8535
H1: µ < 0.8535 (claim)
This is a left-tailed test since the alternative hypothesis has a “less than” sign. We are performing a test about a population mean. We can use the z-test because we were given a population standard deviation σ (not a sample standard deviation s). In practice, σ is rarely known and usually comes from a similar study or a previous year’s data.
2. Find the Critical Value: The critical value for a left-tailed test with a level of significance $\alpha$ = 0.05 is found in a way similar to finding the critical values from confidence intervals. Because we are using the z-test, we must find the critical value $z_{\alpha}$ from the z (standard normal) distribution. This is a left-tailed test since the sign in the alternative hypothesis is < (most of the time a left-tailed test will have a negative z-score test statistic).
Figure 8-13 First draw your curve and shade the appropriate tail with the area $\alpha$ = 0.05. Usually the technology you are using only asks for the area in the left tail, which in this case is $\alpha$ = 0.05. For the TI calculators, under the DISTR menu use invNorm(0.05,0,1) = –1.645. See Figure 8-13. For Excel use =NORM.S.INV(0.05). 3. Find the Test Statistic: The formula for the test statistic is the z-score that we used back in the Central Limit Theorem section $z=\frac{\bar{x}-\mu_{0}}{\left(\frac{\sigma}{\sqrt{n}}\right)}=\frac{0.8472-0.8535}{\left(\frac{0.06}{\sqrt{50}}\right)}=-0.7425$. 4. Make the Decision: Figure 8-14 shows both the critical value and the test statistic. There are only two possible correct answers for the decision step. i. Reject H0 ii. Fail to reject H0 Figure 8-14 To make the decision whether to “Do not reject H0” or “Reject H0” using the traditional method, we must compare the test statistic z = –0.7425 with the critical value zα = –1.645. When the test statistic is in the shaded tail, called the rejection region, we reject H0; if not, we fail to reject H0. Since the test statistic z ≈ –0.7425 is in the unshaded region, the decision is: Do not reject H0. 5. Summarize the Results: At the 5% level of significance, there is not enough evidence to support the claim that the mean weight is less than 0.8535 grams. Example 8-5 used the traditional critical value method. With the onset of computers, this method is outdated, and the p-value and confidence interval methods are becoming more popular. Most statistical software packages will give a p-value and confidence interval but not the critical value. TI-84: Press the [STAT] key, go to the [TESTS] menu, arrow down to the [Z-Test] option and press the [ENTER] key. Arrow over to the [Stats] menu and press the [ENTER] key. Then type in the value for the hypothesized mean (µ0), standard deviation, sample mean, and sample size, arrow over to the $\neq$, <, > sign that is in the alternative hypothesis statement then press the [ENTER] key, arrow down to [Calculate] and press the [ENTER] key. Alternatively (if you have raw data in a list), select the [Data] menu and press the [ENTER] key. Then type in the value for the hypothesized mean (µ0), and type in your list name (on the TI-84, L1 is above the 1 key). The calculator returns the alternative hypothesis (check and make sure you selected the correct sign), the test statistic, p-value, sample mean, and sample size. TI-89: Go into the Stat/List Editor App. Select [F6] Tests. Select the first option Z-Test. Select Data if you have raw data in a list; select Stats if you have the summarized statistics given to you in the problem. If you have data, press [2nd] Var-Link, then go down to list1 in the main folder to select the list name. If you have statistics, then enter the values. 
Leave Freq:1 alone, arrow over to the $\neq$, <, > sign that is in the alternative hypothesis statement then press the [ENTER]key, arrow down to [Calculate] and press the [ENTER] key. The calculator returns the test statistic and the p-value. What is the p-value? The p-value is the probability of observing an effect as least as extreme as in your sample data, assuming that the null hypothesis is true. The p-value is calculated based on the assumptions that the null hypothesis is true for the population and that the difference in the sample is caused entirely by random chance. Recall the example at the beginning of the chapter. Suppose a manufacturer of a new laptop battery claims the mean life of the battery is 900 days with a standard deviation of 40 days. You are the buyer of this battery and you think this claim is inflated. You would like to test your belief because without a good reason you cannot get out of your contract. You take a random sample of 35 batteries and find that the mean battery life is 890 days. Test the claim using the p-value method. Let $\alpha$ = 0.05. Solution We had the following hypotheses: H0: μ = 900, since the manufacturer says the mean life of a battery is 900 days. H1: μ < 900, since you believe the mean life of the battery is less than 900 days. The test statistic was found to be: $Z=\frac{\bar{x}-\mu_{0}}{\left(\frac{\sigma}{\sqrt{n}}\right)}=\frac{890-900}{\left(\frac{40}{\sqrt{35}}\right)}=-1.479$. The p-value is P($\overline{ x }$ < 890 | H0 is true) = P($\overline{ x }$< 890 | μ = 900) = P(Z < –1.479). On the TI Calculator use normalcdf(-1E99,890,900,40/$\sqrt{35}$) $\approx$ 0.0696. See Figure 8-15. Figure 8-15 Alternatively, in Excel use =NORM.DIST(890,900,40/SQRT(35),TRUE) $\approx$ 0.0696. The TI calculators will easily find the p-value for you. Now compare the p-value = 0.0696 to $\alpha$ = 0.05. Make the decision to reject or not reject the null hypothesis by comparing the p-value with $\alpha$. Reject H0 when the p-value ≤ α, and do not reject H0 when the p-value > $\alpha$. The p-value for this example is larger than alpha 0.0696 > 0.05, therefore the decision is to not reject H0. Since we fail to reject the null, there is not enough evidence to indicate that the mean life of the battery is less than 900 days. 8.3.2 T-Test When the population standard deviation is unknown, we will use the t-test. The t-test is a statistical test for the mean of a population. It will be used when σ is unknown. The population should be approximately normally distributed when n < 30. When using this model, the test statistic is $t=\frac{\bar{x}-\mu_{0}}{\left(\frac{s}{\sqrt{n}}\right)}$ where µ0 is the test value from the H0. The degrees of freedom are df = n – 1. Z Versus T The z and t-tests are easy to mix up. Sometimes a standard deviation will be stated in the problem without specifying if it is a population’s standard deviation σ or the sample standard deviation s. If the standard deviation is in the same sentence that describes the sample or only raw data is given then this would be s. When you only have sample data, use the t-test. Figure 8-16 is a flow chart to remind you when to use z versus t. Figure 8-16 Use Figure 8-17 as a guide in setting up your hypotheses. The two-tailed test will always have a not equal ≠ sign in H1 and both tails shaded. The right-tailed test will always have the greater than > sign in H1 and the right tail shaded. The left-tailed test will always have a less than < sign in H1 and the left tail shaded. 
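Before moving on to the t-test, the two z-test examples above can be verified with a short Python sketch using the scipy.stats library. This is an optional check; Python is not part of this text's TI-84/Excel workflow, and the sketch assumes scipy is installed.

from scipy import stats
import math

# Example 8-5: M&M weights, left-tailed z-test (critical value method)
z_mm = (0.8472 - 0.8535) / (0.06 / math.sqrt(50))   # test statistic, about -0.7425
crit = stats.norm.ppf(0.05)                         # left-tailed critical value, about -1.645
print(z_mm, crit)                                   # z is not in the rejection region, so do not reject H0

# Battery example: left-tailed z-test (p-value method)
z_bat = (890 - 900) / (40 / math.sqrt(35))          # test statistic, about -1.479
p_value = stats.norm.cdf(z_bat)                     # left-tail area, about 0.0696
print(z_bat, p_value)                               # 0.0696 > 0.05, so do not reject H0

Both checks lead to the same decisions as the worked solutions above.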
Figure 8-17 The label on a particular brand of cream of mushroom soup states that (on average) there is 870 mg of sodium per serving. A nutritionist would like to test if the average is actually more than the stated value. To test this, 13 servings of this soup were randomly selected and amount of sodium measured. The sample mean was found to be 882.4 mg and the sample standard deviation was 24.3 mg. Assume that the amount of sodium per serving is normally distributed. Test this claim using the traditional method of hypothesis testing. Use the $\alpha$ = 0.05 level of significance. Solution Step 1: State the hypotheses and identify the claim: The statement “the average is more (>) than 870” must be in the alternative hypothesis. Therefore, the null and alternative hypotheses are: H0: µ = 870 H1: µ > 870 (claim) This is a right-tailed test with the claim in the alternative hypothesis. Step 2: Compute the test statistic: We are using the t-test because we are performing a test about a population mean. We must use the t-test (instead of the z-test) because the population standard deviation σ is unknown. (Note: be sure that you know why we are using the t-test instead of the z-test in general.) The formula for the test statistic is $t=\frac{\bar{x}-\mu_{0}}{\left(\frac{S}{\sqrt{n}}\right)}=\frac{882.4-870}{\left(\frac{24.3}{\sqrt{13}}\right)}=1.8399$. Note: If you were given raw data use 1-var Stats on your calculator to find the sample mean, sample size and sample standard deviation. Step 3: Compute the critical value(s): The critical value for a right-tailed test with a level of significance $\alpha$ = 0.05 is found in a way similar to finding the critical values from confidence intervals. Since we are using the t-test, we must find the critical value t1–$\alpha$ from a t-distribution with the degrees of freedom, df = n – 1 = 13 –1 = 12. Use the DISTR menu invT option. Note that if you have an older TI-84 or a TI-83 calculator you need to have the invT program installed or use Excel. Draw and label the t-distribution curve with the critical value as in Figure 8-18. Figure 8-18 The critical value is t1–$\alpha$ = 1.782 and the rejection rule becomes: Reject H0 if the test statistic t ≥ t1–$\alpha$ = 1.782. Step 4: State the decision. Decision: Since the test statistic t =1.8399 is in the critical region, we should Reject H0. Step 5: State the summary. Summary: At the 5% significance level, we have sufficient evidence to say that the average amount of sodium per serving of cream of mushroom soup exceeds the stated 870 mg amount. Example 8-7 Continued: Use the prior example, but this time use the p-value method. Again, let the significance level be $\alpha$ = 0.05. Solution Step 1: The hypotheses remain the same. H0: µ = 870 H1: µ > 870 (claim) Step 2: The test statistic remains the same, $t=\frac{\bar{x}-\mu_{0}}{\left(\frac{S}{\sqrt{n}}\right)}=\frac{882.4-870}{\left(\frac{24.3}{\sqrt{13}}\right)}=1.8399$. Step 3: Compute the p-value. For a right-tailed test, the p-value is found by finding the area to the right of the test statistic t = 1.8339 under a tdistribution with 12 degrees of freedom. See Figure 8-19. Figure 8-19 Note that exact p-values for a t-test can only be found using a computer or calculator. For the TI calculators this is in the DISTR menu. Use tcdf(lower,upper,df). For this example, we would have p-value = tcdf(1.8399,∞,12) = 0.0453. The p-value is the probability of observing an effect as least as extreme as in your sample data, assuming that the null hypothesis is true. 
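The sodium example can also be verified with a short Python sketch using scipy.stats (again an optional check, assuming scipy is installed; it is not part of the text's calculator workflow):

from scipy import stats
import math

# Cream of mushroom soup: right-tailed t-test
xbar, mu0, s, n, alpha = 882.4, 870, 24.3, 13, 0.05
df = n - 1
t_stat = (xbar - mu0) / (s / math.sqrt(n))   # about 1.8399
crit = stats.t.ppf(1 - alpha, df)            # right-tailed critical value, about 1.782
p_value = stats.t.sf(t_stat, df)             # area to the right of t, about 0.0453
print(t_stat, crit, p_value)

Both the critical value comparison (1.8399 ≥ 1.782) and the p-value comparison (0.0453 ≤ 0.05) lead to rejecting H0, matching the worked solution.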
The p-value is calculated based on the assumptions that the null hypothesis is true for the population and that the difference in the sample is caused entirely by random chance. Step 4: State the decision. The rejection rule: reject the null hypothesis if the p-value ≤ $\alpha$. Decision: Since the p-value = 0.0453 is less than $\alpha$ = 0.05, we Reject H0. This agrees with the decision from the traditional method. (These two methods should always agree!) Step 5: State the summary. The summary remains the same as in the previous method. At the 5% significance level, we have sufficient evidence to say that the average amount of sodium per serving of cream of mushroom soup exceeds the stated 870 mg amount. We can use technology to get the test statistic and p-value. TI-84: If you have raw data, enter the data into a list before you go to the test menu. Press the [STAT] key, arrow over to the [TESTS] menu, arrow down to the [2:T-Test] option and press the [ENTER] key. Arrow over to the [Stats] menu and press the [ENTER] key. Then type in the hypothesized mean (µ0), sample or population standard deviation, sample mean, sample size, arrow over to the $\neq$, <, > sign that is the same as the problem’s alternative hypothesis statement then press the [ENTER] key, arrow down to [Calculate] and press the [ENTER] key. The calculator returns the t-test statistic and p-value. Alternatively (If you have raw data in list one) Arrow over to the [Data] menu and press the [ENTER] key. Then type in the hypothesized mean (µ0), L1, leave Freq:1 alone, arrow over to the $\neq$, <, > sign that is the same in the problem’s alternative hypothesis statement then press the [ENTER] key, arrow down to [Calculate] and press the [ENTER] key. The calculator returns the t-test statistic and the p-value. TI-89: Go to the [Apps] Stat/List Editor, then press [2nd] then F6 [Tests], then select 2: T-Test. Choose the input method, data is when you have entered data into a list previously or stats when you are given the mean and standard deviation already. Then type in the hypothesized mean (μ0), sample standard deviation, sample mean, sample size (or list name (list1), and Freq: 1), arrow over to the $\neq$, <, > and select the sign that is the same as the problem’s alternative hypothesis statement then press the [ENTER] key to calculate. The calculator returns the t-test statistic and p-value. The weight of the world’s smallest mammal is the bumblebee bat (also known as Kitti’s hog-nosed bat or Craseonycteris thonglongyai) is approximately normally distributed with a mean 1.9 grams. Such bats are roughly the size of a large bumblebee. A chiropterologist believes that the Kitti’s hog-nosed bats in a new geographical region under study has a different average weight than 1.9 grams. A sample of 10 bats weighed in grams in the new region are shown below. Use the confidence interval method to test the claim that mean weight for all bumblebee bats is not 1.9 g using a 10% level of significance. Solution Step 1: State the hypotheses and identify the claim. The key phrase is “mean weight not equal to 1.9 g.” In mathematical notation, this is μ ≠ 1.9. The not equal ≠ symbol is only allowed in the alternative hypothesis so the hypotheses would be: H0: μ = 1.9 H1: μ ≠ 1.9 Step 2: Compute the confidence interval. First, find the t critical value using df = n – 1 = 9 and 90% confidence. In Excel t$\alpha$/2 = T.INV(.1/2,9) = 1.833113. 
Then use technology to find the sample mean and sample standard deviation, and substitute your numbers into the formula. \begin{aligned} &\bar{x} \pm t_{\alpha / 2}\left(\frac{s}{\sqrt{n}}\right) \\ &\Rightarrow 1.985 \pm 1.833113\left(\frac{0.235242}{\sqrt{10}}\right) \\ &\Rightarrow 1.985 \pm 1.833113(0.07439) \\ &\Rightarrow 1.985 \pm 0.136365 \\ &\Rightarrow(1.8486,2.1214) \end{aligned} The answer can be given as an inequality 1.8486 < µ < 2.1214 or in interval notation (1.8486, 2.1214). Step 3: Make the decision: The rejection rule is to reject H0 when the hypothesized value found in H0 is outside the bounds of the confidence interval. The null hypothesis was μ = 1.9 g. Since 1.9 is between the lower and upper boundaries of the confidence interval 1.8486 < µ < 2.1214, we do not reject H0. The sampling distribution, assuming the null hypothesis is true, will have a mean of μ = 1.9 and a standard error of $\frac{0.2352}{\sqrt{10}}=0.07439$. When we calculated the confidence interval using the sample mean of 1.985, the confidence interval captured the hypothesized mean of 1.9. See Figure 8-20. Figure 8-20 Step 4: State the summary: At the 10% significance level, there is not enough evidence to support the claim that the population mean weight for bumblebee bats in the new geographical region is different from 1.9 g. This interval can also be computed using a TI calculator or Excel. TI-84: Enter the data in a list, choose Tests > TInterval. Select and highlight Data, change the list and confidence level to match the question. Choose Calculate. Excel: Select Data Analysis > Descriptive Statistics. Note, you will need to change the cell reference numbers to where you copy and paste your data, only check the label box if you selected the label in the input range, and change the confidence level to 1 – $\alpha$. Below is the Excel output. Excel only calculates the descriptive statistics with the margin of error. Use Excel to find each piece of the interval $\bar{x} \pm t_{\alpha / 2}\left(\frac{s}{\sqrt{n}}\right)$. Excel $t_{\alpha / 2}$ = T.INV(0.1/2,9) = 1.8311. \begin{aligned} &\bar{x} \pm t_{\alpha / 2}\left(\frac{s}{\sqrt{n}}\right) \\ &\Rightarrow 1.985 \pm 1.8311\left(\frac{0.2352}{\sqrt{10}}\right) \\ &\Rightarrow 1.985 \pm 1.8311(0.07439) \\ &\Rightarrow 1.985 \pm 0.136365 \end{aligned} Can you find the mean and standard error $\frac{s}{\sqrt{n}}=0.07439$ in the Excel output? Can you find the margin of error $t_{\frac{\alpha}{2}}\left(\frac{s}{\sqrt{n}}\right)=0.136365$ in the Excel output? Subtract and add the margin of error from the sample mean to get each confidence interval boundary (1.8486, 2.1214). If we have raw data, Excel will do both the traditional and p-value method. Example 8-8 Continued: Use the prior example, but this time use the p-value method. Again, let the significance level be $\alpha$ = 0.10. Solution Step 1: State the hypotheses. The hypotheses are: H0: μ = 1.9 H1: μ ≠ 1.9 Step 2: Compute the test statistic, $t=\frac{\bar{x}-\mu_{0}}{\left(\frac{s}{\sqrt{n}}\right)}=\frac{1.985-1.9}{\left(\frac{0.235242}{\sqrt{10}}\right)}=1.1426$ Verify using Excel. Excel does not have a one-sample t-test, but it does have a two-sample t-test that can be used with a dummy column of zeros as the second sample to get the results for just one sample. Copy the data into cell A1. In column B, next to the data, type in a dummy column of zeros, and label it Dummy. (We frequently use placeholders in statistics called dummy variables.) 
Select the Data Analysis tool and then select t-Test: Paired Two Sample for Means, then select OK. For the Variable 1 Range select the data in cells A1:A11, including the label. For the Variable 2 Range select the dummy column of zeros in cells B1:B11, including the label. Change the hypothesized mean to 1.9. Check the Labels box and change the alpha value to 0.10, then select OK. Excel provides the following output: Step 3: Compute the p-value. Since the alternative hypothesis has a ≠ symbol, use the Excel output next two-tailed p-value = 0.2826. Step 4: Make the decision. For the p-value method we would compare the two-tailed p-value = 0.2826 to $\alpha$ = 0.10. The rule is to reject H0 if the p-value ≤ $\alpha$. In this case the p-value > $\alpha$, therefore we do not reject H0. Again, the same decision as the confidence interval method. For the critical value method, we would compare the test statistic t = 1.142625 with the critical values for a twotailed test $t_{\frac{\alpha}{2}}$ = ±1.833113. Since the test statistic is between –1.8331 and 1.8331 we would not reject H0, which is the same decision using the p-value method or the confidence interval method. Step 5: State the summary. There is not enough evidence to support the claim that the population mean weight for all bumblebee bats is not equal to 1.9 g. One-Tailed Versus Two-Tailed Tests Most software packages do not ask which tailed test you are performing. Make sure you look at the sign in the alternative hypothesis to and determine which p-value to use. The difference is just what part of the picture you are looking at. In Excel, the critical value shown is for a one-tail test and does not specify left or right tail. The critical value in the output will always be positive, it is up to you to know if the critical value should be a negative or positive value. For example, Figures 8-21, 8-22, and 8-23 uses df = 9, $\alpha$ = 0.10 to show all three tests comparing either the test statistic with the critical value or the p-value with $\alpha$. Two-Tailed Test The test statistic can be negative or positive depending on what side of the distribution it falls; however, the p-value is a probability and will always be a positive number between 0 and 1. See Figure 8-21. Figure 8-21 Right-Tailed Test If we happened to do a right-tailed test with df = 9 and $\alpha$ = 0.10, the critical value t1-$\alpha$ = 1.383 will be in the right tail and usually the test statistic will be a positive number. See Figure 8-22. Figure 8-22 Left-Tailed Test If we happened to do a left-tailed test with df = 9 and $\alpha$ = 0.10, the critical value t$\alpha$ = –1.383 will be in the left tail and usually the test statistic will be a negative number. See Figure 8-23. Figure 8-23
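Both the interval from the confidence interval solution and the p-values in the Excel output above can be reproduced with a short Python sketch using scipy.stats (an optional check; Python is not part of this text's workflow and scipy is assumed to be installed):

from scipy import stats
import math

# Bumblebee bat example: 90% t-interval from the summary statistics
xbar, s, n = 1.985, 0.235242, 10
t_crit = stats.t.ppf(0.95, n - 1)        # t critical value for 90% confidence, about 1.8331
moe = t_crit * s / math.sqrt(n)          # margin of error, about 0.1364
print(xbar - moe, xbar + moe)            # about (1.8486, 2.1214)

# Tail areas for the test statistic t = 1.1426 with df = 9
t_stat, df = 1.1426, 9
p_right = stats.t.sf(t_stat, df)         # right-tail p-value, about 0.1413
p_left = stats.t.cdf(t_stat, df)         # left-tail p-value, about 0.8587
p_two = 2 * stats.t.sf(abs(t_stat), df)  # two-tailed p-value, about 0.2826 (matches the Excel output)
print(p_right, p_left, p_two)

Which of the three p-values you report depends only on the sign in H1; for the two-tailed bat-weight test, the 0.2826 value is the one compared with $\alpha$ = 0.10.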
When you read a question, it is essential that you correctly identify the parameter of interest. The parameter determines which model to use. Make sure that you can recognize and distinguish between a question regarding a population mean and a question regarding a population proportion. The z-test is a statistical test for a population proportion. It can be used when np ≥ 10 and nq ≥ 10. Definition: z-Test The formula for the test statistic is: $Z=\dfrac{\hat{p}-p_{0}}{\sqrt{\left(\dfrac{p_{0} q_{0}}{n}\right)}}.$ where $n$ is the sample size $\hat{p}=\dfrac{x}{n}$ is the sample proportion (sometimes already given as a %) and $p_0$ is the hypothesized population proportion, $q_0 = 1 – p_0. \nonumber$ Use the phrases in Figure 8-24 to help with setting up the hypotheses. Figure 8-24 Note we will not be using the t-distribution with proportions. We will use a standard normal z distribution for testing a proportion since this test uses the normal approximation to the binomial distribution (never use the t-distribution). If you are doing a left-tailed z-test the critical value will be negative. If you are performing a right-tailed z-test the critical value will be positive. If you were performing a two-tailed z-test then your critical values would be ±critical value. The p-value will always be a positive number between 0 and 1. The most important step in any method you use is setting up your null and alternative hypotheses. The critical values and p-value can be found using a standard normal distribution the same way that we did for the one sample z-test. It has been found that 85.6% of all enrolled college and university students in the United States are undergraduates. A random sample of 500 enrolled college students in a particular state revealed that 420 of them were undergraduates. Is there sufficient evidence to conclude that the proportion differs from the national percentage? Use $\alpha$ = 0.05. Show that all three methods of hypothesis testing yield the same results. Solution At this point you should be more comfortable with the steps of a hypothesis test and not have to number each step, but know what each step means. Critical Value Method Step 1: State the hypotheses: The key words in this example, “proportion” and “differs,” give the hypotheses: H0: p = 0.856 H1: p ≠ 0.856 (claim) Step 2: Compute the test statistic. Before finding the test statistic, find the sample proportion $\hat{p}=\dfrac{420}{500}=0.84$ and q0 = 1 – 0.856 = 0.144. Next, compute the test statistic: $z=\dfrac{\hat{p}-p_{0}}{\sqrt{\left(\dfrac{p_{0} q_{0}}{n}\right)}}=\dfrac{0.84-0.856}{\sqrt{\left(\dfrac{0.856 \cdot 0.144}{500}\right)}}=-1.019.$ Step 3: Draw and label the curve with the critical values. See Figure 8-25. Use $\alpha$ = 0.05 and technology to compute the critical values $z_{\alpha / 2}$ and $z_{1-\alpha / 2}$. Excel: $z_{\alpha / 2}$ =NORM.S.INV(0.025) = –1.96 and $z_{1-\alpha / 2}$ =NORM.S.INV(0.975) = 1.96. TI-Calculator: $z_{\alpha / 2}$ = invNorm(0.025,0,1) = –1.96 and $z_{1-\alpha / 2}$ = invNorm(0.975,0,1) = 1.96. Figure 8-25 Step 4: State the decision. Since the test statistic is not in the shaded rejection area, do not reject H0. Step 5: State the summary. At the 5% level of significance, there is not enough evidence to conclude that the proportion of undergraduates in college for this state differs from the national average of 85.6%. P-value Method The hypotheses and test statistic stay the same. 
H0: p = 0.856 H1: p ≠ 0.856 (claim) $Z=\dfrac{\hat{p}-p_{0}}{\sqrt{\left(\dfrac{p_{0} q_{0}}{n}\right)}}=\dfrac{0.84-0.856}{\sqrt{\left(\dfrac{0.856 \cdot 0.144}{500}\right)}}=-1.019$ To find the p-value we need to find the P(Z > |1.019|) the area to the left of z = –1.019 and to the right of z = 1.019. First, find the area below (since the test statistic is negative) z = –1.019 using the normalcdf we get 0.1541. Then, double this area to get the p-value = 0.3082. Since the p-value > $\alpha$ the decision is to not reject H0. Summary: There is not enough evidence to conclude that the proportion of undergraduates in college for this state differs from the national average of 85.6%. There is a shortcut for this test on the TI Calculators, which will quickly find the test statistic and p-value. The rejection rule for the two methods are: • P-value method: reject H0 when the p-value ≤ $\alpha$. • Critical value method: reject H0 when the test statistic is in the critical region. TI-84: Press the [STAT] key, arrow over to the [TESTS] menu, arrow down to the option [5:1-PropZTest] and press the [ENTER] key. Type in the hypothesized proportion (p0), x, sample size, arrow over to the $\neq$, <, > sign that is the same in the problem’s alternative hypothesis statement then press the [ENTER] key, arrow down to [Calculate] and press the [ENTER] key. The calculator returns the z-test statistic and the p-value. Note: sometimes you are not given the x value but a percentage instead. To find the x to use in the calculator, multiply $\hat{p}$ by the sample size and round off to the nearest integer. The calculator will give you an error message if you put in a decimal for x or n. For example, if $\hat{p}$= 0.22 and n = 124 then 0.22*124 = 27.28, so use x = 27. TI-89: Go to the [Apps] Stat/List Editor, then press [2nd] then F6 [Tests], then select 5: 1-PropZ-Test. Type in the hypothesized proportion (p0), x, sample size, arrow over to the $\neq$, <, > sign that is the same in the problem’s alternative hypothesis statement then press the [ENTER] key to calculate. The calculator returns the z-test statistic and the pvalue. Note: sometimes you are not given the x value but a percentage instead. To find the x value to use in the calculator, multiply $\hat{p}$ by the sample size and round off to the nearest integer. The calculator will give you an error message if you put in a decimal for x or n. For example, if $\hat{p}$ = 0.22 and n = 124 then 0.22*124 = 27.28, so use x = 27.
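The undergraduate-proportion example can also be confirmed with a brief Python sketch (optional, assuming scipy is installed); it reproduces the test statistic, the two-tailed critical value, and the p-value found above.

from scipy import stats
import math

# One-proportion z-test, two-tailed
x, n, p0, alpha = 420, 500, 0.856, 0.05
p_hat = x / n                                      # 0.84
z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)    # test statistic, about -1.019
crit = stats.norm.ppf(1 - alpha / 2)               # two-tailed critical value, about 1.96
p_value = 2 * stats.norm.cdf(-abs(z))              # two-tailed p-value, about 0.3082
print(z, crit, p_value)

Since |z| < 1.96 and the p-value is greater than $\alpha$, the script agrees with the decision to not reject H0.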
Chapter 8 Exercises 1. The plant-breeding department at a major university developed a new hybrid boysenberry plant called Stumptown Berry. Based on research data, the claim is made that from the time shoots are planted 90 days on average are required to obtain the first berry. A corporation that is interested in marketing the product tests 60 shoots by planting them and recording the number of days before each plant produces its first berry. The sample mean is 92.3 days. The corporation wants to know if the mean number of days is different from the 90 days claimed. Which one is the correct set of hypotheses? a) H0: p = 90% H1: p ≠ 90% b) H0: μ = 90 H1: μ ≠ 90 c) H0: p = 92.3% H1: p ≠ 92.3% d) H0: μ = 92.3 H1: μ ≠ 92.3 e) H0: μ ≠ 90 H1: μ = 90 2. Match the symbol with the correct phrase. 3. According to the February 2008 Federal Trade Commission report on consumer fraud and identity theft, 23% of all complaints in 2007 were for identity theft. In that year, Alaska had 321 complaints of identity theft out of 1,432 consumer complaints. Does this data provide enough evidence to show that Alaska had a lower proportion of identity theft than 23%? Which one is the correct set of hypotheses? Federal Trade Commission, (2008). Consumer fraud and identity theft complaint data: January-December 2007. Retrieved from website: http://www.ftc.gov/opa/2008/02/fraud.pdf. a) H0: p = 23% H1: p < 23% b) H0: μ = 23 H1: μ < 23 c) H0: p < 23% H1: p ≥ 23% d) H0: p = 0.224 H1: p < 0.224 e) H0: μ < 0.224 H1: μ ≥ 0.224 4. Compute the z critical value for a right-tailed test when $\alpha$ = 0.01. 5. Compute the z critical value for a two-tailed test when $\alpha$ = 0.01. 6. Compute the z critical value for a left-tailed test when $\alpha$ = 0.05. 7. Compute the z critical value for a two-tailed test when $\alpha$ = 0.05. 8. As of 2018, the Centers for Disease Control and Protection’s (CDC) national estimate that 1 in 68 $\approx$ 0.0147 children have been diagnosed with autism spectrum disorder (ASD). A researcher believes that the proportion of children in their county is different from the CDC estimate. Which one is the correct set of hypotheses? a) H0: p = 0.0147 H1: p ≠ 0.0147 b) H0: μ = 0.0147 H1: μ ≠ 0.0147 c) H0: p ≠ 0.0147 H1: p = 0.0147 d) H0: μ = 68 H1: μ ≠ 68 e) H0: = 0.0147 H1: ≠ 0.0147 9. Match the phrase with the correct symbol. a. Sample Size i. α b. Population Mean ii. n c. Sample Variance iii. σ² d. Sample Mean iv. s² e. Population Standard Deviation v. s f. P(Type I Error) vi. $\bar{x}$ g. Sample Standard Deviation vii. σ h. Population Variance viii. μ 10. The Food & Drug Administration (FDA) regulates that fresh albacore tuna fish contains at most 0.82 ppm of mercury. A scientist at the FDA believes the mean amount of mercury in tuna fish for a new company exceeds the ppm of mercury. Which one is the correct set of hypotheses? a) H0: p = 82% H1: p > 82% b) H0: μ = 0.82 H1: μ > 0.82 c) H0: p > 82% H1: p ≤ 82% d) H0: μ = 0.82 H1: μ ≠ 0.82 e) H0: μ > 0.82 H1: μ ≤ 0.82 11. Match the symbol with the correct phrase. 12. The plant-breeding department at a major university developed a new hybrid boysenberry plant called Stumptown Berry. Based on research data, the claim is made that from the time shoots are planted 90 days on average are required to obtain the first berry. A corporation that is interested in marketing the product tests 60 shoots by planting them and recording the number of days before each plant produces its first berry. The sample mean is 92.3 days. 
The corporation will not market the product if the mean number of days is more than the 90 days claimed. The hypotheses are H0: μ = 90 H1: μ > 90. Which answer is the correct type I error in the context of this problem? a) The corporation will not market the Stumptown Berry even though the berry does produce fruit within the 90 days. b) The corporation will market the Stumptown Berry even though the berry does produce fruit within the 90 days. c) The corporation will not market the Stumptown Berry even though the berry does produce fruit in more than 90 days. d) The corporation will market the Stumptown Berry even though the berry does produce fruit in more than 90 days. 13. The Food & Drug Administration (FDA) regulates that fresh albacore tuna fish contains at most 0.82 ppm of mercury. A scientist at the FDA believes the mean amount of mercury in tuna fish for a new company exceeds the ppm of mercury. The hypotheses are H0: μ = 0.82 H1: μ > 0.82. Which answer is the correct type II error in the context of this problem? a) The fish is rejected by the FDA when in fact it had less than 0.82 ppm of mercury. b) The fish is accepted by the FDA when in fact it had less than 0.82 ppm of mercury. c) The fish is rejected by the FDA when in fact it had more than 0.82 ppm of mercury. d) The fish is accepted by the FDA when in fact it had more than 0.82 ppm of mercury. 14. A two-tailed z-test found a test statistic of z = 2.153. At a 1% level of significance, which would the correct decision? a) Do not reject H0 b) Reject H0 c) Accept H0 d) Reject H1 e) Do not reject H1 15. A left-tailed z-test found a test statistic of z = -1.99. At a 5% level of significance, what would the correct decision be? a) Do not reject H0 b) Reject H0 c) Accept H0 d) Reject H1 e) Do not reject H1 16. A right-tailed z-test found a test statistic of z = 0.05. At a 5% level of significance, what would the correct decision be? a) Reject H0 b) Accept H0 c) Reject H1 d) Do not reject H0 e) Do not reject H1 17. A two-tailed z-test found a test statistic of z = -2.19. At a 1% level of significance, which would the correct decision? a) Do not reject H0 b) Reject H0 c) Accept H0 d) Reject H1 e) Do not reject H1 18. According to the February 2008 Federal Trade Commission report on consumer fraud and identity theft, 23% of all complaints in 2007 were for identity theft. In that year, Alaska had 321 complaints of identity theft out of 1,432 consumer complaints. Does this data provide enough evidence to show that Alaska had a lower proportion of identity theft than 23%? The hypotheses are H0: p = 23% H1: p < 23%. Which answer is the correct type I error in the context of this problem? Federal Trade Commission, (2008). Consumer fraud and identity theft complaint data: January-December 2007. Retrieved from website: http://www.ftc.gov/opa/2008/02/fraud.pdf. a) It is believed that less than 23% of Alaskans had identity theft and there really was 23% or less that experienced identity theft. b) It is believed that more than 23% of Alaskans had identity theft and there really was 23% or more that experience identity theft. c) It is believed that less than 23% of Alaskans had identity theft even though there really was 23% or more that experienced identity theft. d) It is believed that more than 23% of Alaskans had identity theft even though there really was less than 23% that experienced identity theft 19. A hypothesis test was conducted during a clinical trial to see if a new COVID-19 vaccination reduces the risk of contracting the virus. 
What is the Type I and II errors in terms of approving the vaccine for use? 20. A manufacturer of rechargeable laptop batteries claims its batteries have, on average, 500 charges. A consumer group decides to test this claim by assessing the number of times 30 of their laptop batteries can be recharged and finds a p-value is 0.1111; thus, the null hypothesis is not rejected. What is the Type II error for this situation? 21. A commonly cited standard for one-way length (duration) of school bus rides for elementary school children is 30 minutes. A local government office in a rural area conducts a study to determine if elementary schoolers in their district have a longer average one-way commute time. If they determine that the average commute time of students in their district is significantly higher than the commonly cited standard they will invest in increasing the number of school buses to help shorten commute time. What would a Type II error mean in this context? 22. The Centers for Disease Control and Prevention (CDC) 2018 national estimate that 1 in 68 $\approx$ 0.0147 children have been diagnosed with autism spectrum disorder (ASD). A researcher believes that the proportion of children in their county is different from the CDC estimate. The hypotheses are H0: p = 0.0147 H1: p ≠ 0.0147. Which answer is the correct type II error in the context of this problem? a) The proportion of children diagnosed with ASD in the researcher’s county is believed to be different from the national estimate, even though the proportion is the same. b) The proportion of children diagnosed with ASD in the researcher’s county is believed to be different from the national estimate and the proportion is different. c) The proportion of children diagnosed with ASD in the researcher’s county is believed to be the same as the national estimate, even though the proportion is different. d) The proportion of children diagnosed with ASD in the researcher’s county is believed to be the same as the national estimate and the proportion is the same. 23. The Food & Drug Administration (FDA) regulates that fresh albacore tuna fish contains at most 0.82 ppm of mercury. A scientist at the FDA believes the mean amount of mercury in tuna fish for a new company exceeds the ppm of mercury. A test statistic was found to be 2.576 and a critical value was found to be 1.645, what is the correct decision and summary? a) Reject H0, there is enough evidence to support the claim that the amount of mercury in the new company’s tuna fish exceeds the FDA limit of 0.82 ppm. b) Accept H0, there is not enough evidence to reject the claim that the amount of mercury in the new company’s tuna fish exceeds the FDA limit of 0.82 ppm. c) Reject H1, there is not enough evidence to reject the claim that the amount of mercury in the new company’s tuna fish exceeds the FDA limit of 0.82 ppm. d) Reject H0, there is not enough evidence to support the claim that the amount of mercury in the new company’s tuna fish exceeds the FDA limit of 0.82 ppm. e) Do not reject H0, there is not enough evidence to support the claim that the amount of mercury in the new company’s tuna fish exceeds the FDA limit of 0.82 ppm. 24. The plant-breeding department at a major university developed a new hybrid boysenberry plant called Stumptown Berry. Based on research data, the claim is made that from the time shoots are planted 90 days on average are required to obtain the first berry. 
A corporation that is interested in marketing the product tests 60 shoots by planting them and recording the number of days before each plant produces its first berry. The corporation wants to know if the mean number of days is different from the 90 days claimed. A random sample was taken and the following test statistic was z = -2.15 and critical values of z = ±1.96 was found. What is the correct decision and summary? a) Do not reject H0, there is not enough evidence to support the corporation’s claim that the mean number of days until a berry is produced is different from the 90 days claimed by the university. b) Reject H0, there is enough evidence to support the corporation’s claim that the mean number of days until a berry is produced is different from the 90 days claimed by the university. c) Accept H0, there is enough evidence to support the corporation’s claim that the mean number of days until a berry is produced is different from the 90 days claimed by the university. d) Reject H1, there is not enough evidence to reject the corporation’s claim that the mean number of days until a berry is produced is different from the 90 days claimed by the university. e) Reject H0, there is not enough evidence to support the corporation’s claim that the mean number of days until a berry is produced is different from the 90 days claimed by the university. 25. You are conducting a study to see if the accuracy rate for fingerprint identification is significantly different from 0.34. Thus, you are performing a two-tailed test. Your sample data produce the test statistic z = 2.504. Use your calculator to find the p-value and state the correct decision and summary. 26. The SAT exam in previous years is normally distributed with an average score of 1,000 points and a standard deviation of 150 points. The test writers for this upcoming year want to make sure that the new test does not have a significantly different mean score. They have a random sample of 20 students take the SAT and their mean score was 1,050 points. a) Test to see if the mean time has significantly changed using a 5% level of significance. Show all your steps using the critical value method. b) What is a type I error for this problem? c) What is a type II error for this problem? 27. A sample of 45 body temperatures of athletes had a mean of 98.8˚F. Assume the population standard deviation is known to be 0.62˚F. Test the claim that the mean body temperature for all athletes is more than 98.6˚F. Use a 1% level of significance. Show all your steps using the p-value method. 28. Compute the t critical value for a left-tailed test when $\alpha$ = 0.10 and df = 10. 29. Compute the t critical value for a two-tailed test when $\alpha$ = 0.05 with a sample size of 18. 30. Using a t-distribution with df = 25, find the P(t ≥ 2.185). 31. A student is interested in becoming an actuary. They know that becoming an actuary takes a lot of schooling and they will have to take out student loans. They want to make sure the starting salary will be higher than \$55,000/year. They randomly sample 30 starting salaries for actuaries and find a p-value of 0.0392. Use $\alpha$ = 0.05. a) Choose the correct hypotheses. i. H0: μ = 55,000 H1: μ < 55,000 ii. H0: μ > 55,000 H1: μ ≤ 55,000 iii. H0: μ = 55,000 H1: μ > 55,000 iv. H0: μ < 55,000 H1: μ ≥ 55,000 v. H0: μ = 55,000 H1: μ ≠ 55,000 b) Should the student pursue an actuary career? i. Yes, since we reject the null hypothesis. ii. Yes, since we reject the claim. iii. No, since we reject the claim. iv. 
No, since we reject the null hypothesis. 32. The workweek for adults in the United States work full-time is normally distributed with a mean of 47 hours. A newly hired engineer at a start-up company believes that employees at start-up companies work more on average then working adults in the U.S. She asks 12 engineering friends at start-ups for the lengths in hours of their workweek. Their responses are shown in the table below. Test the claim using a 5% level of significance. Show all 5 steps using the p-value method. 33. The average number of calories from a fast food meal for adults in the United States is 842 calories. A nutritionist believes that the average is higher than reported. They sample 11 meals that adults ordered and measure the calories for each meal shown below. Test the claim using a 5% level of significance. Assume that fast food calories are normally distributed. Show all 5 steps using the p-value method. 34. Honda advertises the 2018 Honda Civic as getting 32 mpg for city driving. A skeptical consumer about to purchase this model believes the mpg is less than the advertised amount and randomly selects 35 2018 Honda Civic owners and asks them what their car’s mpg is. Use a 1% significance level. They find a p-value of 0.0436. a) Choose the correct hypotheses. i. H0: μ = 32 H1: μ < 32 ii. H0: μ < 32 H1: μ ≥ 32 iii. H0: μ = 32 H1: μ > 32 iv. H0: μ = 35 H1: μ ≠ 35 v. H0: μ = 32 H1: μ ≠ 32 b) Choose the correct decision based off the reported p-value. i. Reject H0 ii. Do not reject H0 iii. Do not reject H1 iv. Reject H1 For exercises 35-40, show all 5 steps for hypothesis testing: a) State the hypotheses. b) Compute the test statistic. c) Compute the critical value or p-value. d) State the decision. e) Write a summary. 35. The total of individual pounds of garbage discarded by 17 households in one week is shown below. The current waste removal system company has a weekly maximum weight policy of 36 pounds. Test the claim that the average weekly household garbage weight is less than the company's weekly maximum. Use a 5% level of significance. 36. The world’s smallest mammal is the bumblebee bat (also known as Kitti’s hog-nosed bat or Craseonycteris thonglongyai). Such bats are roughly the size of a large bumblebee. A sample of 10 bats weighed in grams are shown below. Test the claim that mean weight for all bumblebee bats is not equal to 2.1 g using a 1% level of significance. Assume that the bat weights are normally distributed. 37. The average age of an adult's first vacation without a parent or guardian was reported to be 23 years old. A travel agent believes that the average age is different from what was reported. They sample 28 adults and they asked their age in years when they first vacationed as an adult without a parent or guardian, data shown below. Test the claim using a 10% level of significance. 38. Test the claim that the proportion of people who own dogs is less than 32%. A random sample of 1,000 people found that 28% owned dogs. Do the sample data provide convincing evidence to support the claim? Test the relevant hypotheses using a 10% level of significance. 39. The National Institute of Mental Health published an article stating that in any one-year period, approximately 9.3% of American adults suffer from depression or a depressive illness. Suppose that in a survey of 2,000 people in a certain city, 11.1% of them suffered from depression or a depressive illness. 
Conduct a hypothesis test to determine if the true proportion of people in that city suffering from depression or a depressive illness is more than the 9.3% in the general adult American population. Test the relevant hypotheses using a 5% level of significance. 40. The United States Department of Energy reported that 48% of homes were heated by natural gas. A random sample of 333 homes in Oregon found that 149 were heated by natural gas. Test the claim that the proportion of homes in Oregon that were heated by natural gas is different from what was reported. Use a 1% significance level. 41. A 2019 survey by the Bureau of Labor Statistics reported that 92% of Americans working in large companies have paid leave. In January 2021, a random survey of workers showed that 89% had paid leave. The resulting p-value is 0.009; thus, the null hypothesis is rejected. It is concluded that there has been a decrease in the proportion of people, who have paid leave from 2019 to January 2021. What type of error is possible in this situation? a) Type I Error b) Type II Error c) Standard Error d) Margin of Error e) No error was made. For exercises 42-44, show all 5 steps for hypothesis testing: a) State the hypotheses. b) Compute the test statistic. c) Compute the critical value or p-value. d) State the decision. e) Write a summary. 42. Nationwide 40.1% of employed teachers are union members. A random sample of 250 Oregon teachers showed that 110 belonged to a union. At $\alpha$ = 0.10, is there sufficient evidence to conclude that the proportion of union membership for Oregon teachers is higher than the national proportion? 43. You are conducting a study to see if the proportion of men over the age of 50 who regularly have their prostate examined is significantly less than 0.31. A random sample of 735 men over the age of 50 found that 208 have their prostate regularly examined. Do the sample data provide convincing evidence to support the claim? Test the relevant hypotheses using a 5% level of significance. 44. Nationally the percentage of adults that have their teeth cleaned by a dentist yearly is 64%. A dentist in Portland, Oregon believes that regionally the percent is higher. A sample of 2,000 Portlanders found that 1,312 had their teeth cleaned by a dentist in the last year. Test the relevant hypotheses using a 10% level of significance. Answer to Odd Numbered Exercises 1) b 3) a 5) ±2.5758 7) ±1.96 9) a) ii. b) viii. c) iv. d) vi. e) vii. f) i. g) v. h) iii. 11) 100(1 – α)% = Confidence Level 1 – β = Power β = P(Type II Error) µ = Parameter α = Significance Level 13) d 15) b 17) a 19) The implication of a Type I error from the clinical trial is that the vaccination will be approved when it indeed does not reduce the risk of contracting the virus. The implication of a Type II error from the clinical trial is that the vaccination will not be approved when it indeed does reduce the risk of contracting the virus. 21) The local government decides that the data do not provide convincing evidence of an average commute time higher than 30 minutes, when the true average commute time is in fact higher than 30 minutes. 23) a 25) 0.0123 27) H0: µ = 98.6; H1: µ > 98.6; z = 2.1639; p-value = 0.0152; Do not reject H0. There is not enough evidence to support the claim that the mean body temperature for all athletes is more than 98.6˚F. 29) ±2.1098 31) a) iii. b) i. 33) H0: µ = 842; H1: µ > 842; t = 0.8218; p-value = 0.2152; Do not reject H0. 
We do not have evidence to support the claim that the average calories from a fast food meal is higher than reported. 35) H0: µ = 36; H1: µ < 36; t = -1.9758; p-value = 0.0438; Reject H0. There is enough evidence to support the claim that the average weekly household garbage weight is less than the company’s weekly 36 lb. maximum. 37) H0: µ = 23; H1: µ ≠ 23; t = 1.4224; p-value = 0.1664; Do not reject H0. We do not have enough evidence to support the claim that the mean age adults travel without a parent or guardian differs from 23. 39) H0: p = 0.093; H1: p > 0.093; z = 2.7716; p-value = 0.0027; Reject H0. There is enough evidence to support the claim that the population proportion of American adults that suffer from depression or a depressive illness is more than 9.3%. 41) a 43) H0: p = 0.31; H1: p < 0.31; z = -1.5831; p-value = 0.0567; Do not reject H0. There is not enough evidence to support the claim that the population proportion of men over the age of 50 who regularly have their prostate examined is less than 0.31. 8.06: Chapter 8 Formulas Hypothesis Test for One Mean Use the z-test when σ is given. Use the t-test when s is given. If n < 30, the population needs to be normal. Type I Error: Reject H0 when H0 is true. Type II Error: Fail to reject H0 when H0 is false. Z-Test: H0: μ = μ0 H1: μ ≠ μ0 $Z=\frac{\bar{x}-\mu_{0}}{\left(\frac{\sigma}{\sqrt{n}}\right)}$ TI-84: Z-Test t-Test: H0: μ = μ0 H1: μ ≠ μ0 $t=\frac{\bar{x}-\mu_{0}}{\left(\frac{s}{\sqrt{n}}\right)}$ TI-84: T-Test z-Critical Values Excel: Two-tail: $z_{\alpha / 2}$ = NORM.INV(1–$\alpha$/2,0,1) Right-tail: $z_{1-\alpha}$ = NORM.INV(1–$\alpha$,0,1) Left-tail: $z_{\alpha}$ = NORM.INV($\alpha$,0,1) TI-84: Two-tail: $z_{\alpha / 2}$ = invNorm(1–$\alpha$/2,0,1) Right-tail: $z_{1-\alpha}$ = invNorm(1–$\alpha$,0,1) Left-tail: $z_{\alpha}$ = invNorm($\alpha$,0,1) t-Critical Values Excel: Two-tail: $t_{\alpha / 2}$ = T.INV(1–$\alpha$/2,df) Right-tail: $t_{1-\alpha}$ = T.INV(1–$\alpha$,df) Left-tail: $t_{\alpha}$ = T.INV($\alpha$,df) TI-84: Two-tail: $t_{\alpha / 2}$ = invT(1–$\alpha$/2,df) Right-tail: $t_{1-\alpha}$ = invT(1–$\alpha$,df) Left-tail: $t_{\alpha}$ = invT($\alpha$,df) Hypothesis Test for One Proportion H0: p = p0 H1: p ≠ p0 $Z=\frac{\hat{p}-p_{0}}{\sqrt{\left(\frac{p_{0} q_{0}}{n}\right)}}$ TI-84: 1-PropZTest Rejection Rules: • P-value method: reject H0 when the p-value ≤ $\alpha$. • Critical value method: reject H0 when the test statistic is in the critical region (shaded tails).
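For readers working outside of Excel or a TI-84, the critical-value formulas above translate directly into Python with scipy.stats (an optional equivalent; scipy assumed installed). The values of $\alpha$ = 0.05 and df = 12 below are used purely for illustration.

from scipy import stats

alpha, df = 0.05, 12
z_two = stats.norm.ppf(1 - alpha / 2)    # two-tailed z critical value, about 1.96
z_right = stats.norm.ppf(1 - alpha)      # right-tailed z critical value, about 1.645
z_left = stats.norm.ppf(alpha)           # left-tailed z critical value, about -1.645
t_two = stats.t.ppf(1 - alpha / 2, df)   # two-tailed t critical value, about 2.179
t_right = stats.t.ppf(1 - alpha, df)     # right-tailed t critical value, about 1.782
t_left = stats.t.ppf(alpha, df)          # left-tailed t critical value, about -1.782
print(z_two, z_right, z_left, t_two, t_right, t_left)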
There are many instances where researchers wish to compare two groups. A clinical trial may want to use a control group and an experimental group to see if a new medication is effective. Identical twin studies help geneticists learn more about inherited traits. Educators may want to test to see if there is a difference between before and after test scores. A farmer may wish to see if there is a difference between two types of fertilizer. A marketing firm may want to see if there is a preference between two different bottle designs. Hypothesis testing for two groups takes on similar steps as for one group. It is important to know if the two groups are dependent (related) or independent (not related). 09: Hypothesis Tests and Confidence Intervals for Two Populations Dependent samples, or matched pairs, occur when the subjects are paired up or matched in some way. Most often, this model is characterized by selection of a random sample where each member is observed under two different conditions (before/after some experiment), or where subjects that are similar (matched) to each other are studied under two different conditions. There are 3 types of hypothesis tests for comparing two dependent population means µ1 and µ2, where µD is the expected difference of the matched pairs. Note: If each pair were equal to one another, then the mean of the differences would be zero. We could also use this model to test against a nonzero difference, but we rarely cover that scenario; therefore, we usually test against a difference of zero. The t-test for dependent samples is a statistical test for comparing the means from two dependent populations (or the difference between the means from two populations). The t-test is used when the differences are normally distributed. The samples also must be dependent. The formula for the t-test statistic is: $t=\frac{\bar{D}-\mu_{D}}{\left(\frac{s_{D}}{\sqrt{n}}\right)}$, where the t-distribution has degrees of freedom df = n – 1 and n is the number of pairs. Note we will usually only use the case where µD equals zero. The subscript “D” denotes the difference between populations one and two. It is important to compute D = x1 – x2 for each pair of observations. However, this makes setting up the hypotheses more challenging for one-tailed tests. If we were looking for an increase in test scores from before to after, then we would expect the after score to be larger. When we take a smaller number minus a larger number, the difference would be negative. If we put the before group first and the after group second, then we would need a left-tailed test μD < 0 to test the “increase” in test scores. This is opposite of the sign we associate with “increase.” If we swap the order and use the after group first, then we would take a larger number minus a smaller number, which would be positive, and we would do a right-tailed test μD > 0. Always subtract in the same order the data are presented in the question. An easier way to decide on the one-tailed test is to write down the two labels and then put a less than (<) or greater than (>) symbol between them depending on the question. For example, if the research statement is that a weight-loss program significantly decreases the average weight, the sign of the test would change depending on which group came first. If we subtract before weight – after weight, then we would want to have before > after and use μD > 0. If we have the after weight as the first measurement, then we would subtract after weight – before weight and want after < before and use μD < 0. 
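To make the direction of the subtraction concrete, here is a tiny Python illustration with made-up (hypothetical) weights; it is only meant to show how the sign of the differences, and therefore the tail of the test, flips with the order of subtraction.

# Hypothetical before/after weights for three people on a weight-loss program
before = [200, 185, 192]
after = [195, 183, 190]
d1 = [b - a for b, a in zip(before, after)]   # before - after: [5, 2, 2], positive, so a decrease in weight pairs with H1: muD > 0
d2 = [a - b for b, a in zip(before, after)]   # after - before: [-5, -2, -2], negative, so the same claim pairs with H1: muD < 0
print(d1, d2)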
If you keep your labels in the same order as they appear in the question, compare them and carry this sign down to the alternative hypothesis. The traditional method (or critical value method), the p-value method, and the confidence interval method are performed with steps that are identical to those when performing hypothesis tests for one population. A dietician is testing to see if a new diet program reduces the average weight. They randomly sample 35 patients and weigh them before they start the program and then weigh them again after 2 months on the program. What are the correct hypotheses? Solution Let x1 = weight before the weight-loss program and x2 = weight after the weight-loss program. We want to test if, on average, participants lose weight. Therefore, the difference is D = x1 – x2. This gives D = before weight – after weight; thus, if on average people do lose weight, then in general before > after and the D’s are positive. How we define our differences determines that this example is a right-tailed test (carry the > sign down to the alternative hypothesis) and the correct hypotheses are: H0: µD = 0 H1: µD > 0 If we were to do the same problem but reverse the order and take D = after weight – before weight, the correct alternative hypothesis is H1: µD < 0 since after weight < before weight. Just be consistent throughout your problem, and never switch the order of the groups in a problem. P-Value Method Example In an effort to increase production of an automobile part, the factory manager decides to play music in the manufacturing area. Eight workers are selected, and the number of items each produced for a specific day is recorded. After one week of music, the same workers are monitored again. The data are given in the table. At $\alpha$ = 0.05, can the manager conclude that the music has increased production? Assume production is normally distributed. Use the p-value method.
Worker: 1 2 3 4 5 6 7 8
Before: 6 8 10 9 5 12 9 7
After: 10 12 9 12 8 13 8 10
Solution Assumptions: We are comparing production rates before and after music is played in the manufacturing area. We are given that the production rates are normally distributed. Because these are repeated measurements on the same workers, they are dependent samples, so we must use the t-test for matched pairs. Let population 1 be the number of items before the music, and population 2 be after. The claim is that music increases production, so before production < after production. Carry this same sign to the alternative hypothesis. The correct hypotheses are: H0: µD = 0; H1: µD < 0; this is a left-tailed test. In order to compute the t-test statistic, we must first compute the differences between each of the matched pairs.
Before (x1): 6 8 10 9 5 12 9 7
After (x2): 10 12 9 12 8 13 8 10
D = x1 – x2: –4 –4 1 –3 –3 –1 1 –3
Using 1-Var Stats on the differences in your calculator, we compute $\bar{D}=\bar{x}=-2$, sD = sx = 2.0702, n = 8. The test statistic is: $t=\frac{\bar{D}-\mu_{D}}{\left(\frac{s_{D}}{\sqrt{n}}\right)}=\frac{-2-0}{\left(\frac{2.0702}{\sqrt{8}}\right)}=-2.7325$. The p-value for a left-tailed t-test with degrees of freedom df = n – 1 = 7 is found by finding the area to the left of the test statistic –2.7325 using technology. Decision: Since the p-value = 0.0146 is less than $\alpha$ = 0.05, we reject H0. Summary: At the 5% level of significance, there is enough evidence to support the claim that the mean production rate increases when music is played in the manufacturing area. 
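The music example can also be checked in Python with scipy's paired t-test (optional; this assumes a reasonably recent version of scipy that supports the alternative argument):

from scipy import stats

before = [6, 8, 10, 9, 5, 12, 9, 7]
after = [10, 12, 9, 12, 8, 13, 8, 10]
# Paired (dependent) t-test on D = before - after; alternative='less' matches H1: muD < 0
t_stat, p_value = stats.ttest_rel(before, after, alternative='less')
print(t_stat, p_value)   # about -2.7325 and 0.0146, matching the worked solution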
TI-84: Find the differences between the sample pairs (you can subtract two lists to do this). Press the [STAT] key and then the [EDIT] function, enter the two samples into list one and list two, then store the differences in list three (for example, set L3 = L1 – L2). Press the [STAT] key, arrow over to the [TESTS] menu. Arrow down to the option [2:T-Test] and press the [ENTER] key. Arrow over to the [Data] menu and press the [ENTER] key. Then type in the hypothesized mean as 0, List: L3, leave Freq:1 alone, arrow over to the $\neq$, <, > sign that is the same in the problem’s alternative hypothesis statement, then press the [ENTER] key, arrow down to [Calculate] and press the [ENTER] key. The calculator returns the t-test statistic, the p-value, the mean of the differences $\bar{D}=\bar{x}$, and the standard deviation of the differences sD = sx.

TI-89: Find the differences between the sample pairs (you can subtract two lists to do this). Go to the [Apps] Stat/List Editor, enter the two data sets in lists 1 and 2. Move the cursor so that it is highlighted on the header of list3. Press [2nd] Var-Link and move down to list1 and press [Enter]. This brings the name list1 back to list3 at the bottom; select the minus [-] key, then select [2nd] Var-Link and this time highlight list2 and press [Enter]. You should now see list1-list2 at the bottom of the window. Press [Enter] and the differences will be stored in list3. Press [2nd] then F6 [Tests], select 2: T-Test. Select the [Data] menu. Then type in the hypothesized mean as 0, List: list3, Freq:1, arrow over to the $\neq$, <, > and select the sign that is the same in the problem’s alternative hypothesis, then press the [ENTER] key to calculate. The calculator returns the t-test statistic, p-value, $\bar{D}=\bar{x}$ and sD = sx.

Excel: Start by entering the data in two columns in the same order that they appear in the problem. Then select Data > Data Analysis > t-test: Paired Two Sample for Means, then select OK. Select the Before data (including the label) into the Variable 1 Range, and the After data (including the label) in the Variable 2 Range. Type in zero for the Hypothesized Mean Difference box. Select the box for Labels (do not select this if you do not have labels in the variable range selected). Change alpha to fit the problem. You can leave the default to open in a new worksheet or change the output range to be one cell where you want the top left of the output table to start (make sure this cell does not overlap any existing data). Then select OK. See below for an example. You get the following output:

One nice feature in Excel is that you get the p-value and the critical value in the output. The critical value can be taken from the Excel output; however, Excel never gives negative critical values. Since we are doing a left-tailed test, we will need to use the t-score = –1.8946. If we were to draw and shade the critical region for the sampling distribution, it would look like Figure 9-2. The decision is made by comparing the test statistic t = –2.7325 with the critical value t$\alpha$ = –1.8946. Since the test statistic is in the shaded critical region, we would reject H0. At the 5% level of significance, there is enough evidence to support the claim that the mean production rate increases when music is played in the manufacturing area. The decision and summary should not change from using the p-value method.

Confidence Interval Method A (1 – $\alpha$)*100% confidence interval for the difference between two population means with matched pairs: μD = mean of the differences.
$\bar{D}-t_{\alpha / 2}\left(\frac{s_{D}}{\sqrt{n}}\right)<\mu_{D}<\bar{D}+t_{\alpha / 2}\left(\frac{s_{D}}{\sqrt{n}}\right)$, or more compactly as $\bar{D} \pm t_{\alpha / 2}\left(\frac{s_{D}}{\sqrt{n}}\right)$, where the t-distribution has degrees of freedom df = n – 1 and n is the number of pairs.

Hands-On Cafe records the number of online orders for eight randomly selected locations for two consecutive days. Assume the number of online orders is normally distributed. Find the 95% confidence interval for the mean difference. Is there evidence of a difference in mean number of orders for the two days?

Location 1 2 3 4 5 6 7 8
Thursday 67 65 68 68 68 70 69 70
Friday 68 70 69 71 72 69 70 70

Solution First set up the hypotheses. We are testing to see if Thursday orders $\neq$ Friday orders. The hypotheses would be: H0: µD = 0 H1: µD ≠ 0.

Next, compute the $t_{\alpha / 2}$ critical value for a 95% confidence interval and df = 7. Use the t-distribution with technology using confidence level 95% and lower tail area of $\alpha$/2 = 0.025 to get $t_{\alpha / 2}$ = t0.025 = ±2.36462.

Compute the differences of Thursday – Friday for each pair.

Thursday 67 65 68 68 68 70 69 70
Friday 68 70 69 71 72 69 70 70
D –1 –5 –1 –3 –4 1 –1 0

Use technology to compute the mean, standard deviation and sample size. Note if you use a TI calculator then $\bar{D}=\bar{x}$ and sD = sx. Find the interval estimate: $\bar{D} \pm t_{\alpha / 2}\left(\frac{s_{D}}{\sqrt{n}}\right)$

\begin{aligned} &\Rightarrow-1.75 \pm 2.36462\left(\frac{2.05287}{\sqrt{8}}\right) \\ &\Rightarrow-1.75 \pm 1.7162 . \end{aligned}

Write the answer using standard notation –3.4662 < μD < –0.0338 or interval notation (–3.4662, –0.0338). For an interpretation of the interval, if we were to use the same sampling techniques, approximately 95 out of 100 times the confidence interval (–3.4662, –0.0338) would contain the population mean difference in the number of orders between Thursday and Friday. Since both endpoints are negative, we can be 95% confident that the population mean number of orders for Thursday is between 3.4662 and 0.0338 orders lower than Friday.

Excel: Type in both samples in two adjacent columns, and then subtract each pair in a third column and label the column Difference.

Thursday Friday Difference
67 68 =A2-B2
65 70 =A3-B3
68 69 =A4-B4
68 71 =A5-B5
68 72 =A6-B6
70 69 =A7-B7
69 70 =A8-B8
70 70 =A9-B9

Select Data > Data Analysis > Descriptive Statistics and click OK. Select the Difference column for the input range including the label, then check the box next to Labels in first row (do not select this box if you did not highlight a label in the input range). Use the default new worksheet or select a single cell for the Output Range where you want your top left-hand corner of the table to start. Check the boxes Summary Statistics and Confidence Level for Mean. Change the confidence level to fit the question, and then select OK. You get the following output: The confidence interval is the mean ± margin of error. In two different cells, subtract and then add the margin of error from the mean to get the confidence interval limits, and then put your answer in interval notation (–3.4662, –0.0338).

TI-84: First, find the differences between the samples. Then press the [STAT] key, arrow over to the [TESTS] menu, arrow down to the [8:TInterval] option and press the [ENTER] key. Arrow over to the [Data] menu and press the [ENTER] key. The defaults are List: L1, Freq:1.
If this is set with a different list, arrow down and use [2nd] [1] to get L1. Then type in the confidence level. Arrow down to [Calculate] and press the [ENTER] key. The calculator returns the confidence interval, $\bar{D}=\bar{x}$ and sD = sx. TI-89: First, find the differences between the samples. Go to the [Apps] Stat/List Editor, then enter the differences into list 1. Press [2nd] then F7 [Ints], then select 2: T-Interval. Select the [Data] menu. Enter in List: list1, Freq:1. Then type in the confidence level. Press the [ENTER] key to calculate. The calculator returns the confidence interval, $\bar{D}=\bar{x}$ and sD = sx.
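As a cross-check on the interval computed above, here is a short Python sketch (scipy.stats) that reproduces the matched-pairs confidence interval for the cafe example.

```python
import numpy as np
from scipy import stats

thursday = np.array([67, 65, 68, 68, 68, 70, 69, 70])
friday   = np.array([68, 70, 69, 71, 72, 69, 70, 70])

d = thursday - friday
n = len(d)
d_bar, s_d = d.mean(), d.std(ddof=1)        # -1.75 and about 2.0529

t_crit = stats.t.ppf(0.975, df=n - 1)       # about 2.3646 for 95% confidence
margin = t_crit * s_d / np.sqrt(n)          # about 1.716
print(d_bar - margin, d_bar + margin)       # about (-3.466, -0.034)
```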
This section will look at how to analyze a difference in the means for two independent samples. As with all other hypothesis tests and confidence intervals, the process is the same, though the formulas and assumptions are different. The symbol used for the population mean has been $\mu$ up to this point. In order to use formulas that compare the means from two populations, we use subscripts to show which population statistic or parameter we are referencing.

Parameters
• µ1 = population mean of population 1
• µ2 = population mean of population 2
• σ1 = population standard deviation of population 1
• σ2 = population standard deviation of population 2
• $\sigma_{1}^{2}$ = population variance of population 1
• $\sigma_{2}^{2}$ = population variance of population 2
• p1 = population proportion of population 1
• p2 = population proportion of population 2

Statistics
• $\bar{x}_{1}$ = mean of sample from population 1
• $\bar{x}_{2}$ = mean of sample from population 2
• s1 = standard deviation of sample from population 1
• s2 = standard deviation of sample from population 2
• $s_{1}^{2}$ = variance of sample from population 1
• $s_{2}^{2}$ = variance of sample from population 2
• $\hat{p}_{1}$ = proportion of sample from population 1
• $\hat{p}_{2}$ = proportion of sample from population 2
• n1 = sample size from population 1
• n2 = sample size from population 2

You do not need to use the subscripts 1 and 2. You can use a letter or symbol that helps you differentiate between the two groups. For instance, if you have two manufacturers labeled A and B, you may want to use µA and µB.

When setting up the null hypothesis we are testing if there is a difference in the two means equal to some known difference: H0: µ1 – µ2 = (µ1 – µ2)0. We will focus on the case where (µ1 – µ2)0 = 0, which says that, tentatively, we assume that there is no difference in population means, H0: µ1 – µ2 = 0. If we were to add μ2 to both sides of the equation µ1 – µ2 = 0, we would get µ1 = µ2. For instance, if the average age for group one was 25 and the average age for group two was also 25, then the difference between the two means would be 25 – 25 = 0. There are three ways to set up the hypotheses for comparing two independent population means µ1 and µ2. Figure 9-3

For a one-tailed test, one could alternatively write the null hypotheses as:
Right-tailed test: H0: µ1 ≤ µ2, H1: µ1 > µ2
Left-tailed test: H0: µ1 ≥ µ2, H1: µ1 < µ2

This text mostly will use an = sign in the null hypothesis. Most of the time the groups are numbered from the order in which their statistics or data appear in the problem. To keep the correct sign of the test, make sure you do not switch the order of the groups. For instance, if we were comparing the mean SAT score between high school juniors and seniors and our hypothesis is that the mean for seniors is higher, we could set up the alternative hypothesis as either µj < µs if we had the juniors be group 1, or µs > µj if we had the seniors be group 1. This change would switch the sign of both the test statistic and the critical value. When performing a one-tailed test, the sign of the test statistic and critical value will match most of the time. For example, if your test statistic came out to be z = –1.567 and your critical value was z = 1.645, you most likely have the incorrect order in your hypotheses.

When making a conjecture about population means, we have two different situations, depending on whether or not we know the population standard deviations, called the z-test and t-test, respectively.
Use Figure 9-4 to help decide when to use the z-test and t-test. Figure 9-4 Note that you should never use the value of σx on your calculator since you would rarely ever have an entire population of raw data to input into a calculator. The problem may give you raw data, but σ or σ2 would be stated in the problem and you should be using a z-test, otherwise use the t-test with the sample standard deviation sx. Usually, σ is known from a previous year or similar study. In either case if the sample sizes are below 30 we need to check that the population is approximately normally distributed for the Central Limit Theorem to hold. We can do this with a normal probability plot. Most examples that we deal with just assume the population is normally distributed, but in practice, you should always check these assumptions. 9.3.1 Two Sample Mean Z-Test & Confidence Interval The two-sample z-test is a statistical test for comparing the means from two independent populations with σ1 and σ2 stated in the problem and using the formula for the test statistic $z=\frac{\left(\bar{x}_{1}-\bar{x}_{2}\right)-\left(\mu_{1}-\mu_{2}\right)}{\sqrt{\left(\frac{\sigma_{1}^{2}}{n_{1}}+\frac{\sigma_{2}^{2}}{n_{2}}\right)}}$ Note that µ1 – µ2 is the hypothesized difference found in the null hypothesis and is usually zero. The traditional method (or critical value method), the p-value method, and the confidence interval method are performed with steps that are identical to those when performing hypothesis tests for one population. We will show an example of a two-sample z-test, but seldom in practice will we perform this type of test since we rarely have access to a population standard deviation. A university adviser wants to see whether there is a significant difference in ages of full-time students and part-time students. They select a random sample of 50 students from each group. The ages are shown below. At $\alpha$ = 0.05, decide if there is enough evidence to support the claim that there is a difference in the ages of the two groups. Assume the population standard deviation for full-time students is 3.68 years old and for part-time students is 4.7 years old. Use the p-value method. Full-time students 22 25 27 23 26 28 26 24 25 20 19 18 30 26 18 18 19 32 23 19 18 22 26 19 19 21 23 18 20 18 22 18 20 19 23 26 24 27 26 18 22 21 19 21 21 19 18 29 19 22 Part-time students 18 20 19 18 22 25 24 35 23 18 24 26 30 22 22 22 21 18 20 19 19 32 29 23 21 19 36 27 27 20 20 19 19 20 25 23 22 28 25 20 20 21 18 19 23 26 35 19 19 18 Solution Assumptions: The two populations we are sampling from are not necessarily normal, but the sample sizes are greater than 30, so the Central Limit Theorem holds. The population standard deviations σ1 and σ2 are known; therefore, we use the z-test for comparing two population means µ1 and µ2. The claim is that there is a difference in the ages of the two student groups. Let full-time students be population 1 and part-time students be population 2 (always go in the same order as the data are presented in the problem unless otherwise stated). Then µ1 would be the average age for full-time students and µ2 would be the average age for parttime students. The key phrase is difference: µ1 ≠ µ2. The correct hypotheses are H0: µ1 = µ2 H1: µ1 ≠ µ2. This is a two-tailed test and the claim is in the alternative hypothesis. Note that if we had decided to have population 1 be part-time students, the test statistic would be negated from that given below, but the p-value and result would be identical. 
In general, you should take population 1 as whatever group comes first in the problem. Using technology, we compute $\bar{x}_{1}$ = 22.12, $\bar{x}_{2}$ = 22.76, n1 = 50 and n2 = 50. From the problem we have σ1 = 3.68 and σ2 = 4.7. Since µ1 = µ2 then we know that µ1 – µ2 = 0, and that we do not use the sample standard deviations. The test statistic is: $Z=\frac{\left(\bar{x}_{1}-\bar{x}_{2}\right)-\left(\mu_{1}-\mu_{2}\right)_{0}}{\sqrt{\left(\frac{\sigma_{1}^{2}}{n_{1}}+\frac{\sigma_{2}^{2}}{n_{2}}\right)}}=\frac{(22.12-22.76)-0}{\sqrt{\left(\frac{3.68^{2}}{50}+\frac{4.7^{2}}{50}\right)}}=-0.7581$. The p-value for a two-tailed z-test is found by finding the area to the left (since z is negative) of the test statistic using a normal distribution and multiplying the area by two. Using the normalcdf(–∞,–0.7581, 0,1) we get an area of 0.2242. Since this is a two-tailed test we need to double the area, which gives a p-value = 0.4484. Note that if the z-score was positive, find the area to the right of z, then double. Decision: Because the p-value = 0.4484 is larger than $\alpha$ = 0.05, we do not reject H0. Summary: At the 5% level of significance, there is not enough evidence to support the claim that there is a difference in the ages of full-time students and part-time students. TI-84: Press the [STAT] key, arrow over to the [TESTS] menu, arrow down to the option [3:2-SampZTest] and press the [ENTER] key. Arrow over to the [Data] menu and press the [ENTER] key. Then type in the population standard deviations, the first sample mean and sample size, then the second sample mean and sample size, arrow over to the $\neq$, <, > sign that is the same in the problem’s alternative hypothesis statement, then press the [ENTER]key, arrow down to [Calculate] and press the [ENTER] key. The calculator returns the test statistic z and the p-value. TI-89: Go to the [Apps] Stat/List Editor, then press [2nd] then F6 [Tests], then select 3: 2-SampZ-Test. Then type in the population standard deviations, the first sample mean and sample size, then the second sample mean and sample size (or list names (list3 & list4), and Freq1:1 & Freq2:1), arrow over to the $\neq$, <, > sign that is the same in the problem’s alternative hypothesis statement then press the [ENTER] key to calculate. The calculator returns the z-test statistic and the p-value. Excel: Start by entering the data in two columns in the same order that they appear in the problem. Then select Data > Data Analysis > z-test: Two Sample for Means, then select OK. Click into the box next to Variable 1 Range and select the cells where the first data set is, including the label. Click into the box next to Variable 2 Range and select the cells where the second data set is, including the label. Type in zero for the hypothesized mean; this comes from the null hypothesis that if µ1 = µ2 then µ1 – µ2 = 0. Type in the variance for each group, and be careful with this step: the variance is the standard deviation squared $\sigma_{1}^{2}$ = 3.682 = 13.5424 and $\sigma_{2}^{2}$ = 4.72 = 22.09. Select the Label box only if you highlighted the label in the variable range box. Change alpha to fit the significance level given in the problem. The output range is one cell reference number where you want the top left-hand corner of your output table to start, or you can use the default to have your output open in a new worksheet. Then select Ok. See Excel output below. You get the following output in Excel: Note you can only use the Excel shortcut if you have the raw data. 
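The same test is quick to reproduce outside of Excel. Here is a sketch in Python (scipy.stats) using the summary values computed above for the student-age example; the 95% interval at the end simply applies the confidence interval formula given in the next part of this section.

```python
import numpy as np
from scipy import stats

xbar1, sigma1, n1 = 22.12, 3.68, 50   # full-time students
xbar2, sigma2, n2 = 22.76, 4.70, 50   # part-time students

se = np.sqrt(sigma1**2 / n1 + sigma2**2 / n2)
z = ((xbar1 - xbar2) - 0) / se                 # about -0.7581
p_value = 2 * stats.norm.cdf(-abs(z))          # two-tailed, about 0.4484
print(z, p_value)

# 95% confidence interval for mu1 - mu2 using the same standard error
z_crit = stats.norm.ppf(0.975)                 # 1.96
print((xbar1 - xbar2) - z_crit * se, (xbar1 - xbar2) + z_crit * se)
```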
If you have summarized data then you would need to do everything by hand. Two-Sample Z-Interval For independent samples, we take the mean of each sample, then take the difference in the means. If the means are equal, then the difference of the two means would be equal to zero. We can then compare the null hypothesis, that there is no difference in the means μ1 – μ2 = 0, with the confidence interval limits to decide whether to reject the null hypothesis. If zero is contained within the confidence interval, then we fail to reject H0. If zero is not contained within the confidence interval, then we reject H0. A (1 – $\alpha$)*100% confidence interval for the difference between two population means µ1 – µ2 : $\left(\bar{x}_{1}-\bar{x}_{2}\right) \pm z_{\alpha / 2} \sqrt{\left(\frac{\sigma_{1}^{2}}{n_{1}}+\frac{\sigma_{2}^{2}}{n_{2}}\right)}$ The requirements for the confidence interval are identical to the previous hypothesis test. Non-rechargeable alkaline batteries and nickel metal hydride (NiMH) batteries are tested, and their voltage is compared. The data follow. Test to see if there is a difference in the means using a 95% confidence interval. Assume that both variables are normally distributed. Solution First, set up the hypotheses H0: µ1 = µ2 H1: µ1 ≠ µ2. Next, find the $z_{\alpha / 2}$ critical value for a 95% confidence interval. Use technology to get $z_{\alpha / 2}$ = 1.96. Find the interval estimate (confidence interval): $\left(\bar{x}_{1}-\bar{x}_{2}\right) \pm z_{\alpha / 2} \sqrt{\left(\frac{\sigma_{1}^{2}}{n_{1}}+\frac{\sigma_{2}^{2}}{n_{2}}\right)}$ \begin{aligned} &\Rightarrow(9.2-8.8) \pm 1.96 \sqrt{\left(\frac{0.3^{2}}{27}+\frac{0.1^{2}}{30}\right)} \ &\Rightarrow \quad 0.4 \pm 0.1187 . \end{aligned} Use interval notation (0.2813, 0.5187) or standard notation 0.28 < µ1 – µ2 < 0.52. For an interpretation, if we were to use the same sampling techniques, approximately 95 out of 100 times the confidence interval (0.2813, 0.5187) would contain the population mean difference in voltage between alkaline and NiMH batteries. Since both endpoints are positive, we can reject H0. We can be 95% confident that the population mean voltage for alkaline batteries is between 0.28 and 0.52 volts higher than nickel metal hydride batteries. There is no shortcut option for a two-sample z confidence interval in Excel. TI-84: Press the [STAT] key, arrow over to the [TESTS] menu, arrow down to the option [9:2-SampZInt] and press the [ENTER] key. Arrow over to the [Stats] menu and press the [ENTER] key. Then type in the population standard deviations, the first sample mean and sample size, then the second sample mean and sample size, then enter the confidence level. Arrow down to [Calculate] and press the [ENTER] key. The calculator returns the confidence interval. TI-89: Go to the [Apps] Stat/List Editor, then press [2nd] then F5 [Ints], then select 3: 2-SampZInt. Then type in the population standard deviations, the first sample mean and sample size, then the second sample mean and sample size (or list names (list3 & list4), and Freq1:1 & Freq2:1), then enter the confidence level. To calculate press the [ENTER] key. The calculator returns the confidence interval. 9.3.2 Two Sample Mean T-Test & Confidence Interval The t-test is a statistical test for comparing the means from two independent populations. The t-test is used when σ1 and/or σ2 are both unknown. The samples must be independent and if the sample sizes are less than 30 then the populations need to be normally distributed. 
The t-test, as opposed to the z-test, for two independent samples has two different versions depending on if a particular assumption that the unknown population variances are unequal or equal. Since we do not know the true value of the population variances, we usually will use the first version and assume that the population variances are not equal $\sigma_{1}^{2} \neq \sigma_{2}^{2}$. Both versions are presented, so make sure to check with your instructor if you are using both versions. 9.3.2.a Unequal Variance Method t-Test If we assume the variances are unequal ($\sigma_{1}^{2} \neq \sigma_{2}^{2}$), the formula for the t test statistic is $t=\frac{\left(\bar{x}_{1}-\bar{x}_{2}\right)-\left(\mu_{1}-\mu_{2}\right)}{\sqrt{\left(\frac{s_{1}^{2}}{n_{1}}+\frac{s_{2}^{2}}{n_{2}}\right)}}$ Use the t-distribution where the degrees of freedom are $d f=\frac{\left(\frac{s_{1}^{2}}{n_{1}}+\frac{s_{2}^{2}}{n_{2}}\right)^{2}}{\left(\left(\frac{s_{1}^{2}}{n_{1}}\right)^{2}\left(\frac{1}{n_{1}-1}\right)+\left(\frac{s_{2}^{2}}{n_{2}}\right)^{2}\left(\frac{1}{n_{2}-1}\right)\right)}$. Note that µ1 – µ2 is the hypothesized difference found in the null hypothesis and is usually zero. Some older calculators only accept the df as an integer, in this case round the df down to the nearest integer if needed. For most technology, you would want to keep the decimal df. Some textbooks use an approximation for the df as the smaller of n1 – 1 or n2 – 1, so you may find a different answer using your calculator compared to examples found elsewhere. The traditional method (or critical value method), the p-value method, and the confidence interval method are performed with steps that are identical to those when performing hypothesis tests for one population. The sample sizes both need to be 30 or more, or the populations need to be approximately normally distributed in order for the Central Limit Theorem to hold. Two-Sample T-Interval For independent samples, we take the mean of each sample, then take the difference in the means. If the means are equal, then the difference of the two means would be equal to zero. We can then compare the null hypothesis, that there is no difference in the means μ1 – μ2 = 0, with the confidence interval limits to decide whether to reject the null hypothesis. If zero is contained within the confidence interval, then we fail to reject H0. If zero is not contained within the confidence interval, then we reject H0. A (1 – $\alpha$)*100% confidence interval for the difference between two population means μ1 – μ2 for independent samples with unequal variances: $\left(\bar{x}_{1}-\bar{x}_{2}\right) \pm t_{\alpha / 2} \sqrt{\left(\frac{s_{1}^{2}}{n_{1}}+\frac{s_{2}^{2}}{n_{2}}\right)}$. The requirements and degrees of freedom are identical to the above hypothesis test. The general United States adult population volunteer an average of 4.2 hours per week. A random sample of 18 undergraduate college students and 20 graduate college students indicated the results below concerning the amount of time spent in volunteer service per week. At $\alpha$ = 0.01 level of significance, is there sufficient evidence to conclude that a difference exists between the mean number of volunteer hours per week for undergraduate and graduate college students? Assume that number of volunteer hours per week is normally distributed. UndergraduateGraduate Sample Mean 2.5 3.8 Sample Variance 2.2 3.5 Sample Size 18 20 Solution Assumptions: The two populations we are comparing are undergraduate and graduate college students. 
We are given that the number of volunteer hours per week is normally distributed. We are told that the samples were randomly selected and should therefore be independent. We do not know the two population standard deviations (we only have the sample standard deviations as the square root of the sample variances), so we must use the t-test. Using the critical value method steps, we get the following. The question is asking if there is a difference between the mean number of volunteer hours per week for undergraduate and graduate level college students. We let population 1 be undergraduate students, and population 2 be graduate students. The correct hypotheses for a two-tailed test are: H0: µ1 = µ2 H1: µ1 ≠ µ2. The test statistic is $t=\frac{\left(\bar{x}_{1}-\bar{x}_{2}\right)-\left(\mu_{1}-\mu_{2}\right)}{\sqrt{\left(\frac{s_{1}^{2}}{n_{1}}+\frac{s_{2}^{2}}{n_{2}}\right)}}=\frac{(2.5-3.8)-0}{\sqrt{\left(\frac{2.2}{18}+\frac{3.5}{20}\right)}}=-2.3845$. The critical value for a two-tailed t-test with degrees of freedom is found by using tail area $\alpha$/2 = 0.005 with $df=\frac{\left(\frac{s_{1}^{2}}{n_{1}}+\frac{s_{2}^{2}}{n_{2}}\right)^{2}}{\left(\left(\frac{s_{1}^{2}}{n_{1}}\right)^{2}\left(\frac{1}{n_{1}-1}\right)+\left(\frac{s_{2}^{2}}{n_{2}}\right)^{2}\left(\frac{1}{n_{2}-1}\right)\right)}=\frac{\left(\frac{2.2}{18}+\frac{3.5}{20}\right)^{2}}{\left(\left(\frac{2.2}{18}\right)^{2}\left(\frac{1}{17}\right)+\left(\frac{3.5}{20}\right)^{2}\left(\frac{1}{19}\right)\right)}=35.0753$. Draw the curve and label the critical values. Use the invT function on your calculator to compute the critical value invT(.005,35.0753) = –2.724 (older calculators may require you to use a whole number, round down to df = 35), or use Excel =T.INV(0.005,35.0753) to compute the critical value. The test statistic is between the critical values –2.724 and 2.724, therefore do not reject H0. Figure 9-5 There is not enough evidence to suggest a difference between the population mean number of volunteer hours per week for undergraduate and graduate college students. Note that if we had decided to have population 1 be graduate students, the test statistic would be positive 2.3845, but this would not change our decision for a two-tailed test. If you are doing a one-tailed test, then you need to be consistent on which sign your test statistic has. Most of the time for a left-tailed test both the critical value and the test statistic will be negative and for a right-tailed test both the critical value and test statistic will be positive. TI-84: Press the [STAT] key, arrow over to the [TESTS] menu, arrow down to the option [4:2-SampTTest] and press the [ENTER] key. Arrow over to the [Stats] menu and press the [Enter] key. Enter the means, standard deviations, sample sizes, confidence level. Then arrow over to the not equal <, > sign that is the same in the problem’s alternative hypothesis statement, then press the [ENTER] key. Highlight the No option under Pooled for unequal variances. Arrow down to [Calculate] and press the [ENTER] key. The calculator returns the test statistic and the p-value. If you have raw data, press the [STAT] key and then the [EDIT] function, then enter the data into list one and list two. Press the [STAT] key, arrow over to the [TESTS] menu, arrow down to the option [4:2- SampTTest] and press the [ENTER] key. Arrow over to the [Data] menu and press the [ENTER] key. The defaults are List1: L1, List2: L2, Freq1:1, Freq2:1. 
If these are set differently, arrow down and use [2nd] [1] to get L1 and [2nd] [2] to get L2.

TI-89: Go to the [Apps] Stat/List Editor, then press [2nd] then F6 [Tests], then select 4: 2-SampT-Test. Enter the sample means, sample standard deviations, and sample sizes (or list names (list3 & list4), and Freq1:1 & Freq2:1). Then arrow over to the not equal, <, > and select the sign that is the same in the problem’s alternative hypothesis statement. Highlight the No option under Pooled. Press the [ENTER] key to calculate. The calculator returns the t-test statistic and the p-value.

A researcher is studying how much electricity (in kilowatt hours) households from two different cities use in their homes. Random samples of 17 days in Sacramento and 16 days in Portland are given below. Test to see if there is a difference using all 3 methods (critical value, p-value and confidence interval). Assume that electricity use is normally distributed and the population variances are unequal. Use $\alpha$ = 0.10.

Solution The populations are independent and normally distributed. The hypotheses for all 3 methods are: H0: µ1 = µ2 H1: µ1 ≠ µ2.

Use technology to find the sample means, standard deviations and sample sizes. Enter the Sacramento data into list 1, then do 1-Var Stats L1 and you should get $\bar{x}_{1}$ = 596.2353, s1 = 163.2362, and n1 = 17. Enter the Portland data into list 2, then do 1-Var Stats L2 and you should get $\bar{x}_{2}$ = 481.5, s2 = 179.3957, and n2 = 16.

The test statistic is $t=\frac{\left(\bar{x}_{1}-\bar{x}_{2}\right)-\left(\mu_{1}-\mu_{2}\right)_{0}}{\sqrt{\left(\frac{s_{1}^{2}}{n_{1}}+\frac{s_{2}^{2}}{n_{2}}\right)}}=\frac{(596.2353-481.5)-0}{\sqrt{\left(\frac{163.2362^{2}}{17}+\frac{179.3957^{2}}{16}\right)}}=1.9179$.

The $df=\frac{\left(\frac{s_{1}^{2}}{n_{1}}+\frac{s_{2}^{2}}{n_{2}}\right)^{2}}{\left(\left(\frac{s_{1}^{2}}{n_{1}}\right)^{2}\left(\frac{1}{n_{1}-1}\right)+\left(\frac{s_{2}^{2}}{n_{2}}\right)^{2}\left(\frac{1}{n_{2}-1}\right)\right)}=\frac{\left(\frac{163.2362^{2}}{17}+\frac{179.3957^{2}}{16}\right)^{2}}{\left(\left(\frac{163.2362^{2}}{17}\right)^{2}\left(\frac{1}{16}\right)+\left(\frac{179.3957^{2}}{16}\right)^{2}\left(\frac{1}{15}\right)\right)}=30.2598$.

The p-value would be double the area to the right of t = 1.9179. Using the TI calculator or Excel we get the p-value = 0.0646. Stop and see if you can find this p-value using the same process from previous sections. Since the p-value is less than alpha, we would reject H0. At the 10% level of significance, there is a statistically significant difference between the mean electricity use in Sacramento and Portland.

Excel: When you have raw data, you can use Excel to find all this information using the Data Analysis tool. Enter the data into Excel, then choose Data > Data Analysis > t-Test: Two Sample Assuming Unequal Variances. Enter the necessary information as we did in previous sections (see output below) and select OK. You can use this Excel shortcut only if you have raw data given in the question. We get the following output, which has both p-values and critical values.

Critical Value Method The hypotheses and test statistic steps do not change compared to the p-value method. Hypotheses: H0: µ1 = µ2 H1: µ1 ≠ µ2. Test Statistic: $t=\frac{\left(\bar{x}_{1}-\bar{x}_{2}\right)-\left(\mu_{1}-\mu_{2}\right)_{0}}{\sqrt{\left(\frac{s_{1}^{2}}{n_{1}}+\frac{s_{2}^{2}}{n_{2}}\right)}}=\frac{(596.2353-481.5)-0}{\sqrt{\left(\frac{163.2362^{2}}{17}+\frac{179.3957^{2}}{16}\right)}}=1.9179$ Compute the t critical values.
The degrees of freedom stay the same: $df=\frac{\left(\frac{163.2362^{2}}{17}+\frac{179.3957^{2}}{16}\right)^{2}}{\left(\left(\frac{163.2362^{2}}{17}\right)^{2}\left(\frac{1}{16}\right)+\left(\frac{179.3957^{2}}{16}\right)^{2}\left(\frac{1}{15}\right)\right)}=30.2598$ We can use the t Critical two-tail value given in the Excel output or use the TIcalculator invT(0.05,30.2598) = -1.697. Some older calculators do not let you use a decimal for df so round down and use invT(0.05,30). Figure 9-6. Figure 9-6 Since the test statistic is in the critical region, we would reject H0. This agrees with the same decision that we had using the p-value method. Summary: At the 10% level of significance, there is statistically significant difference between the mean electricity use between Sacramento and Portland. Confidence Interval Method The hypotheses are the same. The main difference is that we would find a confidence interval and compare H0: µ1 – µ2 = 0 with the endpoints to make the decision. Hypotheses: H0: µ1 = µ2 H1: µ1 ≠ µ2. Find the confidence interval. First, compute the $\mathrm{t}_{\alpha / 2}$ critical value for a 90% confidence interval since $\alpha$ = 0.10. Use $df=\frac{\left(\frac{s_{1}^{2}}{n_{1}}+\frac{s_{2}^{2}}{n_{2}}\right)^{2}}{\left(\left(\frac{s_{1}^{2}}{n_{1}}\right)^{2}\left(\frac{1}{n_{1}-1}\right)+\left(\frac{s_{2}^{2}}{n_{2}}\right)^{2}\left(\frac{1}{n_{2}-1}\right)\right)}=\frac{\left(\frac{163.2362^{2}}{17}+\frac{179.3957^{2}}{16}\right)^{2}}{\left(\left(\frac{163.236^{2}}{17}\right)^{2}\left(\frac{1}{16}\right)+\left(\frac{179.395^{2}}{16}\right)^{2}\left(\frac{1}{15}\right)\right)}=30.2598$. The critical value is $\mathrm{t}_{\alpha / 2}$ = invT(0.05,30.2598) = –1.697. The older TI-83 invT program only accepts integer df, use df =30. Alternatively, use the output from the Excel output under the t Critical two-tail row. Next, find the interval estimate $\left(\bar{x}_{1}-\bar{x}_{2}\right) \pm t_{\alpha / 2} \sqrt{\left(\frac{s_{1}^{2}}{n_{1}}+\frac{s_{2}^{2}}{n_{2}}\right)}$ \begin{aligned} &\Rightarrow(596.2353-481.5) \pm 1.697 \sqrt{\left(\frac{163.2362^{2}}{17}+\frac{179.3957^{2}}{16}\right)} \ &\Rightarrow \quad 114.7353 \pm 101.5203 . \end{aligned} Use interval notation (13.215, 216.2556) or standard notation 13.215 < μ1 – μ2 < 216.2556. Note the calculator does not round between steps and gives a more accurate answer of (13.23, 216.24). For an interpretation, if we were to use the same sampling techniques, approximately 90 out of 100 times a confidence interval with the same margin of error of (13.23, 216.24) would contain the population mean difference in electricity use between Sacramento and Portland. We are 90% confident that the population mean household electricity use for Sacramento is between 13.23 and 216.24 kilowatt hours more than Portland households. Since both endpoints are positive, zero would not be captured in the confidence interval so we would reject H0. Summary: At the 10% level of significance, there is statistically significant difference between the mean electricity use between Sacramento and Portland. All 3 methods should yield the same result. This text is only using the two-sided confidence interval. TI-84: Press the [STAT] key, arrow over to the [TESTS] menu, arrow down to the option [0:2-SampTInt] and press the [ENTER] key. Arrow over to the [Stats] menu and press the [Enter] key. Enter the means, standard deviations, sample sizes, confidence level. Highlight the No option under Pooled for unequal variances. 
Arrow down to [Calculate] and press the [ENTER] key. The calculator returns the confidence interval. Or (if you have raw data in list one and list two) press the [STAT] key and then the [EDIT] function, type the data into list one for sample one and list two for sample two. Press the [STAT] key, arrow over to the [TESTS] menu, arrow down to the option [0:2-SampTInt] and press the [ENTER] key. Arrow over to the [Data] menu and press the [ENTER] key. The defaults are List1: L1, List2: L2, Freq1:1, Freq2:1. If these are set differently, arrow down and use [2nd] [1] to get L1 and [2nd] [2] to get L2. Then type in the confidence level. Highlight the No option under Pooled for unequal variances. Arrow down to [Calculate] and press the [ENTER] key. The calculator returns the confidence interval.

TI-89: Go to the [Apps] Stat/List Editor, then press [2nd] then F5 [Ints], then select 4: 2-SampTInt. Enter the sample means, sample standard deviations, sample sizes (or list names (list3 & list4), and Freq1:1 & Freq2:1), confidence level. Highlight the No option under Pooled. Press the [ENTER] key to calculate. The calculator returns the confidence interval. If you have the raw data, select Data and enter the list names.

Summary Use the z-test only if the population variances (or standard deviations) are given in the problem. Most of the time we do not know these values and will use the t-test. A t-test is used for many applications. We use the t-test for a hypothesis test to see if there is a change in the mean between the groups for dependent samples. We can also use the t-test for a hypothesis test to see if there is a change in the mean for independent samples. Be careful which t-test you use, paying attention to the assumption that the variances are equal or not.

9.3.2.b Equal Variance Method t-Test

This method assumes that the two populations have approximately the same spread (equal variances). Be careful with this assumption, since both populations could be normally distributed and independent, but one population may be much more spread out (larger variance) than the other, in which case you would want to use the unequal variance version. For this text, we will state in the problem whether or not the populations’ variances (or standard deviations) are assumed equal. Also, be careful when deciding between the z-test and the t-test: just because we assume the population variances or standard deviations are equal does not mean we know their numeric values. We also need to assume the populations are normally distributed if either sample size is below 30. If we assume the variances are equal $\left(\sigma_{1}^{2}=\sigma_{2}^{2}\right)$, the formula for the t test statistic is

$t=\frac{\left(\bar{x}_{1}-\bar{x}_{2}\right)-\left(\mu_{1}-\mu_{2}\right)}{\sqrt{\left(\frac{\left(n_{1}-1\right) s_{1}^{2}+\left(n_{2}-1\right) s_{2}^{2}}{\left(n_{1}+n_{2}-2\right)}\right)\left(\frac{1}{n_{1}}+\frac{1}{n_{2}}\right)}}$

Use the t-distribution with pooled degrees of freedom df = n1 + n2 – 2. The value $s^{2}=\frac{\left(n_{1}-1\right) s_{1}^{2}+\left(n_{2}-1\right) s_{2}^{2}}{\left(n_{1}+n_{2}-2\right)}$ under the square root is called the pooled variance and is a weighted mean of the two sample variances, weighted on the corresponding sample sizes. In some textbooks, they may find the pooled variance first, then place it into the formula as $t=\frac{\left(\bar{x}_{1}-\bar{x}_{2}\right)-\left(\mu_{1}-\mu_{2}\right)}{\sqrt{\left(\frac{s^{2}}{n_{1}}+\frac{s^{2}}{n_{2}}\right)}}$.
Note: The df formula matches what your calculator gives you when you select Yes under the Pooled option. The traditional method (or critical value method), the p-value method, and the confidence interval method are performed with steps that are identical to those when performing hypothesis tests for one population.

TI-84: Press the [STAT] key, arrow over to the [TESTS] menu, arrow down to the option [4:2-SampTTest] and press the [ENTER] key. Arrow over to the [Stats] menu and press the [Enter] key. Enter the means, standard deviations, sample sizes, confidence level. Then arrow over to the $\neq$, <, > sign that is the same in the problem’s alternative hypothesis statement, then press the [ENTER] key. Highlight the Yes option under Pooled for equal variances. Arrow down to [Calculate] and press the [ENTER] key. The calculator returns the test statistic and the p-value. If you have raw data, press the [STAT] key and then the [EDIT] function, enter the data into list one and list two. Press the [STAT] key, arrow over to the [TESTS] menu, arrow down to the option [4:2-SampTTest] and press the [ENTER] key. Arrow over to the [Data] menu and press the [ENTER] key. The defaults are List1: L1, List2: L2, Freq1:1, Freq2:1. If these are set differently, arrow down and use [2nd] [1] to get L1 and [2nd] [2] to get L2.

TI-89: Go to the [Apps] Stat/List Editor, then press [2nd] then F6 [Tests], then select 4: 2-SampT-Test. Enter the sample means, sample standard deviations, and sample sizes (or list names (list3 & list4), and Freq1:1 & Freq2:1). Then arrow over to the not equal, <, > and select the sign that is the same in the problem’s alternative hypothesis statement. Highlight the Yes option under Pooled. Press the [ENTER] key to calculate. The calculator returns the t-test statistic and the p-value.

Two-Sample t-Interval Assuming Equal Variances

For independent samples, we take the mean of each sample, then take the difference in the means. If the means are equal, then the difference of the two means would be equal to zero. We can then compare the null hypothesis, that there is no difference in the means μ1 – μ2 = 0, with the confidence interval limits to decide whether to reject the null hypothesis. If zero is contained within the confidence interval, then we fail to reject H0. If zero is not contained within the confidence interval, then we reject H0. A (1 – $\alpha$)*100% confidence interval for the difference between two population means µ1 – µ2 for independent samples with equal variances:

$\left(\bar{x}_{1}-\bar{x}_{2}\right) \pm t_{\alpha / 2} \sqrt{\left(\left(\frac{\left(n_{1}-1\right) s_{1}^{2}+\left(n_{2}-1\right) s_{2}^{2}}{\left(n_{1}+n_{2}-2\right)}\right)\left(\frac{1}{n_{1}}+\frac{1}{n_{2}}\right)\right)}$

The requirements and degrees of freedom are identical to the above hypothesis test.

A manager believes that the average coffee sales at their Portland store are more than the average sales at their Cannon Beach store. They take a random sample of weekly sales from the two stores over the last year. Assume that the sales are normally distributed with equal variances. Use the p-value method with α = 0.05 to test the manager’s claim.

Solution Assumptions: The sample sizes are both less than 30, but the problem states that the populations are normally distributed. We are testing two means. We do not have population standard deviations or variances given in the problem, so this will be a t-test, not a z-test.
The sales at each store are independent and the problem states that we are assuming $\sigma_{1}^{2}=\sigma_{2}^{2}$. Set up the hypotheses, where group 1 is Portland, and group 2 is Cannon Beach. We want to test if the Portland mean > Cannon Beach mean, so carry this sign down to the alternative hypothesis to get a right-tailed test: H0: µ1 = µ2 H1: µ1 > µ2.

Use technology to compute the sample means, standard deviations and sample sizes to get the following test statistic.

$t=\frac{\left(\bar{x}_{1}-\bar{x}_{2}\right)-\left(\mu_{1}-\mu_{2}\right)_{0}}{\sqrt{\left(\frac{\left(n_{1}-1\right) s_{1}^{2}+\left(n_{2}-1\right) s_{2}^{2}}{\left(n_{1}+n_{2}-2\right)}\right)\left(\frac{1}{n_{1}}+\frac{1}{n_{2}}\right)}}=\frac{(3776.9959-3384.0908)-0}{\sqrt{\left(\left(\frac{(16 * 2864304.884+21 * 1854752.617)}{(17+22-2)}\right)\left(\frac{1}{17}+\frac{1}{22}\right)\right)}}=0.8038$

The df = n1 + n2 – 2 = 37. To find the p-value, use the TI calculator DISTR menu with tcdf(0.8038,1E99,37), or in Excel use =1-T.DIST(0.8038,37,TRUE) = 0.2133. The p-value = 0.2133 is larger than $\alpha$ = 0.05, therefore we do not reject H0. There is not enough evidence to conclude that the mean sales at the Portland store are higher than at the Cannon Beach store.

Excel: Follow the same steps with the Data Analysis tool, except choose the t-Test: Two-Sample Assuming Equal Variances. Enter the necessary information as we did in previous sections (see output below) and select OK. You can only use this Excel shortcut if you have raw data given in the question. You get the following output: When reading the Excel output for a z or t-test, be careful with your signs.
• For a left-tailed t-test the critical value will be negative.
• For a right-tailed t-test the critical value will be positive.
• For a two-tailed t-test the critical values will be ±(critical value).

TI-84: Press the [STAT] key, arrow over to the [TESTS] menu, arrow down to the option [0:2-SampTInt] and press the [ENTER] key. Arrow over to the [Stats] menu and press the [Enter] key. Enter the means, sample standard deviations, sample sizes, confidence level. Highlight the Yes option under Pooled for equal variances. Arrow down to [Calculate] and press the [ENTER] key. The calculator returns the confidence interval. Or (if you have raw data in list one and list two) press the [STAT] key and then the [EDIT] function, type the data into list one for sample one and list two for sample two. Press the [STAT] key, arrow over to the [TESTS] menu, arrow down to the option [0:2-SampTInt] and press the [ENTER] key. Arrow over to the [Data] menu and press the [ENTER] key. The defaults are List1: L1, List2: L2, Freq1:1, Freq2:1. If these are set differently, arrow down and use [2nd] [1] to get L1 and [2nd] [2] to get L2. Then type in the confidence level. Highlight the Yes option under Pooled for equal variances. Arrow down to [Calculate] and press the [ENTER] key. The calculator returns the confidence interval.

TI-89: Go to the [Apps] Stat/List Editor, then press [2nd] then F5 [Ints], then select 4: 2-SampTInt. Enter the sample means, sample standard deviations, sample sizes (or list names (list3 & list4) and Freq1:1 & Freq2:1), confidence level. Highlight the Yes option under Pooled. Press the [ENTER] key to calculate. The calculator returns the confidence interval. If you have the raw data, select Data and enter the list names.
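Both versions of the independent-samples t-test can also be run from summary statistics. The sketch below (Python, scipy.stats) reproduces the unequal-variance volunteer-hours test and the equal-variance coffee-sales test from this section; note that ttest_ind_from_stats expects standard deviations, so the sample variances are square-rooted first.

```python
import numpy as np
from scipy import stats

# Unequal-variance (Welch) test: undergraduate vs. graduate volunteer hours
res_welch = stats.ttest_ind_from_stats(
    mean1=2.5, std1=np.sqrt(2.2), nobs1=18,
    mean2=3.8, std2=np.sqrt(3.5), nobs2=20,
    equal_var=False)                                    # t about -2.38, two-tailed p about 0.02
print(res_welch)

# Pooled (equal-variance) test: Portland vs. Cannon Beach coffee sales
x1, v1, n1 = 3776.9959, 2864304.884, 17
x2, v2, n2 = 3384.0908, 1854752.617, 22
sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)   # pooled variance
t = (x1 - x2) / np.sqrt(sp2 * (1 / n1 + 1 / n2))        # about 0.8038
p_right = stats.t.sf(t, df=n1 + n2 - 2)                 # right-tail p-value, about 0.2133
print(t, p_right)
```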
This section will look at how to analyze a difference in the proportions for two independent samples. As with all other hypothesis tests and confidence intervals, the process of testing is the same, though the formulas and assumptions are different. There are three types of hypothesis tests for comparing the difference in two population proportions p1 – p2, see Figure 9-7. Note that for our purposes, the hypothesized difference is p1 – p2 = 0. We could also use a variant of this model to test for a magnitude difference where p1 – p2 ≠ 0, but we will not cover that scenario.

The z-test is a statistical test for comparing the proportions from two populations. It can be used when the samples are independent, $n_{1} \hat{p}_{1}$ ≥ 10, $n_{1} \hat{q}_{1}$ ≥ 10, $n_{2} \hat{p}_{2}$ ≥ 10, and $n_{2} \hat{q}_{2}$ ≥ 10. The formula for the z-test statistic is:

$z=\frac{\left(\hat{p}_{1}-\hat{p}_{2}\right)-\left(p_{1}-p_{2}\right)}{\sqrt{\left(\hat{p} \cdot \hat{q}\left(\frac{1}{n_{1}}+\frac{1}{n_{2}}\right)\right)}}$

Where $\hat{p}=\frac{\left(x_{1}+x_{2}\right)}{\left(n_{1}+n_{2}\right)}=\frac{\left(\hat{p}_{1} \cdot n_{1}+\hat{p}_{2} \cdot n_{2}\right)}{\left(n_{1}+n_{2}\right)}, \quad \hat{q}=1-\hat{p}, \quad \hat{p}_{1}=\frac{x_{1}}{n_{1}}, \hat{p}_{2}=\frac{x_{2}}{n_{2}}$.

The pooled proportion $\hat{p}$ is a weighted mean of the proportions and $\hat{q}$ is the complement of $\hat{p}$. Some texts or software may use different notation for the pooled proportion; note that $\hat{p}=\bar{p}$.

A vice principal wants to see if there is a difference between the number of students who are late to class for the first class of the day compared to the student’s class right after lunch. To test the claim that there is a difference in the proportion of late students between first and after-lunch classes, the vice principal randomly selects 200 students from first class and records if they are late, then randomly selects 200 students in their class after lunch and records if they are late. At the 0.05 level of significance, can a difference be concluded?

First Class: Sample Size 200, Number of late students 13
After Lunch Class: Sample Size 200, Number of late students 16

Solution Assumptions: We are comparing the proportion of late students for the first class and the class after lunch. The number of “successes” and “failures” from each population must be at least 10 ($n_{1} \hat{p}_{1}$ = 13 ≥ 10, $n_{1} \hat{q}_{1}$ = 187 ≥ 10, $n_{2} \hat{p}_{2}$ = 16 ≥ 10, and $n_{2} \hat{q}_{2}$ = 184 ≥ 10). We must assume that the samples were independent.

Using the Traditional Method The claim is that there is a difference between the proportions of late students. Let population 1 be the first class, and population 2 be the class after lunch. Our claim would then be p1 ≠ p2. The correct hypotheses are: H0: p1 = p2 H1: p1 ≠ p2.

Compute the $z_{\alpha / 2}$ critical values. Draw and label the sampling distribution. Use the inverse normal function invNorm(0.025,0,1) to get $z_{\alpha / 2}$ = ±1.96. See Figure 9-8.

In order to compute the test statistic, we must first compute the following proportions:

$\begin{array}{ll} \hat{p}=\frac{\left(x_{1}+x_{2}\right)}{\left(n_{1}+n_{2}\right)}=\frac{(13+16)}{(200+200)}=0.0725 & \hat{q}=1-\hat{p}=1-0.0725=0.9275 \\ \hat{p}_{1}=\frac{x_{1}}{n_{1}}=\frac{13}{200}=0.065 & \hat{p}_{2}=\frac{x_{2}}{n_{2}}=\frac{16}{200}=0.08 \end{array}$

The test statistic is $z=\frac{\left(\hat{p}_{1}-\hat{p}_{2}\right)-\left(p_{1}-p_{2}\right)}{\sqrt{\left(\hat{p} \cdot \hat{q}\left(\frac{1}{n_{1}}+\frac{1}{n_{2}}\right)\right)}}=\frac{(0.065-0.08)}{\sqrt{\left(0.0725 \cdot 0.9275\left(\frac{1}{200}+\frac{1}{200}\right)\right)}}=-0.5784$.
Decision: Because the test statistic is between the critical values, we do not reject H0.

Summary: There is not enough evidence to support any difference in the proportion of students that are late for their first class compared to the class after lunch.

TI-84: Press the [STAT] key, arrow over to the [TESTS] menu, arrow down to the option [6:2-PropZTest] and press the [ENTER] key. Type in the x1, n1, x2, and n2, arrow over to the $\neq$, <, > sign that is the same in the problem’s alternative hypothesis statement, then press the [ENTER] key, arrow down to [Calculate] and press the [ENTER] key. The calculator returns the z-test statistic and the p-value.

TI-89: Go to the [Apps] Stat/List Editor, then press [2nd] then F6 [Tests], then select 6: 2-PropZTest. Type in the x1, n1, x2, and n2, arrow over to the $\neq$, <, > and select the sign that is the same in the problem’s alternative hypothesis statement. Press the [ENTER] key to calculate. The calculator returns the z-test statistic, sample proportions, pooled proportion, and the p-value.

Two Proportions Z-Interval

A 100(1 – $\alpha$)% confidence interval for the difference between two population proportions p1 – p2:

$\left(\hat{p}_{1}-\hat{p}_{2}\right)-z_{\alpha / 2} \sqrt{\left(\frac{\hat{p}_{1} \hat{q}_{1}}{n_{1}}+\frac{\hat{p}_{2} \hat{q}_{2}}{n_{2}}\right)}<p_{1}-p_{2}<\left(\hat{p}_{1}-\hat{p}_{2}\right)+z_{\alpha / 2} \sqrt{\left(\frac{\hat{p}_{1} \hat{q}_{1}}{n_{1}}+\frac{\hat{p}_{2} \hat{q}_{2}}{n_{2}}\right)}$

Or more compactly as $\left(\hat{p}_{1}-\hat{p}_{2}\right) \pm z_{\alpha / 2} \sqrt{\left(\frac{\hat{p}_{1} \hat{q}_{1}}{n_{1}}+\frac{\hat{p}_{2} \hat{q}_{2}}{n_{2}}\right)}$

The requirements are identical to the 2-proportion hypothesis test. Note that the standard error does not rely on a hypothesized proportion, so do not use a confidence interval to make decisions based on a hypothesis statement.

Find the 95% confidence interval for the difference in the proportion of late students in their first class and the proportion who are late to their class after lunch.

First Class: Sample Size 200, Number of late students 13
After Lunch Class: Sample Size 200, Number of late students 16

Solution First, compute the following:

$\hat{p}_{1}=\frac{x_{1}}{n_{1}}=\frac{13}{200}=0.065 \quad \hat{q}_{1}=1-\hat{p}_{1}=1-0.065=0.935$
$\hat{p}_{2}=\frac{x_{2}}{n_{2}}=\frac{16}{200}=0.08 \quad \hat{q}_{2}=1-\hat{p}_{2}=1-0.08=0.92$

Find the $z_{\alpha / 2}$ critical value. Use the inverse normal to get $z_{\alpha / 2}$ = 1.96. Now substitute the numbers into the interval estimate:

$\left(\hat{p}_{1}-\hat{p}_{2}\right) \pm z_{\frac{\alpha}{2}} \sqrt{\left(\frac{\hat{p}_{1} \hat{q}_{1}}{n_{1}}+\frac{\hat{p}_{2} \hat{q}_{2}}{n_{2}}\right)}$

\begin{aligned} &\Rightarrow(0.065-0.08) \pm 1.96 \sqrt{\left(\frac{0.065 \cdot 0.935}{200}+\frac{0.08 \cdot 0.92}{200}\right)} \\ &\Rightarrow \quad-0.015 \pm 0.0508 \\ &\Rightarrow \quad(-0.0658,0.0358) . \end{aligned}

Use interval notation (–0.0658, 0.0358) or standard notation –0.0658 < p1 – p2 < 0.0358. Note that we can have negative numbers here since we are taking the difference of two proportions. Since p1 – p2 = 0 is in the interval, we are 95% confident that there is no difference in the proportion of late students between their first class or those who are late for their class after lunch.

TI-84: Press the [STAT] key, arrow over to the [TESTS] menu, arrow down to the option [2-PropZInterval] and press the [ENTER] key.
Type in the x1, n1, x2, n2, the confidence level, then press the [ENTER] key, arrow down to [Calculate] and press the [ENTER] key. The calculator returns the confidence interval. TI-89: Go to the [Apps] Stat/List Editor, then press [2nd] then F7 [Ints], then select 6: 2-PropZInt. Type in the x1, n1, x2, n2, the confidence level, then press the [ENTER] key to calculate. The calculator returns the confidence interval.
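A sketch of the same two-proportion calculations in Python (scipy.stats), using the late-student counts from this section. The pooled proportion is used for the test statistic and the unpooled standard error for the interval, matching the formulas above; the p-value line is simply the two-tailed area for the z statistic computed here.

```python
import numpy as np
from scipy import stats

x1, n1 = 13, 200   # late students, first class
x2, n2 = 16, 200   # late students, class after lunch

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)                     # 0.0725

# Hypothesis test (H1: p1 != p2) uses the pooled standard error
se_pool = np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se_pool                            # about -0.578
p_value = 2 * stats.norm.cdf(-abs(z))              # two-tailed, about 0.56
print(z, p_value)

# 95% confidence interval uses the unpooled standard error
se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
z_crit = stats.norm.ppf(0.975)
print((p1 - p2) - z_crit * se, (p1 - p2) + z_crit * se)   # about (-0.066, 0.036)
```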
9.5.1 The F-Distribution An F-distribution is another special type of distribution for a continuous random variable. Properties of the F-distribution density curve: • Right skewed. • F-scores cannot be negative. • The spread of an F-distribution is determined by the degrees of freedom of the numerator, and by the degrees of freedom of the denominator. The df are usually determined by the sample sizes of the two populations or number of groups. • The total area under the curve is equal to 1 or 100%. The shape of the distribution curve changes when the degrees of freedom change. Figure 9-9 shows examples of F-distributions with different degrees of freedom. Figure 9-9 We will use the F-distribution in several types of hypothesis testing. For now, we are just learning how to find the critical value and probability using the F-distribution. Use the TI-89 Distribution menu; or in Excel F.INV to find the critical values for the F-distribution for tail areas only, depending on the degrees of freedom. When finding a probability given an F-score, use the calculator Fcdf function under the DISTR menu or in Excel use F.DIST. Note that the TI-83 and TI-84 do not come with the INVF function, but you may be able to find the program online or from your instructor. Alternatively, use the calculator at https://homepage.divms.uiowa.edu/~mbognar/applets/f.html which will also graph the distribution for you and shade in one tail at a time. You will see the shape of the F-distribution change in the following examples depending on the degrees of freedom used. For your own sketch just make sure you have a positively skewed distribution starting at zero. The critical values F$\alpha$/2 and F1–$\alpha$/2 are for a two-tailed test on the F-distribution curve with area 1 – $\alpha$ between the critical values as shown in Figure 9-10. Note that the distribution starts at zero, is positively skewed, and never has negative F-scores. Figure 9-10 Compute the critical values F$\alpha$/2 and F1–$\alpha$/2 with df1 = 6 and df2 = 14 for a two-tailed test, $\alpha$ = 0.05. Solution Start by drawing the curve and finding the area in each tail. For this case, it would be an area of $\alpha$/2 in each tail. Then use technology to find the F-scores. Most technology only asks for the area to the left of the F-score you are trying to find. In Excel the function for F$\alpha$/2 is F.INV(area in left-tail,df1,df2). There is only one function, so use areas 0.025 and 0.975 in the left tail. For this example, we would have critical values F0.025 = F.INV(0.025,6,14) = 0.1888 and F0.975 = F.INV(0.975,6,14) = 3.5014. See Figure 9-11. Figure 9-11 We have to calculate two distinct F-scores unlike symmetric distribution where we could just do ±z-score or ±t-score. Note if you were doing a one-tailed test then do not divide alpha by two and use area = $\alpha$ for a left-tailed test and area = 1 – $\alpha$ for a right-tailed test. Find the critical value for a right-tailed test with denominator degrees of freedom of 12 and numerator degrees of freedom of 2 with a 5% level of significance. Solution Draw the curve and shade in the top 5% of the upper tail since $\alpha$ = 0.05, see Figure 9-12. When using technology, you will need the area to the left of the critical value that you are trying to find. This would be 1 – $\alpha$ = 0.95. Then identify the degrees of freedom. The first degrees of freedom are the numerator df, therefore df1 = 2. The second degrees of freedom are the denominator df, therefore df2 = 12. 
Using Excel, we would have =F.INV(0.95,2,12) = 3.8853. Figure 9-12 Compute P(F > 3.894), with df1 = 3 and df2 = 18. Solution In Excel, use the function F.DIST(x,deg_freedom1,deg_freedom2,cumulative). Always use TRUE for the cumulative. The F.DIST function will find the probability (area) below F. Since we want the area above F, we need to use the complement rule. The formula would be =1-F.DIST(3.894,3,18,TRUE) = 0.0263. TI-84: The TI-84 calculator has a built-in F-distribution. Press [2nd] [DISTR] (this is F5: DISTR in the STAT app in the TI-89), then arrow down until you get to the Fcdf and press [Enter]. Depending on your calculator, you may not get a prompt for the boundaries and df. If you just see Fcdf( then you will need to enter the lower boundary, upper boundary, df1, and df2 with a comma between each argument. The lower boundary is 3.894 and the upper boundary is infinity (TI-83 and 84 use a really large number instead of ∞), then enter the two degrees of freedom. Press [Paste] and then [Enter]; this will put Fcdf(3.894,1E99,3,18) on your screen, and then press [Enter] again to calculate the value. See Figure 9-13. Figure 9-13 9.5.2 Hypothesis Test for Two Variances Sometimes we will need to compare the variation or standard deviation between two groups. For example, let's say that the average delivery time for two locations of the same company is the same, but we hear complaints of inconsistent delivery times for one location. We can use an F-test to see if the standard deviations for the two locations are different. There are three types of hypothesis tests for comparing the ratio of two population variances $\sigma_{1}^{2} / \sigma_{2}^{2}$, see Figure 9-14. Figure 9-14 If we take the square root of the variance, we get a standard deviation. Therefore, taking the square root of both sides of the hypotheses, we can also use the same test for standard deviations. We use the following notation for the hypotheses. There are three types of hypothesis tests for comparing the population standard deviations σ1/σ2, see Figure 9-15. Figure 9-15 The F-test is a statistical test for comparing the variances or standard deviations from two populations. The formula for the test statistic is $F=\frac{s_{1}^{2}}{s_{2}^{2}}$, with numerator degrees of freedom Ndf = n1 – 1 and denominator degrees of freedom Ddf = n2 – 1. This test may only be used when both populations are independent and normally distributed. Important: This F-test is not robust (a statistic is called "robust" if it still performs reasonably well even when the necessary conditions are not met). In particular, this F-test demands that both populations be normally distributed even for larger sample sizes. This F-test yields unreliable results when this condition is not met. The traditional method (or critical value method) and the p-value method are performed with steps that are identical to those when performing hypothesis tests from previous sections. A researcher claims that IQ scores of university students vary less than (have a smaller variance than) IQ scores of community college students. Based on a sample of 28 university students, the sample standard deviation was 10, and for a sample of 25 community college students, the sample standard deviation was 12. Test the claim using the traditional method of hypothesis testing with a level of significance $\alpha$ = 0.05. Assume that IQ scores are normally distributed. Solution 1.
The claim is "IQ scores of university students (Group 1) have a smaller variance than IQ scores of community college students (Group 2)." This is a left-tailed test; therefore, the hypotheses are: \begin{aligned} &H_{0}: \sigma_{1}^{2}=\sigma_{2}^{2} \ &H_{1}: \sigma_{1}^{2}<\sigma_{2}^{2} \end{aligned}. 2. We are using the F-test because we are performing a test about two population variances. We can use the F-test only if we assume that both populations are normally distributed. We will assume that the selection of each of the student groups was independent. The problem gives us s1 = 10, n1 = 28, s2 = 12, and n2 = 25. The formula for the test statistic is $F=\frac{s_{1}^{2}}{s_{2}^{2}}=\frac{10^{2}}{12^{2}}=0.6944$. 3. The critical value for a left-tailed test with a level of significance $\alpha$ = 0.05 is found using the invF program or Excel. See Figure 9-16. Using Excel: The critical value is $F_{\alpha}$ = F.INV(0.05,27,24) = 0.5182. Figure 9-16 4. Decision: Compare the test statistic F = 0.6944 with the critical value $F_{\alpha}$ = 0.5182, see Figure 9-16. Since the test statistic is not in the rejection region, we do not reject H0. 5. Summary: There is not enough evidence to support the claim that the IQ scores of university students have a smaller variance than IQ scores of community college students. A random sample of 20 graduate college students and 18 undergraduate college students indicated these results concerning the amount of time spent in volunteer service per week: the graduate students had a sample mean of 3.8 hours, a sample variance of 3.5, and a sample size of 20; the undergraduate students had a sample mean of 2.5 hours, a sample variance of 2.2, and a sample size of 18. At the $\alpha$ = 0.01 level of significance, is there sufficient evidence to conclude that graduate students have a higher standard deviation of the number of volunteer hours per week compared to undergraduate students? Assume that the number of volunteer hours per week is normally distributed. Solution Assumptions: The two populations we are comparing are graduate and undergraduate college students. We are given that the number of volunteer hours per week is normally distributed. We are told that the samples were randomly selected and should therefore be independent. Using the Traditional Method 1. We are trying to determine whether the standard deviation of the number of volunteer hours per week for graduate students (Group 1) is larger than that of undergraduate students (Group 2), or σ1 > σ2. Therefore, the hypotheses are: \begin{aligned} &\mathrm{H}_{0}: \sigma_{1}=\sigma_{2} \ &\mathrm{H}_{1}: \sigma_{1}>\sigma_{2} \end{aligned} 2. We are given that $s_{1}^{2}$ = 3.5, $s_{2}^{2}$ = 2.2, n1 = 20 and n2 = 18. Note that variances were given, so do not square the numbers again. The test statistic is $F=\frac{s_{1}^{2}}{s_{2}^{2}}=\frac{3.5}{2.2}=1.5909$. 3. Draw and label the distribution with the critical value for a right-tailed F-test with numerator degrees of freedom = n1 – 1 = 19, and with denominator degrees of freedom = n2 – 1 = 17. See Figure 9-17. Use right-tail area $\alpha$ = 0.01 in Excel: $F_{1-\alpha}$ = F.INV.RT(0.01,19,17) = 3.1857. Figure 9-17 4. Decision: Since the test statistic F = 1.5909 is below the critical value 3.1857 and therefore not in the rejection region, we do not reject H0. 5. Summary: There is not enough evidence to support the claim that the population standard deviation of the number of volunteer hours per week for graduate college students is higher than that of undergraduate college students. Using the p-value method 1. Step 1 remains the same.
Therefore, the hypotheses are: \begin{aligned} &\mathrm{H}_{0}: \sigma_{1}=\sigma_{2} \ &\mathrm{H}_{1}: \sigma_{1}>\sigma_{2} \end{aligned} 2. Step 2 remains the same. The test statistic is $F=\frac{s_{1}^{2}}{s_{2}^{2}}=\frac{3.5}{2.2}=1.5909$. 3. Compute the p-value using either the Fcdf on the calculator or Excel. If your test statistic is less than 1, then find the area to the left of the test statistic; if F is above 1, then find the area to the right of the test statistic. If you have a two-tailed test, then double your tail area. TI: Fcdf(lower,upper,df1,df2) = Fcdf(1.5909,1E99,19,17). Excel: =F.DIST.RT(1.5909,19,17) = 0.1704. 4. Decision: Since the p-value = 0.1704 is greater than $\alpha$ = 0.01, we "Do Not Reject H0." 5. Step 5, the summary remains the same. There is not enough evidence to support the claim that the population standard deviation of the number of volunteer hours per week for graduate college students is higher than that of undergraduate college students. Alternatively, use the following 2-Sample F-test shortcut on the TI calculator. TI-84: Press the [STAT] key, arrow over to the [TESTS] menu, arrow down to the option [E:2-SampFTest] and press the [ENTER] key. Arrow over to the [Stats] menu and press the [Enter] key. Then type in the s1, n1, s2, n2, arrow over to the $\neq$, <, > sign that is the same in the problem's alternative hypothesis statement, then press the [ENTER] key, arrow down to [Calculate] and press the [ENTER] key. The calculator returns the test statistic F and the p-value. Note: You have to put the standard deviation in the calculator, not the variance. TI-89: Go to the [Apps] Stat/List Editor, then push 2nd then F6 [Tests], then select 9: 2-SampFTest. Then type in the s1, n1, s2, n2 (or list names list1 & list2), select the sign $\neq$, <, > that is the same in the problem's alternative hypothesis statement, press the [ENTER] key to calculate. The calculator returns the F-test statistic and the p-value. A researcher is studying the variability in electricity (in kilowatt hours) people from two different cities use in their homes. Random samples of 17 days in Sacramento and 16 days in Portland are given below. Test to see if there is a difference in the variance of electricity use between the two cities at α = 0.10. Assume that electricity use is normally distributed; use the p-value method. Solution The populations are independent and normally distributed. The hypotheses are \begin{aligned} &\mathrm{H}_{0}: \sigma_{1}^{2}=\sigma_{2}^{2} \ &\mathrm{H}_{1}: \sigma_{1}^{2} \neq \sigma_{2}^{2} \end{aligned} Use technology to compute the standard deviations and sample sizes. Enter the Sacramento data into list 1, then do 1-Var Stats L1 and you should get s1 = 163.2362 and n1 = 17. Enter the Portland data into list 2, then do 1-Var Stats L2 and you should get s2 = 179.3957 and n2 = 16. Alternatively, use Excel's descriptive statistics. The test statistic is $F=\frac{s_{1}^{2}}{s_{2}^{2}}=\frac{163.2362^{2}}{179.3957^{2}}=0.82796$. The p-value would be double the area to the left of F = 0.82796 (use double the area to the right if the test statistic is > 1). Using the TI calculator: Fcdf(0,0.82796,16,15). In Excel we get the p-value =2*F.DIST(0.82796,16,15,TRUE) = 0.7106. Since the p-value is greater than alpha, we would fail to reject H0. There is no statistically significant difference in the variance of electricity use between Sacramento and Portland. Excel: When you have raw data, you can use Excel to find all this information using the Data Analysis tool.
Enter the data into Excel, then choose Data > Data Analysis > F-Test: Two Sample for Variances. Enter the necessary information as we did in previous sections (see below) and select OK. Note that Excel only does a one-tail F-test, so use $\alpha$/2 = 0.10/2 = 0.05 in the Alpha box. We get the following output. Note that you can only use the critical value in Excel for a left-tail test. Excel for some reason only does the smaller tail area for the F-test, so you will need to double the p-value for a two-tailed test: p-value = 0.355275877*2 = 0.7106.
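The same two-variance F-test can also be carried out outside of the TI calculators and Excel steps shown above. The sketch below is a minimal illustration in Python using the SciPy library (not a tool used in this text); the function name two_variance_f_test is just an illustrative label, and the numbers plugged in come from the volunteer-hours example worked above.

```python
# Minimal sketch of the two-variance F-test, assuming SciPy is available.
from scipy import stats

def two_variance_f_test(s1_sq, s2_sq, n1, n2, tail="right"):
    """F test statistic and p-value for H0: sigma1^2 = sigma2^2."""
    F = s1_sq / s2_sq              # test statistic F = s1^2 / s2^2
    df1, df2 = n1 - 1, n2 - 1      # numerator and denominator degrees of freedom
    if tail == "right":            # H1: sigma1^2 > sigma2^2
        p = stats.f.sf(F, df1, df2)
    elif tail == "left":           # H1: sigma1^2 < sigma2^2
        p = stats.f.cdf(F, df1, df2)
    else:                          # two-tailed: double the smaller tail area
        p = 2 * min(stats.f.cdf(F, df1, df2), stats.f.sf(F, df1, df2))
    return F, p

# Volunteer-hours example: s1^2 = 3.5, s2^2 = 2.2, n1 = 20, n2 = 18, right-tailed test.
F, p = two_variance_f_test(3.5, 2.2, 20, 18, tail="right")
print(round(F, 4), round(p, 4))    # about 1.5909 and 0.1704, matching the worked example
```

Critical values can be found in a similar way with stats.f.ppf(area to the left, df1, df2), which plays the same role as F.INV in Excel.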
Chapter 9 Exercises For exercises 1-6, show all 5 steps for hypothesis testing: a) State the hypotheses. b) Compute the test statistic. c) Compute the critical value or p-value. d) State the decision. e) Write a summary. 1. An adviser is testing out a new online learning module for a placement test. Test the claim that on average the new online learning module increased placement scores at a significance level of $\alpha$ = 0.05. For the context of this problem, $\mu_{D} = \mu_{Before} - \mu_{After}$ where the first data set represents the after test scores and the second data set represents the before test scores. Assume the population is normally distributed. You obtain the following paired sample of 19 students that took the placement test before and after the learning module. 2. A veterinary nutritionist developed a diet for overweight dogs. The total volume of food consumed remains the same, but half of the dog food is replaced with a low-calorie "filler" such as green beans. Ten overweight dogs were randomly selected from her practice and were put on this program. Their initial weights were recorded, and then the same dogs were weighed again after 4 weeks. At the 0.01 level of significance, can it be concluded that the dogs lost weight? Use the following computer output to answer the following questions. Assume the populations are normally distributed and the groups are dependent. 3. A manager wishes to see if the time (in minutes) it takes for their workers to complete a certain task will decrease when they are allowed to wear earbuds at work. A random sample of 20 workers' times was collected before and after. Test the claim that the time to complete the task has decreased at a significance level of $\alpha$ = 0.01. For the context of this problem, $\mu_{D} = \mu_{Before} - \mu_{After}$ where the first data set represents the before measurement and the second data set represents the after measurement. Assume the population is normally distributed. You obtain the following sample data. 4. A physician wants to see if there was a difference in the average smoker's daily cigarette consumption after wearing a nicotine patch. The physician sets up a study to track daily smoking consumption. They give the patients a placebo patch that did not contain nicotine for 4 weeks, then a nicotine patch for the following 4 weeks. Use the following computer output to test to see if there was a difference in the average smoker's daily cigarette consumption using $\alpha$ = 0.01. 5. A researcher is testing reaction times between the dominant and non-dominant hand. They randomly start with different hands for 20 subjects and their reaction times for both hands are recorded in milliseconds. Use the following computer output to test to see if the reaction time is faster for the dominant hand using a 5% level of significance. 6. A manager wants to see if it is worth going back for an MBA degree. They randomly sample 18 managers' salaries before and after undertaking an MBA degree and record their salaries in thousands of dollars. Assume salaries are normally distributed. Use the following computer output to test the claim that the MBA degree, on average, increases a manager's salary. Use a 10% level of significance. 7. Doctors developed an intensive intervention program for obese patients with heart disease. Subjects with a BMI of 30 kg/m² or more with heart disease were assigned to a three-month lifestyle change of diet and exercise. Patients' Left Ventricle Ejection Fraction (LVEF) is measured before and after the intervention.
Assume that LVEF measurements are normally distributed. a) Find the 95% confidence interval for the mean of the differences. b) Using the confidence interval answer, did the intensive intervention program significantly increase the mean LVEF? Explain why. For exercises 8-14, show all 5 steps for hypothesis testing: a) State the hypotheses. b) Compute the test statistic. c) Compute the critical value or p-value. d) State the decision. e) Write a summary. 8. In a study that followed a group of students who graduated from high school in 1997, each was monitored on the progress made toward earning a bachelor's degree. The group was divided in two – those who started at community college and later transferred to a four-year college, and those that started out in a four-year college as freshmen. The data below summarize the findings. Is there evidence to suggest that community college transfer students take longer to earn a bachelor's degree? Use $\alpha$ = 0.05. 9. A liberal arts college in New Hampshire implemented an online homework system for their introductory math courses and wanted to know whether the system improved test scores. In the Fall semester, homework was completed with pencil and paper, checking answers in the back of the book. In the Spring semester, homework was completed online – giving students instant feedback on their work. The results are summarized below. Population standard deviations were used from past studies. Is there evidence to suggest that the online system improves test scores? Use $\alpha$ = 0.05. 10. Researchers conducted a study to measure the effectiveness of the drug Adderall on patients diagnosed with ADHD. A total of 112 patients with ADHD were randomly split into two groups. Group 1 included 56 patients and they were each given a dose of 15 mg of Adderall daily. The 56 patients in Group 2 were given a daily placebo. The effectiveness of the drug was measured by the patients' scores on a behavioral test. Higher scores indicate more ADHD symptoms. Group 1 was found to have a mean improvement of 9.3 points and Group 2 had a mean improvement of 11.7 points. From past studies, the population standard deviation of both groups is known to be 6.5 points. Is there evidence to suggest the patients taking Adderall have improved the mean ADHD symptoms? Test at the 0.01 level of significance. 11. In Major League Baseball, the American League (AL) allows a designated hitter (DH) to bat in place of the pitcher, but in the National League (NL), the pitcher has to bat. However, when an AL team is the visiting team for a game against an NL team, the AL team must abide by the home team's rules and thus, the pitcher must bat. A researcher is curious if an AL team would score more runs for games in which the DH was used. She samples 20 games for an AL team for which the DH was used, and 20 games for which there was no DH. The data are below. Assume the population is normally distributed with a population standard deviation for runs scored of 2.54. Is there evidence to suggest that the AL team would score more runs for games in which the DH was used? Use $\alpha$ = 0.10. 12. The mean speeds (mph) of fastball pitches from two different left-handed baseball pitchers are to be compared. A sample of 14 fastball pitches is measured from each pitcher. The populations have normal distributions. Scouts believe that Brandon Eisert pitches a speedier fastball. Test the scouts' claim that Eisert's mean speed is faster at the 5% level of significance. 13.
A physical therapist believes that at 30 years old adults begin to decline in flexibility and agility. To test this, they randomly sample 35 of their patients who are less than 30 years old and 32 of their patients who are 30 or older and measure each patient's flexibility in the Sit-and-Reach test. The results are below. Is there evidence to suggest that adults under the age of 30 are more flexible? Use $\alpha$ = 0.05. 14. Two groups of students are given a problem-solving test, and the results are compared. Test the hypotheses that there is a difference in the test scores using the p-value method with $\alpha$ = 0.05. Assume the populations are normally distributed. 15. A survey found that the average daily cost to rent a car in Los Angeles is $103.24 and in Las Vegas is $97.24. The data were collected from two random samples of 40 in each of the two cities and the population standard deviations are $5.98 for Los Angeles and $4.21 for Las Vegas. At the 0.05 level of significance, construct a confidence interval for the difference in the means and then decide if there is a significant difference in the rates between the two cities using the confidence interval method. For exercises 16-26, show all 5 steps for hypothesis testing: a) State the hypotheses. b) Compute the test statistic. c) Compute the critical value or p-value. d) State the decision. e) Write a summary. 16. In a random sample of 50 Americans five years ago, the average credit card debt was $5,798 with a standard deviation of $1,154. In a random sample of 50 Americans in the present day, the average credit card debt is $6,511 with a standard deviation of $1,645. Using a 0.05 level of significance, test if there is a difference in credit card debt today versus five years ago. Assume the population variances are unequal. 17. A movie theater company wants to see if there is a difference in the average movie ticket sales in San Diego and Portland per week. They sample 20 sales from San Diego and 20 sales from Portland over a week. Test the claim using a 5% level of significance. Assume the population variances are unequal, the samples are independent, and that movie sales are normally distributed. 18. A researcher is curious what year in college students make use of the gym at a university. They take a random sample of 30 days and count the number of sophomores and seniors who use the gym each day. Is there evidence to suggest that a difference exists in gym usage based on year in college? Construct a confidence interval for the data below to decide. Use $\alpha$ = 0.10. Assume the population variances are unequal. 19. A national food product company believes that it sells more frozen pizza during the winter months than during the summer months. Weekly samples of sales found the following statistics in volume of sales (in hundreds of pounds). Use $\alpha$ = 0.10. Use the p-value method to test the company's claim. Assume the populations are approximately normally distributed with unequal variances. 20. You are testing the claim that the mean GPA of students who take evening classes is less than the mean GPA of students who only take day classes. You sample 20 students who take evening classes, and the sample mean GPA is 2.74 with a standard deviation of 0.86. You sample 25 students who only take day classes, and the sample mean GPA is 2.86 with a standard deviation of 0.54. Test the claim using a 10% level of significance. Assume the population standard deviations are unequal and that GPAs are normally distributed. 21.
"Durable press" cotton fabrics are treated to improve their recovery from wrinkles after washing. "Wrinkle recovery angle" measures how well a fabric recovers from wrinkles. Higher scores are better. Here are data on the wrinkle recovery angle (in degrees) for a random sample of fabric specimens. Assume the populations are approximately normally distributed with unequal variances. A manufacturer believes that the mean wrinkle recovery angle for Hylite is better. A random sample of 20 Permafresh (group 1) and 25 Hylite (group 2) were measured. Test the claim using a 10% level of significance. 22. A large fitness center manager wants to test the claim that the mean delivery time for REI is faster than the delivery time for Champs Sports. The manager randomly samples 30 REI delivery times and finds a mean of 3.05 days with a standard deviation of 0.75 days. The manager randomly selects 30 Champs Sports delivery times and finds a mean delivery time of 3.262 days with a standard deviation of 0.27 days. Test the claim using a 5% level of significance. Assume the populations variances are unequal. 23. Two competing fast food restaurants advertise that they have the fastest wait time from when you order to when you receive your meal. A curious critic takes a random sample of 40 customers at each restaurant to test the claim. They find that Restaurant A has a sample mean wait time of 2.25 minutes with a standard deviation of 0.35 minutes and Restaurant B has a sample mean wait time of 2.15 minutes with a standard deviation of 0.57 minutes in wait time. Can they conclude that the mean wait time is significantly different for the two restaurants? Test at $\alpha$ = 0.05. Assume the population variances are unequal. 24. The manager at a pizza place has been getting complaints that the auto-fill soda machine is either under filling or over filling their cups. The manager took a random sample of 20 fills from her machine, and a random sample of 20 fills from another branch of the restaurant that has not been having complaints. From her machine, she found a sample mean of 11.5 oz. with a standard deviation of 1.3 oz. and from the other restaurant’s machine she found a sample mean of 10.95 oz. with a standard deviation of 0.65 oz. At the 0.05 level of significance, does it seem her machine has a significantly different mean than the other machine? Use the confidence interval method. Assume the populations are normally distributed with unequal variances. 25. A new over-the-counter medicine to treat a sore throat is to be tested for effectiveness. The makers of the medicine take two random samples of 25 individuals showing symptoms of a sore throat. Group 1 receives the new medicine and Group 2 receives a placebo. After a few days on the medicine, each group is interviewed and asked how they would rate their comfort level 1-10 (1 being the most uncomfortable and 10 being no discomfort at all). The results are below. Is there sufficient evidence to conclude the mean scores from Group 1 is more than Group 2? Test at $\alpha$ = 0.01. Assume the populations are normally distributed and have unequal variances. 26. In a random sample of 60 pregnant women with preeclampsia, their systolic blood pressure was taken right before beginning to push during labor. The mean systolic blood pressure was 174 with a standard deviation of 12. 
In another random sample of 80 pregnant women without preeclampsia, there was a mean systolic blood pressure of 133 and a standard deviation of 8 when the blood pressure was also taken right before beginning to push. Is there sufficient evidence to conclude that women with preeclampsia have a higher mean blood pressure in the late stages of labor? Test at the 0.01 level of significance. Assume the population variances are unequal. 28. An employee at a large company is told that the mean starting salary at her company differs based on level of experience. The employee is skeptical and randomly samples 30 new employees with less than 5 years of experience and categorizes them as Group 1 and 30 new employees with 5 years of experience or more and categorizes them as Group 2. In Group 1, she finds the sample mean starting salary to be $50,352 with a standard deviation of $4,398.10. Group 2 has a sample mean starting salary of $52,391 with a standard deviation of $7,237.32. Test her claim at the 0.10 level of significance. Use the confidence interval method. Assume the populations are normally distributed with unequal variances. 29. Two random samples are taken from private and public universities (out-of-state tuition) around the nation. The yearly tuition is recorded from each sample and the results can be found below. Find the 95% confidence interval for the mean difference between private and public institutions. Assume the populations are normally distributed and have unequal variances. For exercises 30-33, show all 5 steps for hypothesis testing: a) State the hypotheses. b) Compute the test statistic. c) Compute the critical value or p-value. d) State the decision. e) Write a summary. 30. A professor wants to know if there is a difference in comprehension of a lab assignment among students depending if the instructions are given all in text, or if they are given primarily with visual illustrations. She randomly divides her class into two groups of 15, gives one group instructions in text and the second group instructions with visual illustrations. The following data summarizes the scores the students received on a test given after the lab. Assume the populations are normally distributed with equal variances. Is there evidence to suggest that a difference exists in the comprehension of the lab based on the test scores? Use $\alpha$ = 0.10. 31. A large shoe company is interested in knowing if the amount of money a customer is willing to pay on a pair of shoes is different depending on location. They take a random sample of 50 single-pair purchases from Southern states and another random sample of 50 single-pair purchases from Midwestern states and record the cost for each. The results can be found below. At the 0.05 level of significance, is there evidence that the mean cost differs between the Midwest and the South? Assume the population variances are equal. 32. The manager at a local coffee shop is trying to decrease the time customers wait for their orders. He wants to find out if keeping multiple registers open will make a difference. He takes a random sample of 30 customers when only one register is open and finds that they wait an average of 6.4 minutes to reach the front with a standard deviation of 1.34 minutes. He takes another random sample of 35 customers when two registers are open and finds that they wait an average of 4.2 minutes to reach the front with a standard deviation of 1.21 minutes. He takes both his samples during peak hours to maintain consistency.
Can it be concluded at the 0.05 level of significance that the mean wait time is less with two registers open? Assume the population variances are equal. 33. The CEO of a large manufacturing company is curious if there is a difference in productivity level of her warehouse employees based on the region of the country the warehouse is located in. She randomly selects 35 employees who work in warehouses on the East Coast and 35 employees who work in warehouses in the Midwest and records the number of parts shipped out from each for a week. She finds that the East Coast group ships an average of 1,287 parts and a standard deviation of 348. The Midwest group ships an average of 1,449 parts and a standard deviation of 298. Using a 0.01 level of significance, test if there is a difference in productivity level. Assume the population variances are equal. 34. In a random sample of 100 college students, 47 were sophomores and 53 were seniors. The sophomores reported spending an average of $37.03 per week going out for food and drinks with a standard deviation of $7.23, while the seniors reported spending an average of $52.94 per week going out for food and drinks with a standard deviation of $12.33. Find the 90% confidence interval for the difference in the mean amount spent on food and drinks between sophomores and seniors. Assume the population variances are equal. 35. A pet store owner believes that dog owners, on average, spend a different amount on their pets compared to cat owners. The owner randomly records the sales of 40 customers who said they only owned dogs and found the mean of the sales of $56.07 with a standard deviation of $24.50. The owner randomly records the sales of 40 customers who said they only owned cats and found a mean of the sales of $52.92 with a standard deviation of $23.53. Find the 95% confidence interval to test the pet store owner's claim. Assume the population variances are equal. For exercises 36-42, show all 5 steps for hypothesis testing: a) State the hypotheses. b) Compute the test statistic. c) Compute the critical value or p-value. d) State the decision. e) Write a summary. 36. A researcher wants to see if there is a difference in the proportion of on-time flights for two airlines. Test the claim using $\alpha$ = 0.10. 37. A random sample of 406 college freshmen found that 295 bought most of their textbooks from the college's bookstore. A random sample of 772 college seniors found that 537 bought their textbooks from the college's bookstore. You wish to test the claim that the proportion of all freshmen who purchase most of their textbooks from the college's bookstore is greater than the proportion of all seniors at a significance level of $\alpha$ = 0.01. 38. To determine whether various antismoking campaigns have been successful, annual surveys are conducted. Randomly selected individuals are asked whether they smoke. The responses for this year had 163 out of 662 who smoked. Ten years ago, the survey found 187 out of 695 who smoked. Can we infer that the proportion of smokers has declined from 10 years ago? Use $\alpha$ = 0.10. 39. TDaP is a booster shot that prevents Diphtheria, Tetanus, and Pertussis in adults and adolescents. The shot should be administered every 8 years in order for it to remain effective. A random sample of 500 people living in a town that experienced a pertussis outbreak this year were divided into two groups. Group 1 was made up of 132 individuals who had not had the TDaP booster in the past 8 years, and Group 2 consisted of 368 individuals who had.
In Group 1, 15 individuals caught pertussis during the outbreak, and in Group 2, 11 individuals caught pertussis. Is there evidence to suggest that the proportion of individuals who caught pertussis and were not up to date on their booster shot is significantly higher than those that were? Test at the 0.05 level of significance. 40. The makers of a smartphone have received complaints that the facial recognition tool often does not work, or takes multiple attempts to finally unlock the phone. The company upgraded to a new version and is claiming the tool has improved. To test the claim, a critic takes a random sample of 75 users of the old version (Group 1) and 80 users of the new version (Group 2). They find that the facial recognition tool works on the first try 56% of the time in the old version and 70% of the time in the new version. Can it be concluded that the new version is performing better? Test at $\alpha$ = 0.10. 41. In a sample of 80 faculty from Portland State University, it was found that 90% were union members, while in a sample of 96 faculty at University of Oregon, 75% were union members. Find the 95% confidence interval for the difference in the proportions of faculty that belong to the union for the two universities. 42. A random sample of 54 people who live in a city were selected and 16 identified as a "dog person." A random sample of 84 people who live in a rural area were selected and 34 identified as a "dog person." Test the claim that the proportion of people who live in a city and identify as a "dog person" is significantly different from the proportion of people who live in a rural area and identify as a "dog person" at the 10% significance level. Use the confidence interval method. 43. What is the critical value for a right-tailed F-test with a 5% level of significance with df1 = 4 and df2 = 33? Round answer to 4 decimal places. 44. What is the critical value for a right-tailed F-test with a 1% level of significance with df1 = 3 and df2 = 55? Round answer to 4 decimal places. 45. What is the critical value for a left-tailed F-test with a 10% level of significance with df1 = 29 and df2 = 20? Round answer to 4 decimal places. 46. What are the critical values for a two-tailed F-test with a 1% level of significance with df1 = 31 and df2 = 10? Round answer to 4 decimal places. For exercises 47-61, show all 5 steps for hypothesis testing: a) State the hypotheses. b) Compute the test statistic. c) Compute the critical value or p-value. d) State the decision. e) Write a summary. 47. A researcher wants to compare the variances of the heights (in inches) of four-year college basketball players with those of players in junior colleges. A sample of 30 players from each type of school is selected, and the variances of the heights for each type are 2.43 and 3.15 respectively. At $\alpha$ = 0.10, test to see if there is a significant difference between the variances of the heights in the two types of schools. 48. The marketing manager for a minor league baseball team suspects that there is a greater variance in game attendance during the spring months (April and May) than in the summer months (June, July, August). They take a random sample of 15 games in the spring and find that there is a mean attendance of 7,543 with a standard deviation of 87.4. In another random sample of 20 games in the summer, they find a mean attendance of 8,093 with a standard deviation of 56.2. Can the manager conclude that there is a greater variance in attendance in the spring? Test at $\alpha$ = 0.05.
Assume the populations are normally distributed. 49. A researcher takes sample temperatures in Fahrenheit of 17 days from New York City and 18 days from Phoenix. Test the claim that the standard deviation of temperatures in New York City is different from the standard deviation of temperatures in Phoenix. Use a significance level of $\alpha$ = 0.05. Assume the populations are approximately normally distributed. You obtain the following two samples of data. 50. Two random samples are taken from private and public universities (out-of-state tuition) around the nation. The yearly tuition is recorded from each sample and the results can be found below. Private colleges are typically more expensive than public schools; however, a student is curious if the variance is different between the two. Can it be concluded that the variances of private and public tuition differ at the 0.05 level of significance? Assume the populations are normally distributed. 51. Two competing fast food restaurants advertise that they have the fastest wait time from when you order to when you receive your meal. A curious critic takes a random sample of 40 customers at each restaurant and finds that there is no statistically significant difference in the average wait time between the two restaurants. Both restaurants are, in fact, advertising truthfully. However, as a skeptical statistician, this critic knows that a high standard deviation may also keep a customer waiting for a long time on any given trip to the restaurant, so they test for the difference in standard deviation of wait time from this same sample. They find that Restaurant A has a sample standard deviation of 0.35 minutes and Restaurant B has a sample standard deviation of 0.57 minutes in wait time. Can they conclude that the standard deviation in wait time is significantly longer for Restaurant B? Test at $\alpha$ = 0.05. 52. A new over-the-counter medicine to treat a sore throat is to be tested for effectiveness. The makers of the medicine take two random samples of 25 individuals showing symptoms of a sore throat. Group 1 receives the new medicine and Group 2 receives a placebo. After a few days on the medicine, each group is interviewed and asked how they would rate their comfort level 1-10 (1 being the most uncomfortable and 10 being no discomfort at all). The results are below. Is there sufficient evidence to conclude the variance in scores from Group 1 is less than the variance in scores from Group 2? Test at $\alpha$ = 0.01. Assume the populations are normally distributed. 53. The manager at a pizza place has been getting complaints that the auto-fill soda machine is either under filling or over filling their cups. The manager ran several tests on the machine before using it and knows that the average fill quantity is 12 oz. – exactly as she was hoping. However, she did not test the variance. She took a random sample of 20 fills from her machine, and a random sample of 20 fills from another branch of the restaurant that has not been having complaints. From her machine, she found a sample standard deviation of 1.3 oz. and from the other restaurant's machine she found a sample standard deviation of 0.65 oz. At the 0.05 level of significance, does it seem her machine has a higher variance than the other machine? Assume the populations are normally distributed. 54. An employee at a large company believes that the variation in starting salary at her company differs based on level of experience.
She randomly samples 30 new employees with less than 5 years of experience and categorizes them as Group 1 and 30 new employees with 5 years of experience or more and categorizes them as Group 2. In Group 1, she finds the sample standard deviation in starting salary to be $4,398.10 and in Group 2 she finds the sample standard deviation in starting salary to be $7,237.32. Test her claim at the 0.10 level of significance. Assume the populations are normally distributed. 55. In a random sample of 100 college students, 47 were sophomores and 53 were seniors. The sophomores reported spending an average of $37.03 per week going out for food and drinks with a standard deviation of $7.23, while the seniors reported spending an average of $52.94 per week going out for food and drinks with a standard deviation of $12.33. Can it be concluded that there is a difference in the standard deviation of the amount spent on food and drinks between sophomores and seniors? Test at $\alpha$ = 0.10. 56. A large shoe company is interested in knowing if the amount of money a customer is willing to pay on a pair of shoes varies differently depending on location. They take a random sample of 50 single-pair purchases from Southern states and another random sample of 50 single-pair purchases from Midwestern states and record the cost for each. The results can be found below. At the 0.05 level of significance, is there evidence that the variance in cost differs between the Midwest and the South? 57. The math department chair at a university is proud to boast an average satisfaction score of 8.4 out of 10 for her department's courses. This year, the English department advertised an average of 8.5 out of 10. Not to be outdone, the math department chair decides to check if there is a difference in how the scores vary between the departments. She takes a random sample of 65 math department evaluations and finds a sample standard deviation of 0.75 and a random sample of 65 English department evaluations and finds a sample standard deviation of 1.04. Does she have sufficient evidence to claim that the English department may have a higher average, but also has a higher standard deviation – meaning that their scores are not as consistent as the math department's? Test at $\alpha$ = 0.05. 58. An investor believes that investing in international stock is riskier because the variation in the rate of return is greater. She takes two random samples of 15 months over the past 30 years and finds the following rates of return from a selection of her own domestic and international investments. Can she conclude that the standard deviation in International Rate of Return is higher at the 0.10 level of significance? Assume the populations are normally distributed. 59. The manager at a local coffee shop is trying to decrease the time customers wait for their orders. He wants to find out if keeping multiple registers open will make a difference. He takes a random sample of 30 customers when only one register is open and finds that they wait an average of 6.4 minutes to reach the front with a standard deviation of 1.34 minutes. He takes another random sample of 35 customers when two registers are open and finds that they wait an average of 4.2 minutes to reach the front with a standard deviation of 1.21 minutes. He takes both his samples during peak hours to maintain consistency. Can it be concluded at the 0.05 level of significance that there is a smaller standard deviation in wait time with two registers open? 60.
A movie theater company wants to see if there is a difference in the variance of movie ticket sales in San Diego and Portland per week. They sample 20 sales from San Diego and 20 sales from Portland and count the number of tickets sold over a week. Test the claim using a 5% level of significance. Assume that movie sales are normally distributed. 61. In a random sample of 60 pregnant women with preeclampsia, their systolic blood pressure was taken right before beginning to push during labor. The mean systolic blood pressure was 174 with a standard deviation of 12. In another random sample of 80 pregnant women without preeclampsia, there was a mean systolic blood pressure of 133 and a standard deviation of 8 when the blood pressure was also taken right before beginning to push. Is there sufficient evidence to conclude that women with preeclampsia have a larger variation in blood pressure in the late stages of labor? Test at the 0.01 level of significance. Trillian punched up the figures. They showed two‐to‐the power‐of-Infinity-minus‐one (an irrational number that only has a conventional meaning in Improbability physics). "... it's pretty low," continued Zaphod with a slight whistle. "Yes," agreed Trillian, and looked at him quizzically. "That's one big whack of Improbability to be accounted for. Something pretty improbable has got to show up on the balance sheet if it's all going to add up into a pretty sum." Zaphod scribbled a few sums, crossed them out and threw the pencil away. "Bat's dots, I can't work it out." "Well?" Zaphod knocked his two heads together in irritation and gritted his teeth. "OK," he said. "Computer!" (Adams, 2002) Answers to Odd Numbered Exercises 1) H0: µD = 0; H1: µD < 0; t = -0.7514; p-value = 0.2311; Do not reject H0. There is not enough evidence to support the claim that, on average, the new online learning module increased placement scores. 3) H0: µD = 0; H1: µD > 0; t = 3.5598; p-value = 0.001; Reject H0. There is enough evidence to support the claim that the mean time to complete a task decreases when workers are allowed to wear their earbuds. 5) H0: µD = 0; H1: µD > 0; t = 4.7951; p-value = 0.0001; Reject H0. There is enough evidence to support the claim that the mean reaction time is significantly faster for a person's dominant hand. 7) a) -11.9129 < µD < -8.4871 b) Yes, since µD = 0 is not captured in the interval (-11.9129, -8.4871). 9) H0: µ1 = µ2; H1: µ1 < µ2; z = -3.0908; p-value = 0.001; Reject H0. There is enough evidence to support the claim that the online homework system for introductory math courses improved students' average test scores. 11) H0: µ1 = µ2; H1: µ1 > µ2; z = 0.5602; p-value = 0.2877; Do not reject H0. There is not enough evidence to support the claim that the American League team would score on average more runs for games in which the designated hitter was used. 13) H0: µ1 = µ2; H1: µ1 > µ2; z = 3.0444; p-value = 0.0012; Reject H0. There is enough evidence to support the claim that adults under the age of 30 are more flexible. 15) H0: µ1 = µ2; H1: µ1 ≠ µ2; 3.7336 < µ1 - µ2 < 8.2664; Reject H0. There is a statistically significant difference in the mean daily car rental cost between Los Angeles and Las Vegas at the 5% level of significance. 17) H0: µ1 = µ2; H1: µ1 ≠ µ2; t = 1.0624; p-value = 0.2978; Do not reject H0. There is not enough evidence to support the claim that there is a difference in the average movie ticket sales in San Diego and Portland per week. 19) H0: µ1 = µ2; H1: µ1 > µ2; t = 2.6612; p-value = 0.0056; Reject H0.
There is enough evidence to support the claim that the mean number of frozen pizzas sold during the winter months is more than during the summer months. 21) H0: µ1 = µ2; H1: µ1 < µ2; t = -1.2639; p-value = 0.1098; Do not reject H0. There is not enough evidence to support the claim that the mean wrinkle recovery angle for Hylite is better than Permafresh. 23) H0: µ1 = µ2; H1: µ1 ≠ µ2; t = 0.9455; p-value = 0.3479; Do not reject H0. There is not enough evidence to support the claim that the mean wait time for the two restaurants is different. 25) H0: µ1 = µ2; H1: µ1 ≠ µ2; t = 22.9197; Reject H0. There is enough evidence to support the claim that the soda machine is different from the other restaurants. 27) H0: µ1 = µ2; H1: µ1 ≠ µ2; -16.3925 < µ1 - µ2 < -1.1153; Reject H0. There is enough evidence to support the claim that women with preeclampsia have a higher mean blood pressure in the late stages of labor. 29) $5,070.33 < µ1 - µ2 < $14,049.47 31) H0: µ1 = µ2; H1: µ1 ≠ µ2; t = 2.0435; p-value = 0.0437; Reject H0. There is enough evidence to support the claim that the mean costs for a pair of shoes in the Midwest and the South are different. 33) H0: µ1 = µ2; H1: µ1 ≠ µ2; t = -2.0919; p-value = 0.0402; Do not reject H0. There is not enough evidence to support the claim that there is a statistically significant difference in the mean productivity level between the two locations. 35) H0: µ1 = µ2; H1: µ1 ≠ µ2; -7.5429 < µ1 - µ2 < 13.8429; Do not reject H0. There is not enough evidence to support the claim that dog owners, on average, spend a different amount than cat owners on their pets. 37) H0: p1 = p2; H1: p1 > p2; z = 1.1104; p-value = 0.1334; Do not reject H0. There is not enough evidence to support the claim that the proportion of all freshmen who purchase most of their textbooks from the college's bookstore is greater than the proportion of all seniors. 39) H0: p1 = p2; H1: p1 > p2; z = 3.7177; p-value = 0.0001; Reject H0. Yes, there is evidence that the proportion of those who caught pertussis is higher for those who were not up to date on their booster. 41) 0.04126 < p1 – p2 < 0.25874 43) 2.6589 45) 0.5967 47) H0: $\sigma_{1}^{2}=\sigma_{2}^{2}$ ; H1: $\sigma_{1}^{2} \neq \sigma_{2}^{2}$ ; F = 0.7714; p-value = 0.4891; Do not reject H0. There is not enough evidence to support the claim that there is a significant difference between the variances of the heights of four-year college basketball players and those of players in junior colleges. 49) H0: σ1 = σ2; H1: σ1 ≠ σ2; F = 0.4154; CV = 0.3652 & 2.6968; Do not reject H0. There is not enough evidence to support the claim that there is a significant difference between the standard deviations of temperatures in New York City and Phoenix. 51) H0: σ1 = σ2; H1: σ1 < σ2; F = 0.377; p-value = 0.0015; Reject H0. There is enough evidence to support the claim that the standard deviation of wait times for Restaurant B is significantly longer than that of Restaurant A. 53) H0: $\sigma_{1}^{2}=\sigma_{2}^{2}$ ; H1: $\sigma_{1}^{2}>\sigma_{2}^{2}$ ; F = 4; p-value = 0.002; Reject H0. There is enough evidence to support the claim that the soda machine has a higher variance compared to the other restaurant. 55) H0: σ1 = σ2; H1: σ1 ≠ σ2; F = 0.3438; p-value = 0.0003; Reject H0. There is enough evidence to claim that the standard deviation in money spent on food and drinks differs between sophomores and seniors. 57) H0: σ1 = σ2; H1: σ1 < σ2; F = 0.5201; p-value = 0.0049; Reject H0.
There is enough evidence to claim that the standard deviation in satisfaction scores is higher for the English department compared to the Math department. 59) H0: σ1 = σ2; H1: σ1 > σ2; F = 1.2264; p-value = 0.282; Do not reject H0. There is not enough evidence to claim that the standard deviation in wait time with two registers open is smaller. 61) H0: σ1 = σ2; H1: σ1 > σ2; F = 2.25; p-value = 0.0004; Reject H0. There is evidence to claim that the blood pressure of women with preeclampsia has a larger variation in the late stages of labor. 9.06: Chapter 9 Formulas Hypothesis Test for 2 Dependent Means $\mathrm{H}_{0}: \mu_{\mathrm{D}}=0$ $\mathrm{H}_{1}: \mu_{\mathrm{D}} \neq 0$ $t=\frac{\bar{D}-\mu_{D}}{\left(\frac{S_{D}}{\sqrt{n}}\right)}$ TI-84: T-Test Confidence Interval for 2 Dependent Means $\bar{D} \pm t_{\alpha / 2}\left(\frac{s_{D}}{\sqrt{n}}\right)$ TI-84: TInterval Hypothesis Test for 2 Independent Means Z-Test: \begin{aligned} \mathrm{H}_{0}: \mu_{1} &=\mu_{2} \ \mathrm{H}_{1}: \mu_{1} & \neq \mu_{2} \end{aligned} $z=\frac{\left(\bar{x}_{1}-\bar{x}_{2}\right)-\left(\mu_{1}-\mu_{2}\right)_{0}}{\sqrt{\left(\frac{\sigma_{1}^{2}}{n_{1}}+\frac{\sigma_{2}^{2}}{n_{2}}\right)}}$ TI-84: 2-SampZTest Confidence Interval for 2 Independent Means Z-Interval $\left(\bar{x}_{1}-\bar{x}_{2}\right) \pm z_{\alpha / 2} \sqrt{\left(\frac{\sigma_{1}^{2}}{n_{1}}+\frac{\sigma_{2}^{2}}{n_{2}}\right)}$ TI-84: 2-SampZInt Hypothesis Test for 2 Independent Means \begin{aligned} &\mathrm{H}_{0}: \mu_{1}=\mu_{2} \ &\mathrm{H}_{1}: \mu_{1} \neq \mu_{2} \end{aligned} T-Test: Assume variances are unequal $t=\frac{\left(\bar{x}_{1}-\bar{x}_{2}\right)-\left(\mu_{1}-\mu_{2}\right)_{0}}{\sqrt{\left(\frac{s_{1}^{2}}{n_{1}}+\frac{s_{2}^{2}}{n_{2}}\right)}}$ TI-84: 2-SampTTest $df=\frac{\left(\frac{s_{1}^{2}}{n_{1}}+\frac{s_{2}^{2}}{n_{2}}\right)^{2}}{\left(\left(\frac{s_{1}^{2}}{n_{1}}\right)^{2}\left(\frac{1}{n_{1}-1}\right)+\left(\frac{s_{2}^{2}}{n_{2}}\right)^{2}\left(\frac{1}{n_{2}-1}\right)\right)}$ T-Test: Assume variances are equal \begin{aligned} &t=\frac{\left(\bar{x}_{1}-\bar{x}_{2}\right)-\left(\mu_{1}-\mu_{2}\right)}{\sqrt{\left(\frac{\left(n_{1}-1\right) s_{1}^{2}+\left(n_{2}-1\right) s_{2}^{2}}{\left(n_{1}+n_{2}-2\right)}\right)\left(\frac{1}{n_{1}}+\frac{1}{n_{2}}\right)}} \ &df=\mathrm{n}_{1}+\mathrm{n}_{2}-2 \end{aligned} Confidence Interval for 2 Independent Means $\left(\bar{x}_{1}-\bar{x}_{2}\right) \pm t_{\alpha / 2} \sqrt{\left(\frac{s_{1}^{2}}{n_{1}}+\frac{s_{2}^{2}}{n_{2}}\right)}$ TI-84: 2-SampTInt $df=\frac{\left(\frac{s_{1}^{2}}{n_{1}}+\frac{s_{2}^{2}}{n_{2}}\right)^{2}}{\left(\left(\frac{s_{1}^{2}}{n_{1}}\right)^{2}\left(\frac{1}{n_{1}-1}\right)+\left(\frac{s_{2}^{2}}{n_{2}}\right)^{2}\left(\frac{1}{n_{2}-1}\right)\right)}$ T-Interval: Assume variances are equal \begin{aligned} &\left(\bar{x}_{1}-\bar{x}_{2}\right) \pm t_{\alpha / 2} \sqrt{\left(\left(\frac{\left(n_{1}-1\right) s_{1}^{2}+\left(n_{2}-1\right) s_{2}^{2}}{\left(n_{1}+n_{2}-2\right)}\right)\left(\frac{1}{n_{1}}+\frac{1}{n_{2}}\right)\right)} \ &df=\mathrm{n}_{1}+\mathrm{n}_{2}-2 \end{aligned} Hypothesis Test for 2 Proportions \begin{aligned} &\mathrm{H}_{0}: p_{1}=p_{2} \ &\mathrm{H}_{1}: p_{1} \neq p_{2} \end{aligned} $Z=\frac{\left(\hat{p}_{1}-\hat{p}_{2}\right)-\left(p_{1}-p_{2}\right)}{\sqrt{\left(\hat{p} \cdot \hat{q}\left(\frac{1}{n_{1}}+\frac{1}{n_{2}}\right)\right)}}$ $\hat{p}=\frac{\left(x_{1}+x_{2}\right)}{\left(n_{1}+n_{2}\right)}=\frac{\left(\hat{p}_{1} \cdot n_{1}+\hat{p}_{2} \cdot
n_{2}\right)}{\left(n_{1}+n_{2}\right)}$ $\hat{q}=1-\hat{p} \quad \hat{p}_{1}=\frac{x_{1}}{n_{1}} \hat{p}_{2}=\frac{x_{2}}{n_{2}}$ TI-84: 2-PropZTest Confidence Interval for 2 Proportions $\left(\hat{p}_{1}-\hat{p}_{2}\right) \pm z_{\frac{\alpha}{2}} \sqrt{\left(\frac{\hat{p}_{1} \hat{q}_{1}}{n_{1}}+\frac{\hat{p}_{2} \hat{q}_{2}}{n_{2}}\right)}$ $\hat{p}_{1}=\frac{x_{1}}{n_{1}} \quad \hat{p}_{2}=\frac{x_{2}}{n_{2}}$ $\hat{q}_{1}=1-\hat{p}_{1} \quad \hat{q}_{2}=1-\hat{p}_{2}$ TI-84: 2-PropZInt Hypothesis Test for 2 Variances \begin{aligned} &H_{0}: \sigma_{1}^{2}=\sigma_{2}^{2} \ &H_{1}: \sigma_{1}^{2} \neq \sigma_{2}^{2} \end{aligned} \quad F=\frac{s_{1}^{2}}{s_{2}^{2}} $df \mathrm{~N}=\mathrm{n}_{1}-1, df \mathrm{D}=\mathrm{n}_{2}-1$ TI-84: 2-SampFTest Hypothesis Test for 2 Standard Deviations \begin{aligned} &H_{0}: \sigma_{1}=\sigma_{2} \ &H_{1}: \sigma_{1} \neq \sigma_{2} \end{aligned} \quad F=\frac{s_{1}^{2}}{s_{2}^{2}} $df \mathrm{~N}=\mathrm{n}_{1}-1, df \mathrm{D}=\mathrm{n}_{2}-1$ TI-84: 2-SampFTest The following flow chart in Figure 9-18 can help you decide which formula to use. Start on the left and ask yourself: is the question about proportions (%), means (averages), standard deviations, or variances? Are there 1 or 2 samples? Was the population standard deviation given? Are the samples dependent or independent? Are you asked to test a claim? If yes, then use the test statistic (TS) formula. Are you asked to find a confidence interval? If yes, then use the confidence interval (CI) formula. In each box is the null hypothesis and the corresponding TI calculator shortcut key. Figure 9-18 Download a .pdf version of the flowchart at: http://MostlyHarmlessStatistics.com. The same steps used in hypothesis testing for a one-sample test are used here. Use technology to find the p-value or critical value. A clue that the samples are dependent in many of these questions is that the term "paired" is used, or that the same person was measured before and after some applied experiment or treatment. The p-value will always be a positive number between 0 and 1. The same three methods of hypothesis testing (the critical value method, the p-value method, and the confidence interval method) are also used in this section. The p-value method is used more often than the other methods. The rejection rules for the three methods are: • P-value method: reject H0 when the p-value ≤ $\alpha$. • Critical value method: reject H0 when the test statistic is in the critical tail(s). • Confidence interval method: reject H0 when the hypothesized value (0) found in H0 is outside the bounds of the confidence interval. The most important step in any method you use is setting up your null and alternative hypotheses.
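Because the formula table above packs several cases into compact notation, it may help to see one of them spelled out in code. The sketch below implements the unequal-variance (Welch) two-sample t-test statistic and its degrees of freedom formula from the table, written in Python with SciPy. This is only an illustration: SciPy is not part of this text's TI/Excel toolkit, the function name welch_t_test is an invented label, and the summary statistics at the bottom are made up just to show the call.

```python
# Minimal sketch of the 2-independent-means t-test with unequal variances,
# following the test statistic and df formulas in the table above (SciPy assumed).
from scipy import stats

def welch_t_test(xbar1, s1, n1, xbar2, s2, n2):
    """Return t, df, and a two-tailed p-value for H0: mu1 = mu2 (variances not pooled)."""
    v1, v2 = s1**2 / n1, s2**2 / n2
    t = (xbar1 - xbar2) / (v1 + v2) ** 0.5
    df = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    p = 2 * stats.t.sf(abs(t), df)     # two-tailed; halve for a one-tailed test
    return t, df, p

# Made-up summary statistics, just to show the call:
print(welch_t_test(2.74, 0.86, 20, 2.86, 0.54, 25))
```

The TI-84's 2-SampTTest with Pooled set to No computes these same quantities.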
A $\chi^{2}$-distribution (chi-square, pronounced "ki-square") is another special type of distribution for a continuous random variable. The sampling distribution for a variance and standard deviation follows a chi-square distribution. Properties of the $\chi^{2}$-distribution density curve: 1. Right skewed starting at zero. 2. The center and spread of a $\chi^{2}$-distribution are determined by the degrees of freedom with a mean = df and standard deviation = $\sqrt{2df}$. 3. Chi-square variables cannot be negative. 4. As the degrees of freedom increase, the $\chi^{2}$-distribution becomes approximately normally distributed for df > 50. Figure 10-1 shows $\chi^{2}$-distributions for df of 2, 4, 10, and 30. 5. The total area under the curve is equal to 1, or 100%. We will use the $\chi^{2}$-distribution for hypothesis testing later in this chapter. For now, we are just learning how to find a critical value $\chi_{\alpha}^{2}$. The symbol $\chi_{\alpha}^{2}$ is the critical value on the $\chi^{2}$-distribution curve with area 1 – $\alpha$ below the critical value and area $\alpha$ above the critical value, as shown below in Figure 10-2. Use technology to compute the critical value for the $\chi^{2}$-distribution. TI-84: Use the INVCHI2 program downloaded at Rachel Webb's website: http://MostlyHarmlessStatistics.com. Start the program and enter the area $\alpha$ and the df when prompted. TI-89: Go to the [Apps] Stat/List Editor, then select F5 [DISTR]. This will get you a menu of probability distributions. Arrow down to Inverse > Inverse Chi-Square and press [ENTER]. Enter the area 1 – $\alpha$ to the left of the $\chi$ value and the df into each cell. Press [ENTER]. Excel: =CHISQ.INV(1 – $\alpha$, df) or =CHISQ.INV.RT($\alpha$, df) Alternatively, use the following online calculator: https://homepage.divms.uiowa.edu/~mbognar/applets/chisq.html. Compute the critical value $\chi_{\alpha}^{2}$ for $\alpha$ = 0.05 and df = 6. Solution Start by drawing the curve and determining the area in the right-tail as shown in Figure 10-3. Then use technology to find the critical value; for example, in Excel, =CHISQ.INV.RT(0.05,6) = 12.5916. 10.02: Goodness of Fit Test The $\chi^{2}$ goodness-of-fit test can be used to test the distribution of three or more proportions within a single population. Definition: $\chi^{2}$ goodness-of-fit test The $\chi^{2}$-test is a statistical test for testing the goodness-of-fit of a variable. It can be used when the data are obtained from a random sample and when the expected frequency (E) from each category is 5 or more. The formula for the $\chi^{2}$-test statistic is: $\chi^{2}=\sum \frac{(O-E)^{2}}{E}$ Use a right-tailed $\chi^{2}$-distribution with $\text{df} = k-1$ where $k$ = the number of categories, with • $O$ = the observed frequency (what was observed in the sample) and • $E$ = the expected frequency (based on $H_{0}$ and the sample size). • $H_{0}: p_{1} = p_{1,0}, p_{2} = p_{2,0}, \cdots, p_{k} = p_{k,0}$ (each proportion is equal to its hypothesized value) • $H_{1}:$ At least one proportion is different. An instructor claims that their students' grade distribution is different from the department's grade distribution. The department's grade distribution in introductory statistics courses has the following proportions: A's 35%, B's 23%, C's 25%, D's 10%, and F's 7%. For a sample of 250 introductory statistics students with this instructor, there were 80 A's, 50 B's, 58 C's, 38 D's, and 24 F's. Test the instructor's claim at the 5% level of significance.
Solution This is a test for three or more proportions within a single population, so use the goodness-of-fit test. We will always use a right-tailed χ 2 -test. The hypotheses for this example would be: $H_{0}: p_{A} = 0.35, p_{B} = 0.23, p_{C} = 0.25, p_{D} = 0.10, p_{F} = 0.07$ $H_{1}:$ At least one proportion is different. Even though there is an inequality in $H_{1}$, the goodness-of-fit test is always a right-tailed test. This is because we are testing to see if there is a large variation between the observed versus the expected values. If the variance between the observed and expected values is large, then there is a difference in the proportions. Also note that we do not write the alternative hypothesis as $p_{A} \neq 0.35, p_{B} \neq 0.23, p_{C} \neq 0.25, p_{D} \neq 0.10, p_{F} \neq 0.07$ since it could be that any one of these proportions is different. All of the proportions not being equal to their hypothesized values is just one possible case. There are $k = 5$ categories that we are comparing: A’s, B’s, C’s, D’s and F’s. The observed counts are the actual number of A’s, B’s, C’s, D’s and F’s from the sample. We must compute the expected count for each of the five categories. Find the expected counts by multiplying the expected proportion of A’s, B’s, C’s, D’s and F’s by the sample size. It will be helpful to make a table to organize the work. The test statistic is the sum of this last row: $\chi^{2}=\sum \frac{(O-E)^{2}}{E}=0.6429+0.9783+0.324+6.76+2.4143=11.1195$ The critical value for a right-tailed $\chi^{2}$-test with $\text{df} = k - 1 = 5 - 1 = 4$ is found by finding the area in the $\chi^{2}$ -distribution using your calculator or Excel. Use $\alpha = 0.05$ area in the right-tail, to get the critical value of $\chi_{\alpha}^{2}$ =CHISQ.INV.RT(0.05,4) = 9.4877. Draw and label the curve as shown in Figure 10-4. The test statistic of $\chi^{2} = 11.1195 > \chi_{\alpha}^{2} = 9.4877$ and is in the rejection area, so our decision is: Reject $H_{0}$. There is sufficient evidence to support the claim that the proportion of students who get A’s, B’s, C’s, D’s and F’s in introductory statistics courses for this instructor is different than the department’s proportions of 35%, 23%, 25%, 10% and 7% respectively. If we were asked to find the p-value, you would just find the area to right of the test statistic (always a right-tailed test) using your calculator or Excel =CHISQ.DIST.RT(11.1195,4) = 0.0253. This gives a p-value = 0.0252 which is less than $\alpha = 0.05$, therefore reject $H_{0}$. You can use the GOF shortcut function on your calculator to get a p-value; see directions below. If you get the program from your instructor or the website for your TI-83, you can also have the calculator find the $\frac{(O-E)^{2}}{E}$ values. The TI-84 and 89 already does this. TI-84: Note: For the TI-83 download a GOF program from http://MostlyHarmlessStatistics.com. Only newer TI-84 operating systems have a calculator shortcut key for GOF. Use the same GOF program as for the TI-83 if your 84 does not have the $\chi^{2}$ GOF-Test. Before you start, write down your observed and expected values. Select Stat, then Calc. Type in the observed values into list 1, and the expected values into list 2. Select Stat, then Tests. Go down to option D: $\chi^{2}$ GOF-Test. Choose L1 for the Observed category and L2 for the Expected category, type in your degrees of freedom ($\text{df} = k-1$), and then select Calculate. The calculator returns the $\chi^{2}$ -test statistic and the p-value. 
Use the right arrow to see the rest of the $\frac{(O-E)^{2}}{E}$ values. TI-89: Go to the [Apps] Stat/List Editor, then type in the observed values into list 1, and the expected values into list 2. Press [2nd] then F6 [Tests], then select 7: Chi-2GOF. Type in the list names and the degrees of freedom ($\text{df} = k-1$). Then press the [ENTER] key to calculate. The calculator returns the $\chi^{2}$-test statistic and the p-value. The $\frac{(O-E)^{2}}{E}$ values are stored in the comp list. A research company is looking to see if the proportion of consumers who purchase a cereal is different based on shelf placement. They have four locations: Bottom Shelf, Middle Shelf, Top Shelf, and Aisle End Shelf. Test to see whether there is a preference among the four shelf placements. Use the p-value method with $\alpha=0.05$. Solution The hypotheses can be written as a sentence or as proportions. If you use proportions, note that there are no percentages given. We would expect that each shelf placement be the same if there was no preference. There are 4 categories, so $p_{0} = \frac{1}{4} = 0.25$ or 25% for each placement. • $H_{0}: p_{B} = 0.25, p_{M} = 0.25, p_{T} = 0.25, p_{E} = 0.25$ • $H_{1}:$ At least one proportion is different. It is also acceptable to write the hypotheses as a sentence. • $H_{0}:$ Proportion of cereal sales is equally distributed across the four shelf placements. • $H_{1}:$ Proportion of cereal sales is not equally distributed across the four shelf placements. Find the expected values. Total all the observed values to get the sample size: $n = 45 + 67 + 55 + 73 = 240$. Then take the sample size and divide by 4 to get $\frac{240}{4} = 60$. The expected value for each group is 60. Compute the test statistic: \begin{aligned} \chi^{2} &=\sum \frac{(O-E)^{2}}{E} = \frac{(45-60)^{2}}{60} + \frac{(67-60)^{2}}{60} + \frac{(55-60)^{2}}{60} + \frac{(73-60)^{2}}{60} \[4pt] &=3.75+0.816667+0.416667+2.816667=7.8 \end{aligned} Check your work using technology and find the p-value. The degrees of freedom are the number of groups minus one: $\text{df} = k - 1 = 3$. You can scroll right by selecting the right arrow button to see the rest of the contribution values. In Excel the p-value is found by the formula =CHISQ.DIST.RT(7.8,3). On the TI-Calculator, use the $\chi^{2}$ GOF-Test shortcut. The p-value = 0.05033 which is larger than $\alpha=0.05$; therefore, do not reject $H_{0}$. There is not enough evidence to support the claim that cereal shelf placement makes a statistically significant difference in the proportion of sales at the 5% level of significance.
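Both goodness-of-fit examples above can be reproduced with a few lines of Python using scipy.stats.chisquare. This is an illustrative sketch only; SciPy is assumed and is not part of the text’s Excel/TI workflow.

from scipy.stats import chisquare

# Instructor's grade distribution: O = observed counts, E = n * hypothesized proportions
observed = [80, 50, 58, 38, 24]
expected = [250 * p for p in (0.35, 0.23, 0.25, 0.10, 0.07)]   # 87.5, 57.5, 62.5, 25, 17.5
stat, pval = chisquare(f_obs=observed, f_exp=expected)         # df defaults to k - 1 = 4
print(round(stat, 4), round(pval, 4))                          # about 11.1195 and 0.0253

# Cereal shelf placement: equal expected counts of 240/4 = 60 per shelf
stat, pval = chisquare(f_obs=[45, 67, 55, 73])                 # equal expected counts is the default
print(round(stat, 4), round(pval, 4))                          # about 7.8 and 0.0503

The test statistics and p-values agree with the hand calculations and calculator output above.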
textbooks/stats/Introductory_Statistics/Mostly_Harmless_Statistics_(Webb)/10%3A_Chi-Square_Tests/10.01%3A_Chi-Square_Distribution.txt
Use the chi-square test for independence to test the independence of two categorical variables. Remember, qualitative data is collected on individuals that are categories or names. Then you would count how many of the individuals had particular qualities. An example is that there is a theory that there is a relationship between breastfeeding and having autism spectrum disorder (ASD). To determine if there is a relationship, researchers could collect the time-period that a mother breastfed her child and if that child was diagnosed with ASD. Then you would have a table containing this information. Now you want to know if each cell is independent of each other cell. Remember, independence says that one event does not affect another event. Here it means that having ASD is independent of being breastfed. What you really want is to see if they are dependent (not independent). In other words, does one affect the other? If you were to do a hypothesis test, this is your alternative hypothesis and the null hypothesis is that they are independent. There is a hypothesis test for this and it is called the chi-square test for independence. There is only a right-tailed test for testing the independence between two variables: $H_{0}:$ Variable 1 and Variable 2 are independent (unrelated). $H_{1}:$ Variable 1 and Variable 2 are dependent (related). Finding the test statistic involves several steps. First, the data is collected, counted, and then organized into a contingency table. These values are known as the observed frequencies, and the symbol for an observed frequency is $O$. Total each row and column. The null hypothesis is that the two variables are independent. If two events are independent then $P(B) = P(B | A)$ and we can use the multiplication rule for independent events, to calculate the probability that variable $A$ and $B$ as the $P(A\, \text{and}\, B) = P(A) \cdot P(B)$. Remember in a hypothesis test, you assume that $H_{0}$ is true, the two variables are assumed to be independent. \begin{aligned} P(A\, \text{and}\, B) &= P(A) \cdot P(B) \text{ if } A \text{ and } B \text{ are independent} \[4pt] &=\dfrac{\text { Number of ways A can happen }}{\text { Total number of individuals }} \cdot \dfrac{\text { Number of ways B can happen }}{\text { Total number of individuals }} \[4pt] &=\dfrac{\text { Row Total }}{n} \cdot \dfrac{\text { Column Total }}{n} \end{aligned} Now you want to find out how many individuals you expect to be in a certain cell. To find the expected frequencies, you just need to multiply the probability of that cell times the total number of individuals. Do not round the expected frequencies. \begin{aligned} \text{Expected frequency (cell A and B)} &= E(A \text{ and } B) \ &= n \left(\dfrac{\text { Row Total }}{n} \cdot \dfrac{\text { Column Total }}{n}\right) = \dfrac{\text { Row Total } \cdot \text { Column Total }}{n} \end{aligned} If the variables are independent, the expected frequencies and the observed frequencies should be the same. The test statistic here will involve looking at the difference between the expected frequency and the observed frequency for each cell. Then you want to find the “total difference” of all of these differences. The larger the total, the smaller the chances that you could find that test statistic given that the assumption of independence is true. That means that the assumption of independence is not true. How do you find the test statistic? First, compute the differences between the observed and expected frequencies. 
Because some of these differences will be positive and some will be negative, you need to square these differences. These squares could be large just because the frequencies are large, so you need to divide by the expected frequencies to scale them. Then finally add up all of these fractional values. This process finds the variance, and we use a chi-square distribution to find the critical value or p-value. Hence, sometimes this test is called a chi-square test. The $\chi^{2}$-test is a statistical test for testing the independence between two variables. It can be used when the data are obtained from a random sample, and when the expected value $(E)$ from each cell is 5 or more. The formula for the $\chi^{2}$ -test statistic is: $\chi^{2} = \sum \frac{(O-E)^{2}}{E}$. Use $\chi^{2}$ -distribution with degrees of freedom $\text{df}$ = (the number of rows – 1) (the number of columns – 1), that is, $\text{df} = (R-1)(C-1)$. where $O$ = the observed frequency (sample results) and $E$ = the expected frequency (based on $H_{0}$ and the sample size). Is there a relationship between autism spectrum disorder (ASD) and breastfeeding? To determine if there is, a researcher asked mothers of ASD and non-ASD children to say what time-period they breastfed their children. Does the data provide enough evidence to show that breastfeeding and ASD are independent? Test at the 1% level. (Schultz, Klonoff-Cohen, Wingard, Askhoomoff, Macera, Ji & Bacher, 2006.) Solution The question is asking if breastfeeding and ASD are independent. The correct hypothesis is: $H_{0}:$ Autism spectrum disorder and length of breastfeeding are independent. $H_{1}:$ Autism spectrum disorder and length of breastfeeding are dependent. There are 2 rows and 4 columns of data. We must compute the Expected count for each of the $2 \times 4 = 8$ cells. The expected counts for each cell are found by the formula: $\text { Expected Value } = \dfrac{\text { Row Total } \cdot \text { Column Total }}{\text { Grand Total }}$ It will be helpful to make a table for the expected counts and another one for each of the $\frac{(O-E)^{2}}{E}$ values to aid in computing the test statistic. The test statistic is the sum of all eight $\frac{(O-E)^{2}}{E}$ values: $\chi^{2}=\sum \frac{(O-E)^{2}}{E} = 11.217$. The critical value for a right-tailed $\chi^{2}$-test with degrees of freedom $\text{df} = (R-1)(C-1) = (2-1)(4-1) = 3$ is found using a $\chi^{2}$ -distribution $\alpha=0.01$ right-tail area. The critical value is $\chi^{2}$ = CHISQ.INV.RT(0.01,3) = 11.3449. See Figure 10-5. Alternatively, use the online calculator: https://homepage.divms.uiowa.edu/~mbognar/applets/chisq.html. Since the test statistic $\chi^{2} = 11.217$ is not in the rejection area, our decision is to fail to reject $H_{0}$. There is not enough evidence to show a relationship between autism spectrum disorder and breastfeeding. If we were asked to find the p-value, you would just find the area to right of the test statistic (always a right-tailed test) using your calculator or Excel. This gives a p-value = 0.0106, which is more than $\alpha=0.01$; therefore, we do not reject H0. You can also use the $\chi^{2}$-Test shortcut keys on your calculator to get a p-value, see directions below. TI-84: Press the [2nd] then [MATRX] key. Arrow over to the EDIT menu and 1:[A] should be highlighted, press the [ENTER] key. 
For a $m \times n$ contingency table, type in the number of rows $(m)$ and the number of columns $(n)$ at the top of the screen so that it looks like this: MATRIX[A] $m \times n$. For a $2 \times 4$ contingency table, the top of the screen would look like this: MATRIX[A] $2 \times 4$. As you hit [ENTER], the table will automatically widen to the size you put in. Now enter all of the observed values in their proper positions. Then press the [STAT] key, arrow over to the [TESTS] menu, arrow down to the option [C: $\chi^{2}$ -Test] and press the [ENTER] key. Leave the default as Observed:[A] and Expected:[B], arrow down to [Calculate] and press the [ENTER] key. The calculator returns the $\chi^{2}$-test statistic and the p-value. If you go back to the matrix menu [2nd] then [MATRX] key, arrow over to EDIT and choose 2:[B], you will see all of the expected values. TI-89: First you need to create the matrix for the observed values: Press [Home] to return to the Home screen, press [Apps] and select Data/Matrix Editor. A menu is displayed, select 3:New. The New dialog box is displayed. Press the right arrow key to highlight 2:Matrix, and press [ENTER] to choose Matrix type. Press the down arrow key to highlight 1:Main, and press [ENTER], to choose main folder. Press the down arrow key, and then enter the letter $o$ for the name in the Variable field. Enter 2 for Row dimension and 4 for Column dimension. Press [ENTER] to display the matrix editor. Enter the observed value (do not include total row or column). Important: Next time you use this test instead of option 3:New, choose 2: Open. The open dialog box is displayed. Press the right arrow key to highlight 2:Matrix, and press [ENTER] to choose Matrix type. Press the down arrow key to make sure you are in the Main folder and that your variable says $o$. Press [Apps], and then select Stats/List Editor. To display the Chi-square 2-Way dialog box, press 2nd then F6 [Tests], then select 8: Chi-2 2-way. Enter in in the Observed Mat: o; leave the other rows alone: Store Expected to: statvars\e; Store CompMat to: statvars\c. This will store the expected values in the matrix folder statvars with the name expmat, and the $(o-e)^{2}/e$ values in the matrix compmat. Press the [ENTER] key to calculate. The calculator returns the $\chi^{2}$-test statistic and the p-value. If you go back to the matrix menu, you will see some of the expected and $(o-e)^{2}/e$ values. To see all the expected values, select [APPS] and select Data/Matrix Editor. Select 2:Open, change the Type to Matrix, change the Folder to statvars, and change the Variable to expmat. To see all the $(o-e)^{2}/e$ values, select [APPS] and select Data/Matrix Editor. Select 2:Open, change the Type to Matrix, change the Folder to statvars, and change the Variable to compmat. If you need to delete a row or column, move the cursor to the row or column that you want to delete, then select F6 Util, then 2:Delete, then choose row or column, then enter. To add a row or column, just arrow over to the new row or column and type in the observed values. The sample data below show the number of companies providing dental insurance for small, medium and large companies. Test to see if there is a relationship between dental insurance coverage and company size. Use $\alpha=0.05$. Solution State the hypotheses. $H_{0}:$ Dental insurance coverage and company size are independent. $H_{1}:$ Dental insurance coverage and company size are dependent. 
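The hand computation of the expected counts for this example follows below. As a software sketch of that same step, the entire expected-count table can be built from the row and column totals implied by the worked solution (65 and 95 companies with/without coverage; 67, 64, and 29 small/medium/large companies; grand total 160). NumPy is assumed here and is not part of the text’s calculator workflow.

import numpy as np

row_totals = np.array([65, 95])        # with dental insurance, without dental insurance
col_totals = np.array([67, 64, 29])    # small, medium, large companies
n = row_totals.sum()                   # grand total = 160

# Expected count for each cell = (row total * column total) / grand total
expected = np.outer(row_totals, col_totals) / n
print(expected)   # first row starts 27.21875, 26.0, ..., matching the values computed below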
Compute the expected values by taking each row total times column total, divided by grand total. For the small companies with dental insurance: $(65 \cdot 67)/160 = 27.21875$, small companies without dental insurance: $(95 \cdot 67)/160 = 39.78125$, medium companies with dental insurance: $(65 \cdot 64)/160 = 26$, etc. See table below. Compute the test statistic. Test statistic is $\chi^{2}=\sum \dfrac{(O-E)^{2}}{E}=1.42082+0.03846+4.42316+0.97214+0.02632+3.02637=9.9073$. Use technology to find the p-value using the chi-square cdf with $\text{df} = (R-1)(C-1) = (2-1)(3-1) = 2$. Using the TI-Calculator, we find the p-value = 0.0071. The p-value is less than $\alpha$; therefore, reject $H_{0}$. There is enough evidence to support the claim that there is a relationship between dental insurance coverage and company size. 10.04: Chapter 10 Formulas Goodness of Fit Test $H_{0}: p_{1} = p_{0}, p_{2}= p_{0}, \cdots, p_{k} = p_{0}$. $H_{1}:$ At least one proportion is different. $\chi^{2}=\sum \frac{(O-E)^{2}}{E}$ $\text{df} = k-1$, $p_{0} = \frac{1}{k}$ or given % TI-84: $\chi^{2}$ GOF-Test Test for Independence $H_{0}:$ Variable 1 and Variable 2 are independent. $H_{1}:$ Variable 1 and Variable 2 are dependent. $\chi^{2}=\sum \frac{(O-E)^{2}}{E}$ $\text{df} = (R-1)(C-1)$ TI-84: $\chi^{2}$-Test One of the major selling points of that wholly remarkable travel book, the Hitchhiker's Guide to the Galaxy, apart from its relative cheapness and the fact that it has the words DON'T PANIC written in large friendly letters on its cover, is its compendious and occasionally accurate glossary. The statistics relating to the geo‐social nature of the Universe, for instance, are deftly set out between pages nine hundred and thirty‐eight thousand and twenty-four and nine hundred and thirty‐eight thousand and twenty‐six; and the simplistic style in which they are written is partly explained by the fact that the editors, having to meet a publishing deadline, copied the information off the back of a packet of breakfast cereal, hastily embroidering it with a few footnotes in order to avoid prosecution under the incomprehensibly tortuous Galactic Copyright laws. (Adams, 2002)
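Before moving on to the exercises, the p-values for the two worked independence tests in this chapter can be cross-checked from their test statistics and degrees of freedom. This is a sketch assuming SciPy; the text itself uses Excel’s CHISQ.DIST.RT or the calculator shortcuts.

from scipy.stats import chi2

# Breastfeeding and ASD example: chi-square = 11.217 with df = (2-1)(4-1) = 3
print(round(chi2.sf(11.217, 3), 4))     # right-tail p-value, about 0.0106
print(round(chi2.ppf(0.99, 3), 4))      # alpha = 0.01 critical value, about 11.3449

# Dental insurance example: chi-square = 9.9073 with df = (2-1)(3-1) = 2
print(round(chi2.sf(9.9073, 2), 4))     # about 0.0071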
textbooks/stats/Introductory_Statistics/Mostly_Harmless_Statistics_(Webb)/10%3A_Chi-Square_Tests/10.03%3A_Test_for_Independence.txt
Chapter 10 Exercises 1. The shape of the $\chi^{2}$ -distribution is usually: a) Normal b) Bell-shaped c) Skewed left d) Skewed right e) Uniform 2. Why are Goodness of Fit tests always right-tailed? a) Because the test checks for a large variance between observed and expected values. b) Because the $\chi^{2}$-distribution is skewed right. c) Because $\chi^{2}$ values can never be negative. d) Because they test a variance and variance is always positive. 3. What are the requirements to be satisfied before using a Goodness of Fit test? Check all that apply. a) The data are obtained using systematic sampling. b) The data are obtained from a simple random sample. c) The expected frequency from each category is 5 or more. d) The observed frequency from each category is organized from largest to smallest. e) The degrees of freedom are less than 30. For exercises 4-19, show all 5 steps for hypothesis testing: a) State the hypotheses. b) Compute the test statistic. c) Compute the critical value or p-value. d) State the decision. e) Write a summary. 4. Pamplona, Spain, is the home of the festival of San Fermin – The Running of the Bulls. The town is in festival mode for a week and a half every year at the beginning of July. There is a running joke in the city that Pamplona has a baby boom every April – 9 months after San Fermin. To test this claim, a resident takes a random sample of 200 birthdays from native residents and finds the following. At the 0.05 level of significance, can it be concluded that births in Pamplona are not equally distributed throughout the 12 months of the year? 5. A professor using an open-source introductory statistics book predicts that 60% of the students will purchase a hard copy of the book, 25% will print it out from the web, and 15% will read it online. At the end of the term she asks her students to complete a survey where they indicate what format of the book they used. Of the 126 students, 45 said they bought a hard copy of the book, 25 said they printed it out from the web, and 56 said they read it online. Run a Goodness of Fit test at $\alpha$ = 0.05 to see if the distribution is different than expected. 6. The proportion of final grades for an anatomy class for the whole department are distributed as 10% A's, 23% B's, 45% C's, 14% D's, and 8% F's. A department chair is getting quite a few student complaints about a particular professor. The department chair wants to check to see if the professor’s students’ grades have a different distribution compared to the rest of the department. At the end of the term, the students have the following grades. Use $\alpha$ = 0.05. 7. You might think that if you looked at the first digit in randomly selected numbers that the distribution would be uniform. Actually, it is not! Simon Newcomb and later Frank Benford both discovered that the digits occur according to the following distribution. A forensic accountant can use Benford's Law to detect fraudulent tax data. Suppose you work for the IRS and are investigating an individual suspected of embezzling. The first digit of 192 checks to a supposed company are as follows. Run a complete Goodness of Fit test to see if the individual is likely to have committed tax fraud. Use $\alpha$ = 0.05. Should law enforcement officials pursue the case? Explain. 8. A college professor is curious if location of seat in class affects grade in the class. She is teaching in a lecture hall to 200 students. 
The lecture hall has 10 rows, so she splits it into 5 categories – Rows 1-2, Rows 3-4, Rows 5-6, Rows 7-8, and Rows 9-10. At the end of the course, she determines the top 25% of grades in the class, and if location of seat makes no difference, she would expect that these top 25% of students would be equally dispersed throughout the classroom. Her observations are recorded below. Run a Goodness of Fit test to determine whether location has an impact on grade. Let $\alpha$ = 0.05. 9. Consumer panel preferences for four store displays follow. Test to see whether there is a preference among the four display designs. Use $\alpha$ = 0.05. 10. The manager of a coffee shop wants to know if his customers’ drink preferences have changed in the past year. He knows that last year the preferences followed the following proportions – 34% Americano, 21% Cappuccino, 14% Espresso, 11% Latte, 10% Macchiato, 10% Other. In a random sample of 300 customers, he finds that 90 ordered Americanos, 65 ordered Cappuccinos, 52 ordered Espressos, 35 ordered Lattes, 34 ordered Macchiatos, and the rest ordered something in the Other category. Run a Goodness of Fit test to determine whether drink preferences have changed at his coffee shop. Use a 0.05 level of significance. 11. The director of a Driver’s Ed program is curious if the time of year has an impact on number of car accidents in the United States. They assume that weather may have a significant impact on the ability of drivers to control their vehicles. They take a random sample of 100 car accidents and record the season each occurred in. They found that 20 occurred in the spring, 31 in the summer, 23 in the fall, and 26 in the winter. Can it be concluded at the 0.05 level of significance that car accidents are not equally distributed throughout the year? 12. A college prep school advertises that their students are more prepared to succeed in college than other schools. To show this, they categorize GPAs into 4 groups and look up the proportion of students at a state college in each category. They find that 7% have a 0-0.99, 21% have a 1-1.99, 37% have a 2-2.99, and 35% have a 3-4.00 in GPA. They then take a random sample of 150 of their graduates at the state college and find that 5 graduates have a 0- 0.99, 18 have a 1-1.99, 67 have a 2-2.99, and 60 have a 3-4.00. Can they conclude that the grades of their graduates are distributed differently than the general population at the school? Test at the 0.05 level of significance. 13. The permanent residence of adults aged 18-25 in the United States was examined in a survey from the year 2000. The survey revealed that 27% of these adults lived alone, 32% lived with a roommate(s), and 41% lived with their parents/guardians. In 2008, during an economic recession in the country, another such survey of 1,500 people revealed that 378 lived alone, 452 lived with a roommate(s), and 670 lived with their parents. Is there a significant difference in where young adults lived in 2000 versus 2008? Test with a Goodness of Fit test at $\alpha$ = 0.05. 14. A color code personality test categorizes people into four colors – Red (Power), Blue (Intimacy), Green (Peace), and Yellow (Fun). In general, 25% of people are Red, 35% Blue, 20% Green, and 20% Yellow. An art class of 33 students is tested at a university and 4 are found to be Red, 14 Blue, 7 Green, and 8 Yellow. Can it be concluded that personality type has an impact on students’ areas of interest and talents, such as artistic students? Test at a 0.05 level of significance. 
15. An urban economist is curious if the distribution in where Oregon residents live is different today than it was in 1990. She observes that today there are approximately 3,050 thousand residents in NW Oregon, 907 thousand residents in SW Oregon, 257 thousand in Central Oregon, and 106 thousand in Eastern Oregon. She knows that in 1990 the breakdown was as follows: 72.7% NW Oregon, 19.7% SW Oregon, 4.8% Central Oregon, and 2.8% Eastern Oregon. Can she conclude that the distribution in residence is different today at a 0.05 level of significance? 16. A large department store is curious what sections of the store make the most sales. The manager has data from 10 years prior that show 30% of sales come from Clothing, 25% Home Appliances, 18% Housewares, 13% Cosmetics, 12% Jewelry, and 2% Other. In a random sample of 500 current sales, 176 came from Clothing, 150 from Home Appliances, 75 from Housewares, 42 from Cosmetics, 51 from Jewelry, and 6 from Other. At $\alpha$ = 0.10, can the manager conclude that the distribution of sales among the departments has changed? 17. Students at a high school are asked to evaluate their experience in a class at the end of each school year. The courses are evaluated on a 1-4 scale – with 4 being the best experience possible. In the History Department, the courses typically are evaluated at 10% 1’s, 15% 2’s, 34% 3’s, and 41% 4’s. A new history teacher, Mr. Mendoza, sets a goal to outscore these numbers. At the end of the year, he takes a random sample of his evaluations and finds 11 1’s, 14 2’s, 47 3’s, and 53 4’s. At the 0.05 level of significance, can Mr. Mendoza claim that his evaluations are significantly different from the History Department’s? 18. A company manager believes that a person’s ability to be a leader is directly related to their zodiac sign. He never selects someone to chair a committee without first evaluating their zodiac sign. An irate employee sets out to show her manager is wrong. She claims that if zodiac sign truly makes a difference in leadership, then a random sample of 200 CEOs in our country would reveal a difference in zodiac sign distribution. She finds the following zodiac signs for her random sample of 200 CEOs. Can the employee conclude that there is a difference in the proportion of CEOs for the twelve zodiac signs? Use $\alpha$ = 0.05. 19. A company that develops over-the-counter medicines is working on a new product that is meant to shorten the length of time that sore throats persist. To test their product for effectiveness, they take a random sample of 100 people and record how long it took for their symptoms to completely disappear. The results are in the table below. The company knows that on average (without medication) it takes a sore throat 6 days or less to heal 42% of the time, 7-9 days 31% of the time, 10-12 days 16% of the time, and 13 days or more 11% of the time. Can it be concluded at the 0.01 level of significance that the patients who took the medicine healed at a different rate than these percentages? 20. What are the requirements to be satisfied before using the $\chi^{2}$ Independence Test? Check all that apply. a) The sample sizes must be greater than 30. b) The data are obtained from a random sample. c) The data are obtained using stratified sampling. d) The expected frequency from each category is 5 or more. e) The observed frequency from each category is organized from largest to smallest. f) The population is normally distributed. 21. 
The null hypothesis for the $\chi^{2}$ Independence Test always states that _____________________________. a) the two values are equal. b) one variable is dependent on another variable. c) one variable is independent of another variable. d) the expected values and observed values are the same. 22. What are the degrees of freedom used in the $\chi^{2}$ Independence Test? a) $n - 1$ b) $\text{Rows} + \text{Columns}$ c) $n$ d) $(\text{Rows} - 1) \times (\text{Columns} - 1)$ e) $n - 2$ For exercises 23-33, show all 5 steps for hypothesis testing: a) State the hypotheses. b) Compute the test statistic. c) Compute the critical value or p-value. d) State the decision. e) Write a summary. 23. A manufacturing company knows that their machines produce parts that are defective on occasion. They have 4 machines producing parts and want to test if defective parts are dependent on the machine that produced it. They take a random sample of 300 parts and find the following results. Test at the 0.05 level of significance. 24. The sample data below show the number of companies providing health insurance for small, medium and large companies. Test to see whether health insurance coverage and company size are dependent. Use $\alpha$ = 0.01. 25. A restaurant chain that has 3 locations in Portland is trying to determine which of their 3 locations they should keep open on New Year’s Eve. They survey a random sample of customers at each location and ask each whether they plan to go out to eat on New Year’s Eve. The results are below. Run a test for independence to decide if the proportion of customers who will go out to eat on New Year’s Eve is dependent on location. Use $\alpha$ = 0.05. 26. The following sample was collected during registration at a large middle school. At the 0.05 level of significance, can it be concluded that level of math is dependent on grade level? 27. A high school offers math placement exams for incoming freshmen to place students into the appropriate math class during their freshman year. Three middle schools were sampled and the following pass/fail results were found. Test to see if the math placement exam result and the school that students attend are dependent at the 0.10 level of significance. 28. A public opinion poll surveyed a simple random sample of 500 voters in Oregon. The respondents were asked which political party they identified with most and were categorized by residence. Results are shown below. Decide if voting preference is dependent on location of residence. Let $\alpha$ = 0.05. 29. A university changed to a new learning management system (LMS) during the past school year. The school wants to find out how it is working for the different departments – the results in preference found from a survey are below. Test to see if the department and LMS preference are dependent at $\alpha$ = 0.05. 30. The medal count for the 2018 winter Olympics is recorded below. Run an independence test to find out if the medal won is dependent on country. Use $\alpha$ = 0.10. 31. An electronics store has 4 branches in a large city. They are curious if sales in any particular department are different depending on location. They take a random sample of purchases throughout the 4 branches – the results are recorded below. Test to see if the type of electronic device and store branch are dependent at the 0.05 level of significance. 32. A high school runs a survey asking students if they participate in sports. The results are found below. 
Test to see if there is a relationship between participating in sports and year in school at $\alpha$ = 0.05. 33. A 4-year college is curious which of their students hold down a job while also attending school. They poll the students and find the results below. Test to see if there is a relationship between college students having a job and year in school. Use $\alpha$ = 0.05. Answers to Odd Numbered Exercises 1) d 3) b & c 5) $H_{0}: p_{1} = 0.6, p_{2} = 0.25, p_{3} = 0.15$; $H_{1}:$ At least one proportion is different. $\chi^{2} = 86.5529$; p-value $= 1.604 \times 10^{-19} \approx 0$; reject $H_{0}$. There is enough evidence to support the claim that the distribution is different than expected. There were more students than expected that would read the text online. 7) $H_{0}: p_{1} = 0.301, p_{2} = 0.176, p_{3} = 0.125, p_{4} = 0.097, p_{5} = 0.079, p_{6} = 0.067, p_{7} = 0.058, p_{8} = 0.051, p_{9} = 0.046$; $H_{1}:$ At least one proportion is different. $\chi^{2} = 11.8466$; CV = 15.5073; do not reject $H_{0}$. There is no evidence of tax fraud, so law enforcement officials should not pursue the case. 9) $H_{0}: p_{1} = 0.25, p_{2} = 0.25, p_{3} = 0.25, p_{4} = 0.25$; $H_{1}:$ At least one proportion is different. $\chi^{2}$ = 3.16; p-value = 0.3676; do not reject $H_{0}$. There is not enough evidence to support the claim that there is a preference among the four display designs. 11) $H_{0}: p_{1} = 0.25, p_{2} = 0.25, p_{3} = 0.25, p_{4} = 0.25$; $H_{1}:$ At least one proportion is different. $\chi^{2}$ = 2.64; p-value = 0.4505; do not reject $H_{0}$. There is not enough evidence to support the claim that car accidents are not equally distributed throughout the year. 13) $H_{0}: p_{1} = 0.27, p_{2} = 0.32, p_{3} = 0.41$; $H_{1}:$ At least one proportion is different. $\chi^{2}$ = 8.352; p-value = 0.0154; reject $H_{0}$. There is enough evidence to support the claim that there is a significant difference in where young adults lived in 2000 versus 2008. There are fewer young adults living at home than expected. 15) $H_{0}: p_{1} = 0.727, p_{2} = 0.197, p_{3} = 0.048, p_{4} = 0.028$; $H_{1}:$ At least one proportion is different. $\chi^{2}$ = 20.0291; p-value = 0.0002; reject $H_{0}$. There is enough evidence to support the claim that the distribution of Oregon residents is different now compared to 1990. There were more Oregonians in central Oregon than expected. 17) $H_{0}: p_{1} = 0.1, p_{2} = 0.15, p_{3} = 0.34, p_{4} = 0.41$; $H_{1}:$ At least one proportion is different. $\chi^{2}$ = 1.9196; p-value = 0.5893; do not reject $H_{0}$. There is not enough evidence to support the claim that Mr. Mendoza’s course evaluation scores are different compared to the rest of the History Department’s evaluations. 19) $H_{0}: p_{1} = 0.42, p_{2} = 0.31, p_{3} = 0.16, p_{4} = 0.11$; $H_{1}:$ At least one proportion is different. $\chi^{2}$ = 7.6986; p-value = 0.05267; do not reject $H_{0}$. There is not enough evidence to support the claim that the patients who took the medicine healed at a different rate than these percentages. 21) c 23) $H_{0}:$ The number of defective parts is independent of the machine that produced it. $H_{1}:$ The number of defective parts is dependent on the machine that produced it. $\chi^{2}$ = 2.3536; p-value = 0.5023; do not reject $H_{0}$. There is not enough evidence to support the claim that the number of defective parts is dependent on the machine that produced it. 25) $H_{0}:$ The proportion of customers who will go out to eat on New Year’s Eve is independent of location. 
$H_{1}:$ The proportion of customers who will go out to eat on New Year’s Eve is dependent on location. $\chi^{2}$ = 2.2772; p-value = 0.3203; do not reject $H_{0}$. There is not enough evidence to support the claim that the proportion of customers who will go out to eat on New Year’s Eve is dependent on location. 27) $H_{0}:$ The math placement exam result and which middle school students attend are independent. $H_{1}:$ The math placement exam result and which middle school students attend are dependent. $\chi^{2}$ = 0.1642; p-value = 0.9212; do not reject $H_{0}$. There is not enough evidence to support the claim that the math placement exam result and which middle school students attend are dependent. 29) $H_{0}:$ Department and LMS preference are independent. $H_{1}:$ Department and LMS preference are dependent. $\chi^{2}$ = 24.7778; p-value = 0.000056; reject $H_{0}$. There is enough evidence to support the claim that department and LMS preference are dependent. 31) $H_{0}:$ Type of electronic device and store branch are independent. $H_{1}:$ Type of electronic device and store branch are dependent. $\chi^{2}$ = 7.4224; p-value = 0.8285; do not reject $H_{0}$. There is not enough evidence to support the claim that type of electronic device and store branch are dependent. 33) $H_{0}:$ Having a job in college and year in school are not related. $H_{1}:$ Having a job in college and year in school are related. $\chi^{2}$ = 20.8875; p-value = 0.0001; reject $H_{0}$. There is enough evidence to support the claim that having a job in college and year in school are related.
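As one illustration of how the odd-numbered answers above can be checked with software (a sketch assuming SciPy; this is not part of the exercises themselves), answer 5 can be reproduced from the counts given in exercise 5:

from scipy.stats import chisquare

observed = [45, 25, 56]                              # hard copy, printed from web, read online
expected = [126 * p for p in (0.60, 0.25, 0.15)]     # 75.6, 31.5, 18.9
stat, pval = chisquare(f_obs=observed, f_exp=expected)
print(round(stat, 4), pval)   # about 86.5529 and 1.6e-19, matching answer 5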
textbooks/stats/Introductory_Statistics/Mostly_Harmless_Statistics_(Webb)/10%3A_Chi-Square_Tests/10.05%3A_Chapter_10_Exercises.txt
• 11.1: One-Way ANOVA The one-way ANOVA F-test is a statistical test for testing the equality of \(k\) population means from 3 or more groups within one variable or factor. There are many different types of ANOVA; we will cover with what is commonly referred to as a one-way ANOVA, which has one main effect or factor that is split up into three or more independent treatment levels. In more advanced courses you would learn about dependent groups or two or more factors. • 11.2: Pairwise Comparisons of Means (Post-Hoc Tests) How to determine which means are significantly different from each other, if the ANOVA indicates rejecting the null hypothesis, using the Bonferroni Test. • 11.3: Two-Way ANOVA (Factorial Design) Two-way analysis of variance (two-way ANOVA) is an extension of one-way ANOVA that allows for testing the equality of \(k\)  population means from two independent variables, and to test for interaction between the two variables. • 11.4: Chapter 11 Formulas • 11.5: Chapter 11 Exercises 11: Analysis of Variance The $z$- and $t$-tests can be used to test the equality between two population means $\mu_{1}$ and $\mu_{2}$. When we have more than two groups, we would inflate the probability of making a type I error if we were to compare just two at a time and make a conclusion about all the groups together. To account for this P(Type I Error) inflation, we instead will do an analysis of variance (ANOVA) to test the equality between 3 or more population means $\mu_{1}, \mu_{2}, \mu_{3}, \ldots, \mu_{k}$. The F-test (for ANOVA) is a statistical test for testing the equality of $k$ population means. The one-way ANOVA F-test is a statistical test for testing the equality of $k$ population means from 3 or more groups within one variable or factor. There are many different types of ANOVA; for now, we are going to start with what is commonly referred to as a one-way ANOVA, which has one main effect or factor that is split up into three or more independent treatment levels. In more advanced courses you would learn about dependent groups or two or more factors. Assumptions: • The populations are normally distributed continuous variables with equal variances. • The observations are independent. The hypotheses for testing the equality of $k$ population means (ANOVA) are set up with all the means equal to one another in the null hypothesis and at least one mean is different in the alternative hypothesis. $H_{0}: \mu_{1} = \mu_{2} = \mu_{3} = \ldots = \mu_{k}$ $H_{1}:$ At least one mean is different. Even though there is equality in $H_{0}$, the ANOVA test is testing if the variance between groups is significantly greater than the variance within groups; hence, this will always be set up as a right-tailed test. We will be using abbreviations for many of the numbers found in this section. B = Between, W = Within MS = Mean Square (This is a variance) MSB = Mean Square (Variance) Between groups. MSW = Mean Square (Variance) Within groups. The formula for the F-test statistic is $F = \frac{MSB}{MSW}$. Use the F-distribution with degrees of freedom from the between and within groups. The numerator degrees of freedom are equal to the number of groups minus one, that is numerator degrees of freedom are $df_{B} = k - 1$. The denominator degrees of freedom are equal to the total of all the sample sizes minus the number of groups, that is denominator degrees of freedom are $df_{W} = N - k$. The sum of squares, degrees of freedom and mean squares are organized in a table called an ANOVA table. 
Figure 11-1 below is a template for an ANOVA table. Where: $\bar{x}_{i}$ = sample mean from the $i^{th}$ group $s_{i}^{2}$ = sample variance from the $i^{th}$ group $n_{i}$ = sample size from the $i^{th}$ group $k$ = number of groups $N = n_{1} + n_{2} + \cdots + n_{k}$ = sum of the individual sample sizes for the groups Grand mean from all groups = $\bar{x}_{GM} = \frac{\sum x_{i}}{N}$ Sum of squares between groups = SSB = $\sum n_{i} \left(\bar{x}_{i} - \bar{x}_{GM}\right)^{2}$ Sum of squares within groups = SSW = $\sum \left(n_{i} - 1\right) s_{i}^{2}$ Mean squares between groups (or the between-groups variance $s_{B}^{2}$) = $MSB = \frac{SSB}{k-1}$ Mean squares within groups (or the error/within-groups variance $s_{W}^{2}$) = $MSW = \frac{SSW}{N-k}$ $F = \frac{MSB}{MSW}$ is the test statistic. These calculations can be time-consuming to do by hand, so use technology to find the ANOVA table values, critical value and/or p-value. Different textbooks and computer software programs use different labels in the ANOVA tables. • The TI-calculators use the word Factor for Between Groups and Error for Within Groups. • Some software packages use Treatment instead of Between groups. You may see different notation depending on which textbook, software or video you are using. • For between groups, SSB = SSTR = SST = SSF, and for within groups, SSW = SSE. • One thing that is consistent within the ANOVA table is that the between = factor = treatment always appears on the first row of the ANOVA table, and the within = error always is in the second row of the ANOVA table. • The $df$ column usually is in the second column since we divide the sum of squares by the $df$ to find the mean squares. However, some software packages will put the $df$ column before the sum of squares column. • The test statistic is under the column labeled F. • Many software packages will give an extra column for the p-value and some software packages give a critical value too. Assumption: The populations we are sampling from must be approximately normal with equal variances. If these assumptions are not met, there are more advanced statistical methods that should be used. An educator wants to see if there is a difference in the average grades given to students for the 4 different instructors who teach intro statistics courses. They randomly choose courses that each of the 4 instructors taught over the last few years and perform an ANOVA test. What would be the correct hypotheses for this test? Solution There are 4 groups (the 4 instructors) so there will be 4 means in the null hypothesis, $H_{0}: \mu_{1} = \mu_{2} = \mu_{3} = \mu_{4}$. It is tempting to write $\mu_{1} \neq \mu_{2} \neq \mu_{3} \neq \mu_{4}$ for the alternative hypothesis, but the opposite of all the means being equal is that at least one mean is different. We could, for instance, have just the mean for group 2 be different while groups 1, 3 and 4 have equal means. If we wanted to write out all the ways to have unequal groups, we would have a combination problem with ${}_{4} C_{3} + {}_{4} C_{2} + {}_{4} C_{1}$ ways of getting unequal means. Instead of all of these possibilities, we just write a sentence: “at least one mean is different.” The hypotheses are: $H_{0}: \mu_{1} = \mu_{2} = \mu_{3} = \mu_{4}$ $H_{1}:$ At least one mean is different. A researcher claims that there is a difference in the average age of assistant professors, associate professors, and full professors at her university. 
Faculty members are selected randomly and their ages are recorded. Assume faculty ages are normally distributed. Test the claim at the $\alpha$ = 0.01 significance level. The data are listed below. Solution The claim is that there is a difference in the average age of assistant professors $(\mu_{1})$, associate professors $(\mu_{2})$, and full professors $(\mu_{3})$ at her university. The correct hypotheses are: $H_{0}: \mu_{1} = \mu_{2} = \mu_{3}$ $H_{1}:$ At least one mean differs. We need to compute all of the necessary parts for the ANOVA table and F-test. Compute the descriptive stats for each group with your calculator using 1-Var Stats L1. Record the sample size, sample mean, sum of $x$, and sample variance. Take the standard deviation $s_{x}$ and square it to find the variance $s_{x}^{2}$ for each group. Assistant Prof $n_{1} = 7$ $\bar{x}_{1} = 37$ $\sum x_{1} = 259$ $s_{1}^{2} = 53$ Associate Prof $n_{2} = 7$ $\bar{x}_{2} = 52$ $\sum x_{2} = 364$ $s_{2}^{2} = 55.6667$ Prof $n_{3} = 7$ $\bar{x}_{3} = 54$ $\sum x_{3} = 378$ $s_{3}^{2} = 35$ Compute the grand mean: $N = n_{1} + n_{2} + n_{3} = 7 + 7 + 7 = 21$, $\bar{x}_{GM} = \frac{\sum x_{i}}{N} = \frac{(259 + 364 + 378)}{21} = 47.66667$. Compute the sum of squares for between groups: SSB = $\sum n_{i} \left(\bar{x}_{i} - \bar{x}_{GM}\right)^{2} = n_{1} \left(\bar{x}_{1} - \bar{x}_{GM}\right)^{2} + n_{2} \left(\bar{x}_{2} - \bar{x}_{GM}\right)^{2} + n_{3} \left(\bar{x}_{3} - \bar{x}_{GM}\right)^{2} = 7(37 - 47.66667)^{2} + 7(52 - 47.66667)^{2} + 7(54 - 47.66667)^{2} = 1208.66667$. Compute the sum of squares within groups: SSW = $\sum \left(n_{i} - 1\right) s_{i}^{2} = \left(n_{1} - 1\right) s_{1}^{2} + \left(n_{2} - 1\right) s_{2}^{2} + \left(n_{3} - 1\right) s_{3}^{2} = 6 \cdot 53 + 6 \cdot 55.66667 + 6 \cdot 35 = 862$. Place the sum of squares into your ANOVA table and add them up to get the total. Next, find the degrees of freedom: $k = 3$ since there are 3 groups, so $df_{B} = k-1 = 2$; $df_{W} = N-k = 21-3 = 18$. Add the degrees of freedom to the table to get the total $df$. Compute the mean squares by dividing the sum of squares by their corresponding $df$, then add these numbers to the table. $MSB = \frac{SSB}{k-1} = \frac{1208.66667}{2} = 604.3333 \quad MSW = \frac{SSW}{N-k} = \frac{862}{18} = 47.8889$ The test statistic is the ratio of these two mean squares: $F = \frac{MSB}{MSW} = \frac{604.3333}{47.8889} = 12.6195$. Add the test statistic to the table under F. All ANOVA tests are right-tailed tests, so the critical value for a right-tailed F-test is found with the F-distribution. Use $\alpha$ = 0.01 area in the right-tail. The degrees of freedom are $df_{N} = 2$, and $df_{D} = 18$. The critical value is 6.0129; see the sampling distribution curve in Figure 11-2. Note that the F-distribution starts at zero, and is skewed to the right. The test statistic of 12.6195 is larger than the critical value of 6.0129, so the decision would be to reject the null hypothesis. Decision: Reject $H_{0}$. Summary: There is enough evidence to support the claim that there is a difference in the average age of assistant professors, associate professors, and full professors at her university. If we were using the p-value method, then we would use the calculator or computer for a right-tailed test. The p-value = 0.0003755 is less than $\alpha$ = 0.01, which leads to the same decision of rejecting $H_{0}$ that we found using the critical value method. 
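The ANOVA table above can also be rebuilt directly from the three groups’ summary statistics using the SSB and SSW formulas. The Python sketch below assumes SciPy/NumPy are available (the text itself uses the TI calculators or Excel, shown next):

import numpy as np
from scipy.stats import f

n = np.array([7, 7, 7])                     # group sample sizes
xbar = np.array([37.0, 52.0, 54.0])         # group means
s2 = np.array([53.0, 55.6667, 35.0])        # group variances
k, N = len(n), n.sum()

grand_mean = (n * xbar).sum() / N           # about 47.6667
SSB = (n * (xbar - grand_mean) ** 2).sum()  # about 1208.67
SSW = ((n - 1) * s2).sum()                  # about 862
MSB, MSW = SSB / (k - 1), SSW / (N - k)     # about 604.33 and 47.89
F = MSB / MSW                               # about 12.62
p_value = f.sf(F, k - 1, N - k)             # about 0.0004
critical_value = f.ppf(0.99, k - 1, N - k)  # about 6.01 for alpha = 0.01
print(round(F, 4), round(p_value, 6), round(critical_value, 4))

These values match the ANOVA table, critical value, and p-value computed above.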
Alternatively, use technology to compute the ANOVA table and p-value. TI-84: ANOVA, hypothesis test for the equality of k population means. Note you have to have the actual raw data to do this test on the calculator. Press the [STAT] key and then the [EDIT] function, type the three lists of data into list one, two and three. Press the [STAT] key, arrow over to the [TESTS] menu, arrow down to the option [F:ANOVA(] and press the [ENTER] key. This brings you back to the regular screen where you should now see ANOVA(. Now hit the [2nd] [L1] [,] [2nd] [L2] [,][2nd] [L3][)] keys in that order. You should now see ANOVA(L1,L2,L3); if you had 4 lists you would then have an additional list. Press the [ENTER] key. The calculator returns the F-test statistic, the p-value, Factor (Between) $df$, SS and MS, Error (Within) $df$, SS and MS. The last value, Sxp is the square root of the MSE. TI-89: ANOVA, hypothesis test for the equality of $k$ population means. Go to the [Apps] Stat/List Editor, then type in the data for each group into a separate list (or if you don’t have the raw data, enter the sample size, sample mean and sample variance for group 1 into list1 in that order, repeat for list2, etc.). Press [2nd] then F6 [Tests], then select C:ANOVA. Select the input method data or stats. Select the number of groups. Press the [ENTER] key to calculate. The calculator returns the F-test statistic, the p-value, Factor (Between) df, SS and MS, Error (Within) df, SS and MS. The last value, Sxp, is the square root of the MSE. Excel: Type the labels and data into adjacent columns (it is important not to have any blank columns, which would be an additional group counted as zeros). Select the Data tab, then Data Analysis, ANOVA: Single-Factor, then OK. Next, select all three columns of data at once for the input range. Check the box that says Labels in first row (only select this if you actually selected the labels in the input range). Change your value of alpha and output range will be one cell reference where you want your output to start, see below. You get the following output: Excel gives both the p-value and critical value so you can use either method when making your decision, but make sure you are comfortable with both. Summary The ANOVA test gives evidence that there is a difference between three or more means. The null hypothesis will always have the means equal to one another versus the alternative hypothesis that at least one mean is different. The F-test results are about the difference in means, but the test is actually testing if the variation between the groups is larger than the variation within the groups. If this between group variation is significantly larger than the within groups then we can say there is a statistically significant difference in the population means. Hence, we are always performing a right-tailed F-test for ANOVA. Make sure to only compare the p-value with $\alpha$ and the test statistic to the critical value.
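For completeness, when the raw data are available the whole test is a one-liner in Python with SciPy’s f_oneway. The sketch below uses made-up example lists purely to show the call; the numbers are hypothetical and not the textbook’s data.

from scipy.stats import f_oneway

# Hypothetical raw ages for three groups (illustrative numbers only)
group1 = [30, 35, 37, 40, 42, 33, 42]
group2 = [48, 50, 52, 55, 60, 47, 52]
group3 = [50, 54, 58, 49, 56, 55, 56]

F, p_value = f_oneway(group1, group2, group3)
print(round(F, 4), round(p_value, 6))   # F-test statistic and right-tail p-value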
textbooks/stats/Introductory_Statistics/Mostly_Harmless_Statistics_(Webb)/11%3A_Analysis_of_Variance/11.01%3A_One-Way_ANOVA.txt
If you do in fact reject $H_{0}$, then you know that at least two of the means are different. The ANOVA test does not tell us which of those means are different, only that a difference exists. Most likely your sample means will be different from each other, but how different do they need to be for there to be a statistically significant difference? To determine which means are significantly different, you need to conduct further tests. These post-hoc tests include the range test, multiple comparison tests, Duncan test, Student-Newman-Keuls test, Tukey test, Scheffé test, Dunnett test, Fisher’s least significant difference test, and the Bonferroni test, to name a few. There are more options, and there is no consensus on which test to use. These tests are available in statistical software packages such as R, Minitab and SPSS. One should never use repeated two-sample $t$-tests from the previous chapter, since this would inflate the type I error. The probability of at least one type I error increases rapidly with the number of groups you are comparing. Suppose $\alpha = 0.05$; then the probability of not making a type I error on a single comparison is $1 - \alpha = 0.95$. If two comparisons are made, the probability of making no type I error is no longer 0.95. The probability is $(1 - \alpha)^{2} = 0.9025$, and the P(Type I Error) = $1 - 0.9025 = 0.0975$. Therefore, the P(Type I Error) when $m$ comparisons are made is $1 - (1 - \alpha)^{m}$. For instance, if we are comparing the means of four groups: There would be $m = {}_4 C_{2} = 6$ different ways to compare the 4 groups: groups (1,2), (1,3), (1,4), (2,3), (2,4), and (3,4). The P(Type I Error) = $1 - (1 - \alpha)^{6} = 0.2649$. This is why a researcher should use ANOVA for comparing means, instead of independent $t$-tests. There are many different methods to use. Many require special tables or software. We could actually just start with post-hoc tests, but they are a lot of work. If we run an ANOVA and we fail to reject the null hypothesis, then there is no need for further testing, and it will save time if you were doing these steps by hand. Most statistical software packages give you the ANOVA table followed by the pairwise comparisons with just a change in the options menu. Keep in mind that Excel is not a statistical software package and does not give pairwise comparisons. We will use the Bonferroni Test, named after the mathematician Carlo Bonferroni. The Bonferroni Test uses the t-distribution table and is similar to previous t-tests that we have used, but adjusts $\alpha$ for the number of comparisons being made. The Bonferroni test is a statistical test for testing the difference between two population means (only done after an ANOVA test shows not all means are equal). The formula for the Bonferroni test statistic is $t = \dfrac{\bar{x}_{i} - \bar{x}_{j}}{\sqrt{\left( MSW \left(\frac{1}{n_{i}} + \frac{1}{n_{j}}\right) \right)}}$, where $\bar{x}_{i}$ and $\bar{x}_{j}$ are the means of the samples being compared, $n_{i}$ and $n_{j}$ are the sample sizes, and $MSW$ is the within-group variance from the ANOVA table. The Bonferroni test critical value or p-value is found by using the t-distribution with within degrees of freedom $df_{W} = N-k$, using an adjusted $\frac{\alpha}{m}$ two-tail area under the t-distribution, where $k$ = number of groups and $m = {}_{k} C_{2}$, all the combinations of pairs out of $k$ groups. 
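The inflation formula and the Bonferroni-adjusted critical value can be sketched in Python. SciPy is assumed here; the text uses the t-distribution table or calculator instead.

from math import comb
from scipy.stats import t

alpha, k = 0.05, 4
m = comb(k, 2)                              # number of pairwise comparisons, 4C2 = 6
print(round(1 - (1 - alpha) ** m, 4))       # P(at least one Type I error), about 0.2649

# Bonferroni critical value: two-tail area alpha/m on t with df_W = N - k
def bonferroni_cv(alpha, m, df_w):
    return t.ppf(1 - (alpha / m) / 2, df_w)

print(round(bonferroni_cv(0.01, 3, 18), 4))  # about 3.3804, used in the next example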
Critical Value Method According to the ANOVA test that we previously performed, there does appear to be a difference in the average age of assistant professors $(\mu_{1})$, associate professors $(\mu_{2})$, and full professors $(\mu_{3})$ at this university. The hypotheses were: $H_{0}: \mu_{1} = \mu_{2} = \mu_{3}$ $H_{1}:$ At least one mean differs. The decision was to reject $H_{0}$, which means there is a significant difference in the mean age. The ANOVA test does not tell us, though, where the differences are. Determine which of the differences between each pair of means are significant. That is, test if $\mu_{1} \neq \mu_{2}$, if $\mu_{1} \neq \mu_{3}$, and if $\mu_{2} \neq \mu_{3}$. Solution The alternative hypothesis for the ANOVA was “at least one mean is different.” There will be ${}_{3} C_{2} = 3$ subsequent hypothesis tests to compare all the combinations of pairs (Group 1 vs. Group 2, Group 1 vs. Group 3, and Group 2 vs. Group 3). Note that if you have 4 groups then you would have to do ${}_{4} C_{2} = 6$ comparisons, etc. Use the t-distribution to find the critical value for the Bonferroni test. The total of all the individual sample sizes is $N = 21$ and $k = 3$, and $m = {}_{3} C_{2} = 3$, so the area for both tails would be $\frac{\alpha}{m} = \frac{0.01}{3} = 0.003333$. This is a two-tailed test, so the area in one tail is $\frac{0.003333}{2} = 0.0016667$; with $df_{W} = N-k = 21-3 = 18$ this gives $\text{C.V.} = \pm 3.3804$. The critical values are really far out in the tail so it is hard to see the shaded area. See Figure 11-3. Compare $\mu_{1}$ and $\mu_{2}$: $H_{0}: \mu_{1} = \mu_{2}$ $H_{1}: \mu_{1} \neq \mu_{2}$ The test statistic is $t = \frac{\bar{x}_{1} - \bar{x}_{2}}{\sqrt{\left( MSW \left(\frac{1}{n_{1}} + \frac{1}{n_{2}}\right) \right)}} = \frac{37 - 52}{\sqrt{\left(47.8889 \left(\frac{1}{7} + \frac{1}{7}\right) \right)}} = -4.0552$. Compare the test statistic to the critical value. Since the test statistic $-4.0552 < \text{critical value} = -3.3804$, we reject $H_{0}$. There is enough evidence to conclude that there is a difference in the average age of assistant and associate professors. Compare $\mu_{1}$ and $\mu_{3}$: $H_{0}: \mu_{1} = \mu_{3}$ $H_{1}: \mu_{1} \neq \mu_{3}$ The test statistic is $t = \frac{\bar{x}_{1} - \bar{x}_{3}}{\sqrt{\left( MSW \left(\frac{1}{n_{1}} + \frac{1}{n_{3}}\right) \right)}} = \frac{37 - 54}{\sqrt{\left(47.8889 \left(\frac{1}{7} + \frac{1}{7}\right) \right)}} = -4.5958$. Compare the test statistic to the critical value. Since the test statistic $-4.5958 < \text{critical value} = -3.3804$, the test statistic is in the lower tail, so we reject $H_{0}$. There is enough evidence to conclude that there is a difference in the average age of assistant and full professors. Compare $\mu_{2}$ and $\mu_{3}$: $H_{0}: \mu_{2} = \mu_{3}$ $H_{1}: \mu_{2} \neq \mu_{3}$ The test statistic is $t = \frac{\bar{x}_{2} - \bar{x}_{3}}{\sqrt{\left( MSW \left(\frac{1}{n_{2}} + \frac{1}{n_{3}}\right) \right)}} = \frac{52 - 54}{\sqrt{\left(47.8889 \left(\frac{1}{7} + \frac{1}{7}\right) \right)}} = -0.5407$. Compare the test statistic to the critical value. Since the test statistic is between the critical values, $-3.3804 < -0.5407 < 3.3804$, we fail to reject $H_{0}$. There is not enough evidence to conclude that there is a difference in the average age of associate and full professors. 
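The three comparisons above can be reproduced from the group summary statistics (means 37, 52, and 54, each with n = 7, and MSW = 47.8889 from the ANOVA table). The short sketch below assumes SciPy and Python’s itertools; it is only a cross-check of the hand calculations.

from itertools import combinations
from math import sqrt
from scipy.stats import t

means = {"assistant": 37.0, "associate": 52.0, "full": 54.0}
n, MSW, df_w = 7, 47.8889, 18
cv = t.ppf(1 - (0.01 / 3) / 2, df_w)        # Bonferroni critical value, about 3.3804

for (g1, m1), (g2, m2) in combinations(means.items(), 2):
    t_stat = (m1 - m2) / sqrt(MSW * (1 / n + 1 / n))
    decision = "reject H0" if abs(t_stat) > cv else "fail to reject H0"
    print(g1, "vs", g2, round(t_stat, 4), decision)
# assistant vs associate: -4.0552, reject; assistant vs full: -4.5958, reject;
# associate vs full: -0.5407, fail to reject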
Note: you should get at least one pairwise comparison where you reject $H_{0}$, since you only do the Bonferroni test if you reject $H_{0}$ for the ANOVA. Also, note that the transitive property does not apply. It could be that group 1 = group 2 and group 2 = group 3; this does not mean that group 1 = group 3.

P-Value Method

A research organization tested microwave ovens. At $\alpha$ = 0.10, is there a significant difference in the average prices of the three types of oven?

Solution

The ANOVA was run in Excel. To test if there is a significant difference in the average prices of the three types of oven, the hypotheses are: $H_{0}: \mu_{1} = \mu_{2} = \mu_{3}$ $H_{1}:$ At least one mean differs. Use the Excel output to find the p-value in the ANOVA table of 0.001019, which is less than $\alpha$ so reject $H_{0}$; there is at least one mean that is different in the average oven prices. There is a statistically significant difference in the average prices of the three types of oven. Use the Bonferroni test p-value method to see where the differences are.

Compare $\mu_{1}$ and $\mu_{2}$: $H_{0}: \mu_{1} = \mu_{2}$ $H_{1}: \mu_{1} \neq \mu_{2}$ $t = \frac{\bar{x}_{1} - \bar{x}_{2}}{\sqrt{\left( MSW \left(\frac{1}{n_{1}} + \frac{1}{n_{2}}\right) \right)}} = \frac{233.3333-203.125}{\sqrt{\left(1073.794 \left(\frac{1}{6} + \frac{1}{8}\right) \right)}} = 1.7070$ To find the p-value, find the area in both tails and multiply this area by $m$. The area to the right of $t = 1.707$, using $df_{W} = 19$, is 0.0520563. Remember these are always two-tail tests, so multiply this area by 2 to get both tail areas of 0.104113. Then multiply this area by $m = {}_{3} C_{2} = 3$ to get a p-value = 0.3123. Since the p-value = $0.3123 > \alpha = 0.10$, we do not reject $H_{0}$. There is not a statistically significant difference in the average price of the 1,000- and 900-watt ovens.

Compare $\mu_{1}$ and $\mu_{3}$: $H_{0}: \mu_{1} = \mu_{3}$ $H_{1}: \mu_{1} \neq \mu_{3}$ $t = \frac{\bar{x}_{1} - \bar{x}_{3}}{\sqrt{\left( MSW \left(\frac{1}{n_{1}} + \frac{1}{n_{3}}\right) \right)}} = \frac{233.3333-155.625}{\sqrt{\left(1073.794 \left(\frac{1}{6} + \frac{1}{8}\right) \right)}} = 4.3910$ Use $df_{W}$ = 19 to find the p-value. Since the p-value = (both tail areas) $\cdot$ 3 = $0.00094 < \alpha = 0.10$, we reject $H_{0}$. There is a statistically significant difference in the average price of the 1,000- and 800-watt ovens.

Compare $\mu_{2}$ and $\mu_{3}$: $H_{0}: \mu_{2} = \mu_{3}$ $H_{1}: \mu_{2} \neq \mu_{3}$ $t = \frac{\bar{x}_{2} - \bar{x}_{3}}{\sqrt{\left( MSW \left(\frac{1}{n_{2}} + \frac{1}{n_{3}}\right) \right)}} = \frac{203.125-155.625}{\sqrt{\left(1073.794 \left(\frac{1}{8} + \frac{1}{8}\right) \right)}} = 2.8991$ Use $df_{W} = 19$ to find the p-value (remember that these are always two-tail tests). Since the p-value = $0.0276 < \alpha = 0.10$, we reject $H_{0}$. There is a statistically significant difference in the average price of the 900- and 800-watt ovens.

There is a chance that after we multiply the area by the number of comparisons, the p-value would be greater than one. However, since the p-value is a probability, we would cap the probability at one. This is a lot of math! The calculators and Excel do not have post-hoc pairwise comparison shortcuts, but we can use the statistical software called SPSS to get the following results. We will look specifically at interpreting the SPSS output for Example 11-4.
The first table, labeled "Descriptives", gives descriptive statistics; the second table is the ANOVA table, and note that the p-value is in the column labeled Sig. The Multiple Comparisons table is where we want to look. There are repetitive pairs in the last table, just in a different order. The first two rows in Figure 11-4 are comparing group 1 with groups 2 and 3. If we follow the first row across under the Sig. column, this gives the p-value = 0.312 for comparing the 1,000- and 900-watt ovens. The second row in Figure 11-4 compares the 1,000- and 800-watt ovens, p-value = 0.001. The third row in Figure 11-4 compares the 900- and 1,000-watt ovens in the reverse order from the first row; note that the difference in the means is negative but the p-value is the same. The fourth row in Figure 11-4 compares the 900- and 800-watt ovens, p-value = 0.028. The last set of rows in Figure 11-4 are again repetitive and give the 800-watt oven compared to the 900- and 1,000-watt ovens. Keep in mind that post-hoc is defined as occurring after an event. A post-hoc test is done after an ANOVA test shows that there is a statistically significant difference. You should get at least one comparison with a result of "reject $H_{0}$", since you only do the Bonferroni test if you reject $H_{0}$ for the ANOVA.
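The adjusted p-values in the Multiple Comparisons table can also be reproduced outside of SPSS. Here is a minimal sketch in Python using scipy (assumed to be available; it is not part of this text); the means, sample sizes, MSW and df_W come from the oven example above.

from scipy import stats

msw, df_w, m = 1073.794, 19, 3
groups = {"1000W": (233.3333, 6), "900W": (203.125, 8), "800W": (155.625, 8)}

for a, b in [("1000W", "900W"), ("1000W", "800W"), ("900W", "800W")]:
    (xa, na), (xb, nb) = groups[a], groups[b]
    t = (xa - xb) / (msw * (1 / na + 1 / nb)) ** 0.5
    p = min(2 * stats.t.sf(abs(t), df_w) * m, 1.0)   # two-tail area times m, capped at 1
    print(a, "vs", b, round(t, 4), round(p, 3))      # roughly 0.312, 0.001, 0.028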
Two-way analysis of variance (two-way ANOVA) is an extension of one-way ANOVA. It can be used to compare the means of two independent variables or factors from two or more populations. It can also be used to test for interaction between the two independent variables. We will not be doing the sum of squares calculations by hand. These numbers will be given to you in a partially filled out ANOVA table or an Excel output will be given in the problem. There are three sets of hypotheses for testing the equality of $k$ population means from two independent variables, and to test for interaction between the two variables (two-way ANOVA): Row Effect (Factor A): $H_{0}:$ The row variable has no effect on the average ___________________. $H_{1}:$ The row variable has an effect on the average ___________________. Column Effect (Factor B): $H_{0}$: The column variable has no effect on the average ___________________. $H_{1}$: The column variable has an effect on the average ___________________. Interaction Effect (A×B): $H_{0}:$ There is no interaction effect between row variable and column variable on the average ___________________. $H_{1}:$ There is an interaction effect between row variable and column variable on the average ___________________. These ANOVA tests are always right-tailed F-tests. The F-test (for two-way ANOVA) is a statistical test for testing the equality of k independent quantitative population means from two nominal variables, called factors. The two-way ANOVA also tests for interaction between the two factors. Assumptions: 1. The populations are normal. 2. The observations are independent. 3. The variances from each population are equal. 4. The groups must have equal sample sizes. The formulas for the F-test statistics are: Factor 1: $F_{A} = \frac{MS_{A}}{MSE}$ with $df_{A} = a-1$ and $df_{\text{E}} = ab(n-1)$ Factor 2: $F_{B} = \frac{MS_{B}}{MSE}$ with $df_{B} = b-1$ and $df_{\text{E}} = ab(n-1)$ Interaction: $F_{A \times B} = \frac{MS_{A \times B}}{MSE}$ with $df_{A \times B} = (a-1)(b-1)$ and $df_{\text{E}} = ab(n-1)$ Where: $SS_{\text{A}}$ = sum of squares for factor A, the row variable $SS_{\text{B}}$ = sum of squares for factor B, the column variable $SS_{\text{A} \times \text{B}}$ = sum of squares for interaction between factor A and B $SSE$ = sum of squares of error, also called sum of squares within groups $a$ = number of levels of factor A $b$ = number of levels of factor B $n$ = number of subjects in each group It will be helpful to make a table. Figure 11-5 is called a two-way ANOVA table. Since the computations for the two-way ANOVA are tedious, this text will not cover performing the calculations by hand. Instead, we will concentrate on completing and interpreting the two-way ANOVA tables. A farmer wants to see if there is a difference in the average height for two new strains of hemp plants. They believe there also may be some interaction with different soil types so they plant 5 hemp plants of each strain in 4 types of soil: sandy, clay, loam and silt. At $\alpha$ = 0.01, analyze the data shown, using a two-way ANOVA as started below in Figure 11-6. See below for raw data. Rough drawings from memory were futile. He didn't even know how long it had been, beyond Ford Prefect's rough guess at the time that it was "a couple of million years" and he simply didn't have the maths. Still, in the end he worked out a method which would at least produce a result. 
He decided not to mind the fact that with the extraordinary jumble of rules of thumb, wild approximations and arcane guesswork he was using he would be lucky to hit the right galaxy, he just went ahead and got a result. He would call it the right result. Who would know? As it happened, through the myriad and unfathomable chances of fate, he got it exactly right, though he of course would never know that. He just went up to London and knocked on the appropriate door. "Oh. I thought you were going to phone me first." (Adams, 2002)

11.04: Chapter 11 Formulas

One-Way ANOVA
$H_{0}: \mu_{1} = \mu_{2} = \mu_{3} = \ldots = \mu_{k}$
$H_{1}:$ At least one mean is different.

Source, $SS$ = Sum of Squares, $df$, $MS$ = Mean Square, F
Between (Factor): $\sum n_{i} \left(\bar{x}_{i} - \bar{x}_{GM}\right)^{2}$, $k-1$, $MSB = \frac{SSB}{k-1}$, $F = \frac{MSB}{MSW}$
Within (Error): $\sum \left(n_{i} - 1\right) s_{i}^{2}$, $N-k$, $MSW = \frac{SSW}{N-k}$
Total: SST, $N-1$

$\bar{x}_{i}$ = sample mean from the $i^{th}$ group, $n_{i}$ = sample size of the $i^{th}$ group, $k$ = number of groups, $s_{i}^{2}$ = sample variance from the $i^{th}$ group, $N = n_{1} + n_{2} + \ldots + n_{k}$, $\bar{x}_{GM} = \frac{\sum x_{i}}{N}$

Bonferroni Test
$H_{0}: \mu_{i} = \mu_{j}$ $H_{1}: \mu_{i} \neq \mu_{j}$
Bonferroni test statistic: $t = \dfrac{\bar{x}_{i} - \bar{x}_{j}}{\sqrt{ \left(MSW \left(\frac{1}{n_{i}} + \frac{1}{n_{j}}\right) \right)}}$
Multiply the p-value by $m = {}_k C_{2}$; divide the tail area for the critical value by $m = {}_k C_{2}$.

Two-Way ANOVA
Row Effect (Factor A): $H_{0}:$ The row variable has no effect on the average ___________________. $H_{1}:$ The row variable has an effect on the average ___________________.
Column Effect (Factor B): $H_{0}$: The column variable has no effect on the average ___________________. $H_{1}$: The column variable has an effect on the average ___________________.
Interaction Effect (A×B): $H_{0}:$ There is no interaction effect between row variable and column variable on the average ___________________. $H_{1}:$ There is an interaction effect between row variable and column variable on the average ___________________.

Source, $SS$, $df$, $MS$, F
$A$ (row factor): $SS_{A}$, $a-1$, $MS_{A} = \frac{SS_{A}}{df_{A}}$, $F_{A} = \frac{MS_{A}}{MSE}$
$B$ (column factor): $SS_{B}$, $b-1$, $MS_{B} = \frac{SS_{B}}{df_{B}}$, $F_{B} = \frac{MS_{B}}{MSE}$
$A \times B$ (interaction): $SS_{A \times B}$, $(a-1)(b-1)$, $MS_{A \times B} = \frac{SS_{A \times B}}{df_{A \times B}}$, $F_{A \times B} = \frac{MS_{A \times B}}{MSE}$
Error (within): $SSE$, $ab(n-1)$, $MSE = \frac{SSE}{df_{E}}$
Total: $SST$, $N-1$
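As a companion to the two-way ANOVA table above, the sketch below fills in the MS, F and right-tailed p-value columns from given sums of squares. It assumes Python with scipy, and the design sizes and SS values are hypothetical placeholders, not data from this text.

from scipy import stats

a, b, n = 2, 4, 5                                           # hypothetical: 2 x 4 design, 5 per cell
ss = {"A": 12.0, "B": 30.0, "AxB": 8.0, "Error": 90.0}      # hypothetical sums of squares
df = {"A": a - 1, "B": b - 1, "AxB": (a - 1) * (b - 1), "Error": a * b * (n - 1)}

mse = ss["Error"] / df["Error"]
for src in ("A", "B", "AxB"):
    ms = ss[src] / df[src]
    F = ms / mse
    p = stats.f.sf(F, df[src], df["Error"])                 # always a right-tailed F-test
    print(src, "df =", df[src], "MS =", round(ms, 3), "F =", round(F, 3), "p-value =", round(p, 4))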
Exercises 1. What does the acronym ANOVA stand for? a) Analysis of Variance b) Analysis of Means c) Analyzing Various Means d) Analysis of Variance e) Anticipatory Nausea and Vomiting f) Average Noise Variance 2. What would the test statistic equal if MSB = MSW? a) -1 b) 0 c) 1 d) 4 e) 1.96 3. A researcher would like to test to see if there is a difference in the average profit between 5 different stores. Which are the correct hypotheses for an ANOVA? a) $H_{0}: \mu_{1} = \mu_{2} = \mu_{3}$ $H_{1}:$ At least one mean is different. b) $H_{0}: \mu_{1} = \mu_{2} = \mu_{3} = \mu_{4} = \mu_{5}$ $H_{1}: \mu_{1} \neq \mu_{2} \neq \mu_{3} \neq \mu_{4} \neq \mu_{5}$ c) $H_{0}: \mu_{1} \neq \mu_{2} \neq \mu_{3} \neq \mu_{4} \neq \mu_{5}$ $H_{1}: \mu_{1} = \mu_{2} = \mu_{3} = \mu_{4} = \mu_{5}$ d) $H_{0}: \sigma_{B}^{2} \neq \sigma_{W}^{2}$ $H_{1}: \sigma_{B}^{2} = \sigma_{W}^{2}$ e) $H_{0}: \mu_{1} = \mu_{2} = \mu_{3} = \mu_{4} = \mu_{5}$ $H_{1}:$ At least one mean is different 4. An ANOVA was run to test to see if there was a significant difference in the average cost between three different brands of snow skis. Random samples for each of the three brands were collected from different stores. Assume the costs are normally distributed. At $\alpha$ = 0.05, test to see if there is a difference in the means. State the hypotheses, fill in the ANOVA table to find the test statistic, compute the p-value, state the decision and summary. 5. An ANOVA test was run for the per-pupil costs for private school tuition for three counties in the Portland, Oregon, metro area. Assume tuition costs are normally distributed. At $\alpha$ = 0.05, test to see if there is a difference in the means. a) State the hypotheses. b) Fill out the ANOVA table to find the test statistic. c) Compute the p-value. d) State the correct decision and summary. 6. Cancer is a terrible disease. Surviving may depend on the type of cancer the person has. To see if the mean survival time for several types of cancer are different, data was collected on the survival time in days of patients with one of these cancers in advanced stage. The data is from "Cancer survival story," 2013. (Please realize that this data is from 1978. There have been many advances in cancer treatment, so do not use this data as an indication of survival rates from these cancers.) Does the data indicate that there is a difference in the mean survival time for these types of cancer? Use a 1% significance level. a) State the hypotheses. b) Fill in an ANOVA table to find the test statistic. c) Compute the p-value. d) State the correct decision and summary. 7. What does the Bonferroni comparison test for? a) The analysis of between and within variance. b) The difference between all the means at once. c) The difference between two pairs of mean. d) The sample size between the groups. 8. True or false: The Bonferroni test should only be done when you reject the null hypothesis F-test? 9. A manufacturing company wants to see if there is a significant difference in three types of plastic for a new product. They randomly sample prices for each of the three types of plastic and run an ANOVA. Use $\alpha$ = 0.05 to see if there is a statistically significant difference in the mean prices. Part of the computer output is shown below. a) State the hypotheses. b) Fill in the ANOVA table to find the test statistic. c) Compute the critical value. d) State the decision and summary. e) Which group(s) are significantly different based on the Bonferroni test? 10. 
A manager on an assembly line wants to see if they can speed up production by implementing a new switch for their conveyor belts. There are four switches to choose from and replacing all the switches along the assembly line will be quite costly. They test out each of the four designs and record assembly times. Use $\alpha$ = 0.05 to see if there is a statistically significant difference in the mean times. a) State the hypotheses. b) Fill in the ANOVA table to find the test statistic. c) Compute the critical value. d) State the correct decision and summary. e) Should a post-hoc Bonferroni test be done? Why? i. No, since the p-value > $\alpha$ there is no difference in the means. ii. Yes, we should always perform a post-hoc test after an ANOVA iii. No, since we already know that there is a difference in the means. iv. Yes, since the p-value < $\alpha$ we need to see where the differences are. f) All four new switches are significantly faster than the current switch method. Of the four new types of switches, switch 3 cost the least amount to implement. Which of the 4, if any, should the manager choose? i. The manager should stay with the old switch method since we failed to reject the null hypothesis. ii. The manager should switch to any of the four new switches since we rejected the null hypothesis. iii. The manager should randomly pick from switch types 1, 2 or 4. iv. Since there is no statistically significant difference in the mean time they should choose switch 3 since it is the least expensive. For exercises 11-16, Assume that all distributions are normal with equal population standard deviations, and the data was collected independently and randomly. Show all 5 steps for hypothesis testing. If there is a significant difference is found, run a Bonferroni test to see which means are different. a) State the hypotheses. b) Compute the test statistic. c) Compute the critical value or p-value. d) State the decision. e) Write a summary. 11. Is a statistics class's delivery type a factor in how well students do on the final exam? The table below shows the average percent on final exams from several randomly selected classes that used the different delivery types. Use a level of significance of $\alpha$ = 0.10. 12. The dependent variable is the number of times a photo gets a like on social media. The independent variable is the subject matter, selfie or people, landscape, meme, or a cute animal. The researcher is exploring whether the type of photo makes a difference on the mean number of likes. A random sample of photos were taken from social media. Test to see if there is a significant difference in the means using $\alpha$ = 0.05. 13. The dependent variable is movie ticket prices, and the groups are the geographical regions where the theaters are located (suburban, rural, urban). A random sample of ticket prices were taken from randomly chosen states. Test to see if there is a significant difference in the means using $\alpha$ = 0.05. 14. Recent research indicates that the effectiveness of antidepressant medication is directly related to the severity of the depression (Khan, Brodhead, Kolts & Brown, 2005). Based on pre-treatment depression scores, patients were divided into four groups based on their level of depression. After receiving the antidepressant medication, depression scores were measured again and the amount of improvement was recorded for each patient. The following data are similar to the results of the study. Use a significance level of $\alpha$ = 0.05. 
Test to see if there is a difference in the mean scores. 15. An ANOVA was run to test to see if there was a significant difference in the average cost between three different types of fabric for a new clothing company. Random samples for each of the three fabric types was collected from different manufacturers. At $\alpha$ = 0.10, run an ANOVA test to see if there is a difference in the means. 16. Three students, Linda, Tuan, and Javier, are given laboratory rats for a nutritional experiment. Each rat's weight is recorded in grams. Linda feeds her 9 rats Formula A, Tuan feeds his 9 rats Formula B, and Javier feeds his 9 rats Formula C. At the end of a specified time-period, each rat is weighed again, and the net gain in grams is recorded. Using a significance level of 0.10, test to see if there is a difference in the mean weight gain for the three formulas. 17. For a two-way ANOVA, a row factor has 3 different levels, a column factor has 4 different levels. There are 15 data values in each group. Find the following. a) The degrees of freedom for the row effect. b) The degrees of freedom for the column effect. c) The degrees of freedom for the interaction effect. 18. Fill out the following two-way ANOVA table. 19. Fill out the following two-way ANOVA table. 20. Fill out the following two-way ANOVA table. 21. Fill out the following two-way ANOVA table. 22. Fill out the following two-way ANOVA table. 23. A professor is curious if class size and format for which homework is administered has an impact on students’ test grades. In a particular semester, she samples 4 students in each category below and records their grade on the department-wide final exam. The data are recorded below. Assume the variables are normally distributed. Run a two-way ANOVA using $\alpha$ = 0.05. 24. A study was conducted to observe the impact of young adults eating a diet that is high in healthy fats. A random sample of young adults was instructed to eat a particular menu for a month. They were then tested to check a combination of recall skills, reflexes, and physical fitness and scored from 1-10 based on performance. They were divided into two groups, one eating a menu that is high in healthy fats, and the other low in healthy fats. They were also divided based on age. The data are recorded below. Assume the variables are normally distributed. Run a two-way ANOVA using $\alpha$ = 0.05. 25. A door-to-door sales company sells three types of vacuums. The company manager is interested to find out if the type of vacuum sold has an effect on whether a sale is made, as well as what time of day the sale is made. She samples 36 sales representatives and divides them into the following categories, then records their sales (in hundreds of dollars) for a week. Assume the variables are normally distributed. Run a two-way ANOVA using $\alpha$ = 0.05. 26. A customer shopping for a used car is curious if the price of a vehicle varies based on type of vehicle and location of used car dealership. She samples 5 vehicles in each category below and records the price of each vehicle. Each vehicle is in similar shape in regards to age, mileage, and condition. A two-way ANOVA test was run and the information from the test is summarized in the table below. State all 3 hypotheses, critical values, decisions and summaries using $\alpha$ = 0.05. 27. A sample of patients are tested for cholesterol level and divided into categories by age and by location of residence in the United States. The data are recorded below. 
Assume the variables are normally distributed. A two-way ANOVA test was run and the information from the test is summarized in the table below. State all 3 hypotheses, critical values, decisions and summaries using $\alpha$ = 0.05.

28. The employees at a local nursery swear by a certain variety of tomato seed and a certain variety of fertilizer. To test their instincts, they take a sample of 3 varieties of tomato seed and 4 varieties of fertilizer and find the following yield of tomatoes from each tomato plant. Assume the variables are normally distributed. Run a two-way ANOVA using $\alpha$ = 0.05.

29. An obstetrician feels that her patients who are taller and leaner before becoming pregnant typically have quicker deliveries. She samples 3 women in each of the following categories of Height and Body Mass Index and records the time they spent in the pushing phase of labor in minutes. All women in the sample had a natural vaginal delivery and it was their first childbirth. The data are recorded below. Assume the variables are normally distributed. Run a two-way ANOVA using $\alpha$ = 0.05.

Solutions to Odd-Numbered Exercises

1. a

3. e

5. $H_{0}: \mu_{1} = \mu_{2} = \mu_{3}$; $H_{1}:$ At least one mean is different. F = 0.5902; p-value = 0.5605. Do not reject $H_{0}$. There is not enough evidence to support the claim that there is a difference in the mean per-pupil costs for private school tuition for three counties in the Portland, Oregon, metro area.

7. c

9. a) $H_{0}: \mu_{A} = \mu_{B} = \mu_{C}$; $H_{1}:$ At least one mean is different. b) $F = 10.64046$ c) $F_{\alpha} = 3.0781$ d) Reject $H_{0}$. There is enough evidence to support the claim that there is a difference in the mean price of the three types of plastic. e) $H_{0}: \mu_{A} = \mu_{B}$; $H_{1}: \mu_{A} \neq \mu_{B}$; p-value = 0; reject $H_{0}$. There is a significant difference in price between plastics A and B. $H_{0}: \mu_{A} = \mu_{C}$; $H_{1}: \mu_{A} \neq \mu_{C}$; p-value = 1; do not reject $H_{0}$. There is not a significant difference in price between plastics A and C. $H_{0}: \mu_{B} = \mu_{C}$; $H_{1}: \mu_{B} \neq \mu_{C}$; p-value = 0.003; reject $H_{0}$. There is a significant difference in price between plastics B and C.

11. $H_{0}: \mu_{1} = \mu_{2} = \mu_{3}$; $H_{1}:$ At least one mean is different. F = 2.7121; p-value = 0.0896; reject $H_{0}$. There is sufficient evidence to support the claim that course delivery type is a factor in final exam score.

13. $H_{0}: \mu_{1} = \mu_{2} = \mu_{3}$; $H_{1}:$ At least one mean is different. F = 2.5459; p-value = 0.0904; do not reject $H_{0}$. There is not enough evidence to support the claim that there is a difference in the mean movie ticket prices by geographical regions.

15. $H_{0}: \mu_{A} = \mu_{B} = \mu_{C}$; $H_{1}:$ At least one mean is different. F = 2.895; p-value = 0.06; reject $H_{0}$. There is sufficient evidence to support the claim that there is a difference in the mean cost between three different types of fabric. $H_{0}: \mu_{A} = \mu_{B}$; $H_{1}: \mu_{A} \neq \mu_{B}$; p-value = 1; do not reject $H_{0}$. There is not a significant difference in the mean cost of fabrics A and B. $H_{0}: \mu_{A} = \mu_{C}$; $H_{1}: \mu_{A} \neq \mu_{C}$; p-value = 0.222; do not reject $H_{0}$. There is not a significant difference in the mean cost of fabrics A and C. $H_{0}: \mu_{B} = \mu_{C}$; $H_{1}: \mu_{B} \neq \mu_{C}$; p-value = 0.07; reject $H_{0}$. There is a significant difference in the mean cost of fabrics B and C.

17.
a) $df_{\text{A}} = 2, df_{\text{E}} = 168$ b) $df_{\text{B}} = 3, df_{\text{E}} = 168$ c) $df_{\text{A} \times \text{B}} = 6, df_{\text{E}} = 168$

19.

21.

23. $H_{0}:$ The format of the homework (paper vs. online) has no effect on the mean test grade. $H_{1}:$ The format of the homework (paper vs. online) has an effect on the mean test grade. F = 0.7185; CV = F.INV.RT(0.05,1,12) = 4.7472; do not reject $H_{0}$. There is not enough evidence to support the claim that the format of the homework (paper vs. online) has an effect on the mean test grade. $H_{0}:$ The class size has no effect on the mean test grade. $H_{1}:$ The class size has an effect on the mean test grade. F = 5.7064; CV = F.INV.RT(0.05,1,12) = 4.7472; reject $H_{0}$. There is enough evidence to support the claim that the class size has an effect on the mean test grade. $H_{0}:$ There is no interaction effect between the format of the homework (paper vs. online) and the class size on the mean test grade. $H_{1}:$ There is an interaction effect between the format of the homework (paper vs. online) and the class size on the mean test grade. F = 0.8082; CV = F.INV.RT(0.05,1,12) = 4.7472; do not reject $H_{0}$. There is not enough evidence to support the claim that there is an interaction effect between the format of the homework (paper vs. online) and the class size on the mean test grade.

25. $H_{0}:$ The time of day has no effect on the mean number of vacuum sales. $H_{1}:$ The time of day has an effect on the mean number of vacuum sales. F = 4.4179; CV = F.INV.RT(0.05,2,27) = 3.3541; reject $H_{0}$. There is enough evidence to support the claim that the time of day has an effect on the mean number of vacuum sales. $H_{0}:$ The type of vacuum has no effect on the mean number of vacuum sales. $H_{1}:$ The type of vacuum has an effect on the mean number of vacuum sales. F = 27.4172; CV = F.INV.RT(0.05,2,27) = 3.3541; reject $H_{0}$. There is enough evidence to support the claim that the type of vacuum has an effect on the mean number of vacuum sales. $H_{0}:$ There is no interaction effect between time of day and type of vacuum on the mean number of vacuum sales. $H_{1}:$ There is an interaction effect between time of day and type of vacuum on the mean number of vacuum sales. F = 0.9021; CV = F.INV.RT(0.05,4,27) = 2.7278; do not reject $H_{0}$. There is not enough evidence to support the claim that there is an interaction effect between time of day and type of vacuum on the mean number of vacuum sales.

27. $H_{0}:$ Age has no effect on the mean cholesterol level. $H_{1}:$ Age has an effect on the mean cholesterol level. F = 7.863; CV = F.INV.RT(0.05,2,24) = 3.4028; reject $H_{0}$. There is enough evidence to support the claim that age has an effect on the mean cholesterol level. $H_{0}:$ Location has no effect on the mean cholesterol level. $H_{1}:$ Location has an effect on the mean cholesterol level. F = 5.709; CV = F.INV.RT(0.05,3,24) = 3.0087; reject $H_{0}$. There is enough evidence to support the claim that the location has an effect on the mean cholesterol level. $H_{0}:$ There is no interaction effect between age and location on the mean cholesterol level. $H_{1}:$ There is an interaction effect between age and location on the mean cholesterol level. F = 1.455; CV = F.INV.RT(0.05,6,24) = 2.5082; do not reject $H_{0}$. There is not enough evidence to support the claim that there is an interaction effect between age and location on the mean cholesterol level.

29. $H_{0}:$ Height has no effect on the mean delivery time.
$H_{1}:$ Height has an effect on the mean delivery time. F = 3.2798; CV = F.INV.RT(0.05,2,19) = 3.5219; do not reject $H_{0}$. There is not enough evidence to support the claim that height has an effect on the mean delivery time. $H_{0}:$ BMI has no effect on the mean delivery time. $H_{1}:$ BMI has an effect on the mean delivery time. F = 1.3763; CV = F.INV.RT(0.05,2,19) = 3.5219; do not reject $H_{0}$. There is not enough evidence to support the claim that BMI has an effect on the mean delivery time. $H_{0}:$ There is no interaction effect between the height and BMI on the mean delivery time. $H_{1}:$ There is an interaction effect between the height and BMI on the mean delivery time. F = 0.1125; CV = F.INV.RT(0.05,4,19) = 2.8951; do not reject $H_{0}$. There is not enough evidence to support the claim that there is an interaction effect between the height and BMI on the mean delivery time.
• 12.1: Correlation Correlation as a means of measuring the relationship between variables. Subsections cover how to predict correlation from scatterplots of data, and how to perform a hypothesis test to determine if there is a statistically significant correlation between the independent and the dependent variables.
• 12.2: Simple Linear Regression A linear regression is a straight line that describes how the values of a response variable \(y\) change as the predictor variable \(x\) changes. It should only be performed if you observe visually that there is a linear pattern in the scatterplot and that there is a statistically significant correlation between the independent and dependent variables. Subsections discuss residuals and the coefficient of determination, how to use the least squares equation for prediction, and how to identify outliers.
• 12.3: Multiple Linear Regression A multiple linear regression line describes how two or more predictor variables affect the response variable \(y\). When we add more predictor variables into the model, this inflates the coefficient of determination, \(R^{2}\), so we need to adjust for this inflation.
• 12.4: Chapter 12 Formulas
• 12.5: Chapter 12 Exercises

12: Correlation and Regression

We are often interested in the relationship between two variables. This chapter covers how to determine whether a linear relationship exists between two quantitative variables and how to make predictions for a population; for instance, the relationship between the number of hours of study time and an exam score, or smoking and heart disease. A predictor variable (also called the independent or explanatory variable; usually we use the letter \(x\)) explains or causes changes in the response variable. The predictor variable can be manipulated or changed by the researcher. A response variable (also called the dependent variable; usually we use the letter \(y\)) measures the outcome of a study. The different outcomes for a dependent variable are measured or observed by the researcher. For instance, suppose we are interested in how much time spent studying affects the scores on an exam. In this study, study time is the predictor variable, and exam score is the response variable. In data from an experiment, it is much easier to know which variable we should use for the independent and dependent variables. This can be harder to distinguish in observational data. Think of the dependent variable as the variable that you are trying to learn about. If we were observing the relationship between unemployment rate and economic growth rate, it may not be clear which variable should be \(x\) and which should be \(y\). Do we want to predict the unemployment rate or the economic growth rate? One should never jump to cause-and-effect reasoning with observational data. Just because there is a strong relationship between unemployment rate and economic growth rate does not mean that one causes the other to change directly. There may be many other contributing factors to both of these rates changing at the same time, such as retirements or pandemics.

12.01: Correlation

A scatterplot shows the relationship between two quantitative variables measured on the same individuals. • The predictor variable is labeled on the horizontal or $x$-axis. • The response variable is labeled on the vertical or $y$-axis. How to Interpret a Scatterplot: • Look for the overall pattern and for deviations from that pattern. • Look for outliers, individual values that fall outside the overall pattern of the relationship.
• A positive linear relation results when larger values of one variable are associated with larger values of the other. • A negative linear relation results when larger values of one variable are associated with smaller values of the other. • A scatterplot has no association if no obvious linear pattern is present. Use technology to make a scatterplot for the following sample data set: Hours Studied for Exam 20 16 20 18 17 16 15 17 15 16 15 17 16 17 14 Grade on Exam 89 72 93 84 81 75 70 82 69 83 80 83 81 84 76 Solution TI-84: On the TI-84 press the [STAT] key and then the [EDIT] function; type the $x$ values into L1 and the $y$ values into L2. Press [Y=] and clear any equations that are in the $y$-editor. Press [2nd] then [STAT PLOT] (above the [Y=] button.) Press 4 or scroll down to PlotsOff and press enter. Press [ENTER] once more to turn off all of the existing plots. Press [2nd], then [STAT PLOT], then press 1 or hit [ENTER] and select Plot1. Select On and press [ENTER] to activate plot 1. For “Type” select the first graph that looks like a scatterplot and press [ENTER]. For “Xlist” enter the list where your explanatory variable data is stored. For our example, enter L1. For “Ylist” enter the list where your response variable data is stored. For our example, enter L2. Press [ZOOM] then press 9 or scroll down to ZoomStat and press [ENTER]. Press Trace and you can use your arrow keys to see the coordinates of each point. TI-89: Press [♦] then [F1] (the Y=) and clear any equations that are in the $y$-editor. Open the Stats/List Editor. Enter all $x$-values in one list. Enter all corresponding $y$-values in a second list. Double check that the data you entered is correct. In the Stats/List Editor select F2 for the Plots menu. Use cursor keys to highlight 1:Plot Setup. Make sure that the other graphs are turned off by pressing F4 button to remove the check marks. Under “Plot 1” press F1 for the Define menu. In the “Plot Type” menu select “Scatter.” In the “x” space type in the name of your list with the x variable without space: for our example, “list1.” In the “y” space type in the name of your list with the y variable without space: for our example, “list2.” Press [ENTER] twice and you will be returned to the Plot Setup menu. Press F5 ZoomData to display the graph. Press F3 Trace and use the arrow keys to scroll along the different points. Excel: Copy the data over to Excel in either two adjacent rows or columns. Select the data, select the Insert tab, then select Scatter, select the first scatter plot. Then add labels for your axis and change the title to produce the completed scatter plot. Correlation Coefficient The sample correlation coefficient measures the direction and strength of the linear relationship between two quantitative variables. There are several different types of correlations. We will be using the Pearson Product Moment Correlation Coefficient (PPMCC). The PPMCC is named after biostatistician Karl Pearson. We will just use the lower-case $r$ for short when we want to find the correlation coefficient, and the Greek letter $\rho$, pronounced “rho,” (rhymes with sew) when referring to the population correlation coefficient. Interpreting the Correlation: • A positive $r$ indicates a positive association (positive linear slope). • A negative $r$ indicates a negative association (negative linear slope). • $r$ is always between $-1$ and $1$, inclusive. • If $r$ is close to $1$ or $-1$, there is a strong linear relationship between $x$ and $y$. 
• If $r$ is close to $0$, there is a weak linear relationship between $x$ and $y$. There may be a non-linear relation or there may be no relation at all. • Like the mean, $r$ is strongly affected by outliers. Figure 12-1 gives examples of correlations with their corresponding scatterplots. When you have a correlation that is very close to $-1$ or $1$, then the points on the scatter plot will line up in an almost perfect line. The closer $r$ gets to $0$, the more scattered your points become. Take a moment and see if you can guess the approximate value of $r$ for the scatter plots below. Solution Scatterplot A: $r = 0.98$, Scatterplot B: $r = 0.85$, Scatterplot C: $r = -0.85$. When $r$ is equal to $-1$ or $1$ all the dots in the scatterplot line up in a straight line. As the points disperse, $r$ gets closer to zero. The correlation tells the direction of a linear relationship only. It does not tell you what the slope of the line is, nor does it recognize nonlinear relationships. For instance, in Figure 12-2, there are three scatterplots overlaid on the same set of axes. All three data sets would have $r = 1$ even though they all have different slopes. For the next example in Figure 12-3, $r = 0$ would indicate no linear relationship; however, there is clearly a non-linear pattern with the data. Figure 12-4 shows a correlation $r = 0.874$, which is pretty close to one, indicating a strong linear relationship. However, there is an outlier, called a leverage point, which is inflating the value of the slope. If you remove the outlier then $r = 0$, and there is no up or down trend to the data. Calculating Correlation To calculate the correlation coefficient by hand we would use the following formula. Sample Correlation Coefficient $r = \frac{\sum \left( \left(x_{i} - \bar{x}\right) \left(y_{i} - \bar{y}\right) \right)}{\sqrt{ \left( \left(\sum \left(x_{i} - \bar{x}\right)^{2}\right) \left(\sum \left(y_{i} - \bar{y}\right)^{2}\right) \right)} } = \frac{SS_{xy}}{\sqrt{ \left(SS_{xx} \cdot SS_{yy}\right) }}$ Instead of doing all of these sums by hand we can use the output from summary statistics. Recall that the formula for a variance of a sample is $s_{x}^{2} = \frac{\sum \left(x_{i} - \bar{x}\right)^{2}}{n-1}$. If we were to multiply both sides by the degrees of freedom, we would get $\sum \left(x_{i} - \bar{x}\right)^{2} = (n-1) s_{x}^{2}$. We use these sums of squares $\sum \left(x_{i} - \bar{x}\right)^{2}$ frequently, so for shorthand we will use the notation $SS_{xx} = \sum \left(x_{i} - \bar{x}\right)^{2}$. The same would hold true for the $y$ variable; just changing the letter, the variance of $y$ would be $s_{y}^{2} = \frac{\sum \left(y_{i} - \bar{y}\right)^{2}}{n-1}$, therefore $SS_{yy} = (n-1) s_{y}^{2}$. The numerator of the correlation formula takes the horizontal distance of each data point from the mean of the $x$ values times the vertical distance of each point from the mean of the $y$ values, and adds up these products. This is time-consuming to find, so we will use an algebraically equivalent formula $\sum \left(\left(x_{i} - \bar{x}\right) \left(y_{i} - \bar{y}\right) \right) = \sum (xy) - n \cdot \bar{x} \bar{y}$, and for short we will use the notation $SS_{xy} = \sum (xy) - n \cdot \bar{x} \bar{y}$. To start each problem, use descriptive statistics to find the sum of squares. $SS_{xx} = (n-1) s_{x}^{2}$ $SS_{yy} = (n-1) s_{y}^{2}$ $SS_{xy} = \sum (xy) - n \cdot \bar{x} \bar{y}$ Use the following data to calculate the correlation coefficient.
Hours Studied for Exam 20 16 20 18 17 16 15 17 15 16 15 17 16 17 14 Grade on Exam 89 72 93 84 81 75 70 82 69 83 80 83 81 84 76

Solution

We could show all the work by hand using the shortcut formulas. On the TI-83 press the [STAT] key and then the [EDIT] function, type the $x$ values into L1 and the $y$ values into L2. Press the [STAT] key again and arrow over to highlight [CALC], select 2-Var Stats, then press [ENTER]. This will return the descriptive stats. The TI calculator can run descriptive statistics and quickly get everything we need to find the sum of squares. Go to STAT > CALC > 2-Var Stats. For TI-83, you may need to enter your list names separated by a comma, for example 2-Var Stats L1,L2 then hit enter. On the TI-89, open the Stats/List Editor. Enter all $x$-values in one list. Enter all corresponding $y$-values in a second list. Press F4, then select 2-Var Stats, then press [ENTER]. This will return the descriptive stats. Use the down arrow to see everything. Once you do this the statistics are stored in your calculator so you can use the VARS key, go to Statistics, then select the standard deviation for $x$, and repeat for the $y$-variable. This will reduce rounding errors by using exact values. For the $SS_{xy}$ you can also use the stored sum of $xy$ and means. This gives the following results: $SS_{xx} = (n-1) s_{x}^{2} = (15-1) \cdot 1.723783215^{2} = 41.6$ $SS_{yy} = (n-1) s_{y}^{2} = (15-1) \cdot 6.717425811^{2} = 631.7333$ $SS_{xy} = \sum (xy) - n \bar{x} \bar{y} = 20087 - (15 \cdot 16.6 \cdot 80.133333) = 133.8$ Note that both $SS_{xx}$ and $SS_{yy}$ will always be positive, but $SS_{xy}$ could be negative or positive. For the TI-89, you will see the sum of squares at the very bottom of the descriptive statistics: $\sum \left(x - \bar{x}\right)^{2} = 41.6$ and $\sum \left(y - \bar{y}\right)^{2} = 631.7333$. To find the correlation, substitute the three sums of squares into the formula to get: $r = \frac{SS_{xy}}{\sqrt{ \left(SS_{xx} \cdot SS_{yy}\right) }}= \frac{133.8}{\sqrt{ \left(41.6 \cdot 631.7333 \right) }} = 0.8254$. Try this now on your calculator to see if you are getting your order of operations correct. For our example, $r = 0.8254$ is close to 1; therefore it looks like there is a positive linear relationship between the number of hours studying for an exam and the grade on the exam.

Most software has a built-in correlation function. TI-84: On the TI-83 press the [STAT] key and then the [EDIT] function, type the $x$ values into L1 and the $y$ values into L2. Press the [STAT] key again and arrow over to highlight [TESTS], select LinRegTTest, then press [ENTER]. The default is Xlist: L1, Ylist: L2, Freq:1, $\beta$ and $\rho: \neq 0$. Arrow down to Calculate and press the [ENTER] key. Scroll down to the bottom until you see $r$. TI-89: On the TI-89, open the Stats/List Editor. Enter all $x$-values in one list. Enter all corresponding $y$-values in a second list. Press F6, then select LinRegTTest, then press [ENTER]. Scroll down to the bottom of the output to see $r$. Excel: r = CORREL(array1,array2) = CORREL(B1:P1,B2:P2) = 0.8254 When is a correlation statistically significant? The next subsection shows how to run a hypothesis test for correlations.
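The same sums of squares and correlation can be checked in Python with numpy, if you have it available (a supplement, not the text's method). The arrays hold the hours-studied and exam-grade data from the example.

import numpy as np

hours = np.array([20, 16, 20, 18, 17, 16, 15, 17, 15, 16, 15, 17, 16, 17, 14])
grade = np.array([89, 72, 93, 84, 81, 75, 70, 82, 69, 83, 80, 83, 81, 84, 76])

n = len(hours)
ss_xx = (n - 1) * hours.var(ddof=1)                              # 41.6
ss_yy = (n - 1) * grade.var(ddof=1)                              # 631.7333
ss_xy = (hours * grade).sum() - n * hours.mean() * grade.mean()  # 133.8

r = ss_xy / np.sqrt(ss_xx * ss_yy)                               # 0.8254
print(round(r, 4), round(np.corrcoef(hours, grade)[0, 1], 4))    # both give the same r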
One should perform a hypothesis test to determine if there is a statistically significant correlation between the independent and the dependent variables. The population correlation coefficient $\rho$ (this is the Greek letter rho, which sounds like “row” and is not a $p$) is the correlation among all possible pairs of data values $(x, y)$ taken from a population. We will only be using the two-tailed test for a population correlation coefficient $\rho$. The hypotheses are: $H_{0}: \rho = 0$ $H_{1}: \rho \neq 0$ The null hypothesis of a two-tailed test states that there is no correlation (there is not a linear relation) between $x$ and $y$. The alternative hypothesis states that there is a significant correlation (there is a linear relation) between $x$ and $y$. The t-test is a statistical test for the correlation coefficient. It can be used when $x$ and $y$ are linearly related, the variables are random variables, and when the population of the variable $y$ is normally distributed. The formula for the t-test statistic is $t = r \sqrt{\left( \dfrac{n-2}{1-r^{2}} \right)}$. Use the t-distribution with degrees of freedom equal to $df = n - 2$. Note the $df = n - 2$ since we have two variables, $x$ and $y$.

Test to see if the correlation between hours studied and grade on the exam is statistically significant. Use $\alpha$ = 0.05. Hours Studied for Exam 20 16 20 18 17 16 15 17 15 16 15 17 16 17 14 Grade on Exam 89 72 93 84 81 75 70 82 69 83 80 83 81 84 76

Solution

The hypotheses are: $H_{0}: \rho = 0$ $H_{1}: \rho \neq 0$ Find the critical value using $df = n - 2 = 13$ and the inverse t-function for a two-tailed test with $\alpha = 0.05$ to get the critical values $\pm 2.160$. Draw the sampling distribution and label the critical values as shown in Figure 12-5. Next, find the test statistic $t = r \sqrt{\left( \frac{n-2}{1-r^{2}} \right)} = 0.8254 \sqrt{\left( \frac{13}{1 - 0.8254^{2}} \right)} = 5.271$, which is greater than 2.160 and in the rejection region. Summary: At the 5% significance level, there is enough evidence to support the claim that there is a statistically significant linear relationship (correlation) between the number of hours studied for an exam and exam scores. The p-value method could also be used to find the same decision. We will use technology shortcuts for the p-value method. The p-value = $2 \cdot \text{P}(t \geq 5.271 | H_{0} \text{ is true}) = 0.000151$, which is less than $\alpha$ = 0.05; therefore we reject $H_{0}$. Alternatively, we could test to see if the slope was equal to zero. If the slope is zero then the correlation will also be zero. The setup of a test is a little different, but we get the same results. Most software packages report the test statistic and p-value for a slope. This test is introduced in the next section.

TI-84: Enter the data in L1 and L2. Press the [STAT] key, arrow over to the [TESTS] menu, arrow down to the option [LinRegTTest] and press the [ENTER] key. The default is Xlist: L1, Ylist: L2, Freq:1, $\beta$ and $\rho: \neq 0$. Arrow down to Calculate and press the [ENTER] key. The calculator returns the t-test statistic, p-value and the correlation coefficient = $r$. Note the p-value = 0.0001513 is less than $\alpha$ = 0.05; therefore reject $H_{0}$, as there is a significant correlation. TI-89: Enter the data in List1 and List2. In the Stats/List Editor select F6 for the Tests menu. Use cursor keys to select A:LinRegTTest and press [Enter].
In the “X List” space type in the name of your list with the $x$ variable without space, for our example “list1” or use [2nd] [Var-Link] and highlight list1. In the “Y List” space type in the name of your list with the $y$ variable without space, for our example “list2” or use [2nd] [Var-Link] and highlight list2. Under the “Alternate Hyp” menu select the $\beta$ and $\rho: \neq 0$ option, which is the same as the question’s alternative hypothesis statement, then press the [ENTER] key, arrow down to [Calculate] and press the [ENTER] key. The calculator returns the t-test statistic, p-value, and the correlation = $r$. Excel: Type the data into two columns in Excel. Select the Data tab, then Data Analysis, then choose Regression and select OK. Be careful here. The second column is the $y$ range, and the first column is the $x$ range. Only check the Labels box if you highlight the labels in the input range. The output range is one cell reference where you want the output to start, and then select OK. Figure 12-6 shows the regression output. When you reject $H_{0}$, the slope is significantly different from zero. This means there is a significant relationship (correlation) between $x$ and $y$, and you can then find a regression line to use for prediction, which we explore in the next section, called Simple Linear Regression.

Correlation is Not Causation

Just because two variables are significantly correlated does not imply a cause and effect relationship. There are several relationships that are possible. It could be that $x$ causes $y$ to change. You can actually swap $x$ and $y$ in the fields and get the same $r$ value; it could just as well be that $y$ is causing $x$ to change. There could be other variables that are affecting the two variables of interest. For instance, you can usually show a high correlation between ice cream sales and home burglaries. Selling more ice cream does not “cause” burglars to rob homes. More home burglaries do not cause more ice cream sales. We would probably notice that the temperature outside may be causing both ice cream sales to increase and more people to leave their windows open. This third variable is called a lurking variable and causes both $x$ and $y$ to change, making it look like the relationship is just between $x$ and $y$. There are also highly correlated variables that seemingly have nothing to do with one another. Correlations between such seemingly unrelated variables are called spurious correlations. The following website has some examples of spurious correlations (a slight caution that the author has some gloomy examples): http://www.tylervigen.com/spurious-correlations. Figure 12-7 is one of their examples: If we were to take out each pair of measurements by year from the time-series plot in Figure 12-7, we would get the following data.

Year, Engineering Doctorates, Mozzarella Cheese Consumption
2000, 480, 9.3
2001, 501, 9.7
2002, 540, 9.7
2003, 552, 9.7
2004, 547, 9.9
2005, 622, 10.2
2006, 655, 10.5
2007, 701, 11
2008, 712, 10.6
2009, 708, 10.6

Using Excel to find a scatterplot and compute a correlation coefficient, we get the scatterplot shown in Figure 12-8 and a correlation of $r = 0.9586$. With $r = 0.9586$, there is a strong correlation between the number of engineering doctorate degrees earned and mozzarella cheese consumption over time, but earning your doctorate degree does not cause one to go eat more cheese. Nor does eating more cheese cause people to earn a doctorate degree. Most likely these items are both increasing over time and therefore show a spurious correlation to one another.
When two variables are correlated, it does not imply that one variable causes the other variable to change. “Correlation is causation” is an incorrect assumption that because something correlates, there is a causal relationship. Causality is the area of statistics that is most commonly misused, and misinterpreted, by people. Media, advertising, politicians and lobby groups often leap upon a perceived correlation and use it to “prove” their own agenda. They fail to understand that, just because results show a correlation, there is no proof of an underlying causality. Many people assume that because a poll, or a statistic, contains many numbers, it must be scientific, and therefore correct. The human brain is built to try and subconsciously establish links between many pieces of information at once. The brain often tries to construct patterns from randomness, and may jump to conclusions, and assume that a cause and effect relationship exists. Relationships may be accidental or due to other unmeasured variables. Overcoming this tendency to jump to a cause and effect relationship is part of academic training for students and in most fields, from statistics to the arts. Summary When looking at correlations, start with a scatterplot to see if there is a linear relationship prior to finding a correlation coefficient. If there is a linear relationship in the scatterplot, then we can find the correlation coefficient to tell the strength and direction of the relationship. Clusters of dots forming a linear uphill pattern from left to right will have a positive correlation. The closer the dots in the scatterplot are to a straight line, the closer $r$ will be to $1$. If the cluster of dots in the scatterplots go downhill from left to right in linear pattern, then there is a negative relationship. The closer those dots in the scatterplot are to a straight line going downhill, the closer $r$ will be to $-1$. Use a t-test to see if the correlation is statistically significant. As sample sizes get larger, smaller values of $r$ become statistically significant. Be careful with outliers, which can heavily influence correlations. Most importantly, correlation is not causation. When $x$ and $y$ are significantly correlated, this does not mean that $x$ causes $y$ to change.
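For the hours-studied example, the correlation t-test can also be reproduced in Python with scipy (again, a supplement and not the text's method): scipy.stats.pearsonr returns the correlation and its two-tailed p-value, and the t statistic follows from the formula above.

import numpy as np
from scipy import stats

hours = np.array([20, 16, 20, 18, 17, 16, 15, 17, 15, 16, 15, 17, 16, 17, 14])
grade = np.array([89, 72, 93, 84, 81, 75, 70, 82, 69, 83, 80, 83, 81, 84, 76])

r, p = stats.pearsonr(hours, grade)                  # r = 0.8254, two-tailed p = 0.00015
t = r * np.sqrt((len(hours) - 2) / (1 - r ** 2))     # the t-test statistic, about 5.271
print(round(r, 4), round(t, 3), round(p, 5))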
A linear regression is a straight line that describes how the values of a response variable $y$ change as the predictor variable $x$ changes. The equation of a line, relating $x$ to $y$ uses the slope-intercept form of a line, but with different letters than what you may be used to in a math class. We let $b_{0}$ represent the sample $y$-intercept (the value of $y$ when $x = 0$), $b_{1}$ the sample slope (rise over run), and $\hat{y}$ the predicted value of $y$ for a specific value of $x$. The equation is written as $\hat{y} = b_{0} + b_{1}x$. Some textbooks and the TI calculators use the letter $a$ to represent the $y$-intercept and $b$ to represent the slope, and the equation is written as $\hat{y} = a + bx$. These letters are just symbols representing the placeholders for the numeric values for the $y$-intercept and slope. If we were to fit the best line that was closest to all the points on the scatterplot we would get what we call the “line of best fit,” also known as the “regression equation” or “least squares regression line.” Figure 12-9 is a scatterplot with just five points. Figure 12-10 shows the least-squares regression line of $y$ on $x$, which is the line that minimizes the squared vertical distance from all of the data. If we were to fit the line that best fits through the points, we would get the line pictured below. What we want to look for is the minimum of the squared vertical distance between each point and the regression equation, called a residual. This is where the name of the least squares regression line comes from. Figure 12-11 shows the squared residuals. To find the slope and $y$-intercept for the equation of the least-squares regression line $\hat{y} = b_{0} + b_{1} x$ we use the following formulas: slope $= b_{1} = \frac{SS_{xy}}{SS_{xx}}$, $y$-intercept: $b_{0} = \bar{y} - b_{1} \bar{x}$. To compute the least squares regression line, you will need to first find the slope. Then substitute the slope into the following equation of the $y$-intercept: $b_{0} = \bar{y} - b_{1} \bar{x}$, where $\bar{x}$ = the sample mean of the $x$’s and $\bar{y}$ = the sample mean of the $y$’s. Once we find the equation for the regression line, we can use it to estimate the response variable $y$ for a specific value of the predictor variable $x$. Note: we would only want to use the regression equation for prediction if we reject $H_{0}$ and find that there is a significant correlation between $x$ and $y$. Alternatively, we could start with the regression equation and then test to see if the slope is significantly different from zero. Use the following data to find the line of best fit. Hours Studied for Exam 20 16 20 18 17 16 15 17 15 16 15 17 16 17 14 Grade on Exam 89 72 93 84 81 75 70 82 69 83 80 83 81 84 76 Solution Start with finding the 2-Var Stats and sum of squares as shown in the steps for correlation. $SS_{xx} = (n-1) s_{x}^{2} = (15-1) 1.723783215^{2} = 41.6$ $SS_{yy} = (n-1) s_{y}^{2} = (15-1) 6.717425811^{2} = 631.7333$ $SS_{xy} = \sum (xy) - n \cdot \bar{x} \cdot \bar{y} = 20087 – (15 \cdot 16.6 \cdot 80.133333) = 133.8$ Calculate the slope: $b_{1} = \frac{SS_{xy}}{SS_{xx}} = \frac{133.8}{41.6} = 3.216346$. Calculate the $y$-intercept: $b_{0} = \bar{y} - b_{1} \cdot \bar{x} = 80.133333 - 3.216346 \cdot 16.6 = 26.742$. Put these numbers back into the regression equation and write your answer as: $\hat{y} = 26.742 + 3.216346 x$. Interpreting the $y$-intercept coefficient: When $x = 0$, note that $\hat{y} = 26.742$. 
This means that we would expect a failing exam score of 26.742 for students who had studied zero hours. Interpreting the slope coefficient: For each additional hour studied for the exam, we would expect an increase in the exam grade of 3.2163 points. In general, when interpreting the slope coefficient, for each additional 1 unit increase in $x$, the predicted $\hat{y}$ value will change by $b_{1}$ units.

Adding the Regression Line to the Scatterplot

TI-84: Make a scatterplot using the directions from the previous section. Turn your STAT scatter plot on. Press [Y=] and clear any equations that are in the $y$-editor. Into Y1, enter the least-squares regression equation manually as found above. Or, press the VARS key, go to option 5: Statistics, arrow over to EQ for equation, then choose the first option RegEQ. This will bring the equation over to the Y= menu without rounding error. Press [GRAPH]. You can press [TRACE] and use the arrow keys to scroll left or right. Pressing up or down on the arrow keys will change between tracing the scatterplot and the regression line. You can use the regression line to predict values of the response variable for a given value of the explanatory variable. While tracing the regression line type the value of the explanatory variable and press [ENTER]. For example, for $x = 19$ the value of $\hat{y} = 87.8526$. TI-89: Make a scatterplot and find the regression line using the directions in the previous section. If you press [♦] then [F1] (Y=) you will notice the regression equation has been stored into y1 in the y-editor. Press [F2] Trace and use the left and right arrow keys to trace along the plot. Use the up and down arrow keys to toggle between the regression line and the scatterplot. You can use the regression line to predict values of the response variable for a given value of the explanatory variable. While tracing the regression line type the value of the explanatory variable and press [ENTER]. For example, for $x = 19$ the value of $\hat{y} = 87.8526$.

12.02: Simple Linear Regression

To test whether the slope is significant, we will be doing a two-tailed test with the following hypotheses. The population least squares regression line would be $y = \beta_{0} + \beta_{1} x + \varepsilon$ where $\beta_{0}$ (pronounced “beta-naught”) is the population $y$-intercept, $\beta_{1}$ (pronounced “beta-one”) is the population slope and $\varepsilon$ is called the error term. If the slope were equal to zero (a horizontal line), the regression line would give the same $y$-value for every input of $x$ and would be of no use. If there is a statistically significant linear relationship, then the slope needs to be different from zero. We will only be using the two-tailed test for a population slope, but the same rules for hypothesis testing apply for a one-tailed test. The hypotheses are: $H_{0}: \beta_{1} = 0$ $H_{1}: \beta_{1} \neq 0$ The null hypothesis of a two-tailed test states that there is not a linear relationship between $x$ and $y$. The alternative hypothesis of a two-tailed test states that there is a significant linear relationship between $x$ and $y$. Either a t-test or an F-test may be used to see if the slope is significantly different from zero. The population of the variable $y$ must be normally distributed. F-Test for Regression An F-test can be used instead of a t-test. Both tests will yield the same results, so it is a matter of preference and what technology is available.
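Before turning to those tests, the fitted line and a prediction can be checked with a short numpy sketch (a supplement, assuming numpy is available; the text itself uses the TI calculators and Excel).

import numpy as np

hours = np.array([20, 16, 20, 18, 17, 16, 15, 17, 15, 16, 15, 17, 16, 17, 14])
grade = np.array([89, 72, 93, 84, 81, 75, 70, 82, 69, 83, 80, 83, 81, 84, 76])

b1, b0 = np.polyfit(hours, grade, 1)       # slope about 3.2163, intercept about 26.742
y_hat = b0 + b1 * 19                       # predicted grade for 19 hours of study, about 87.85
print(round(b0, 3), round(b1, 4), round(y_hat, 4))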
Figure 12-12 is a template for a regression ANOVA table, where $n$ is the number of pairs in the sample and $p$ is the number of predictor (independent) variables; for now this is just $p = 1$. Use the F-distribution with degrees of freedom for regression = $df_{R} = p$, and degrees of freedom for error = $df_{E} = n - p - 1$. This F-test is always a right-tailed test since ANOVA tests whether the variation explained by the regression model is larger than the variation in the error. Use an F-test to see if there is a significant relationship between hours studied and grade on the exam. Use $\alpha$ = 0.05. Hours Studied for Exam 20 16 20 18 17 16 15 17 15 16 15 17 16 17 14 Grade on Exam 89 72 93 84 81 75 70 82 69 83 80 83 81 84 76 Solution The hypotheses are: $H_{0}: \beta_{1} = 0$ $H_{1}: \beta_{1} \neq 0$ Compute the sum of squares. $SS_{xx} = 41.6$, $SS_{yy} = 631.7333$, $SS_{xy} = 133.8$, $n = 15$ and $p = 1$ $SSR = \frac{\left(SS_{xy}\right)^{2}}{SS_{xx}} = \frac{(133.8)^{2}}{41.6} = 430.3471154$ $SST = SS_{yy} = 631.7333$ $SSE = SST - SSR = 631.7333 - 430.3471154 = 201.3862$ Compute the degrees of freedom. $df_{R} = p = 1 \quad\quad df_{E} = n - p - 1 = 15 - 1 - 1 = 13 \quad\quad df_{T} = n - 1 = 14$ Compute the mean squares. $MSR = \frac{SSR}{p} = \frac{430.3471154}{1} = 430.3471154 \quad\quad MSE = \frac{SSE}{n-p-1} = \frac{201.3862}{13} = 15.4912$ Compute the test statistic. $F = \frac{MSR}{MSE} = \frac{430.3471154}{15.4912} = 27.7801$ Substitute the numbers into the ANOVA table: This is a right-tailed F-test with $df = 1, 13$ and $\alpha$ = 0.05, which gives a critical value of 4.667. In Excel we can find the critical value by using the function =F.INV.RT(0.05,1,13) = 4.667. Or use the online calculator at https://homepage.divms.uiowa.edu/~mbognar/applets/f.html to visualize the critical value, as shown in Figure 12-13. It is hard to see the shaded tail in the picture above the test statistic since the F-distribution is so close to the $x$-axis after 3, but the right tail is shaded from 4.667 and greater. The test statistic 27.78 is even further out in the tail than the critical value, so we would reject $H_{0}$. At the 5% level of significance, there is a statistically significant relationship between hours studied and grade on a student’s exam. The p-value could also be used to make the decision. The p-value method would use the function =F.DIST.RT(27.78,1,13) = 0.00015 in Excel. The p-value is less than $\alpha$ = 0.05, which also verifies that we reject $H_{0}$. The following is the output from Excel and SPSS. Note that the same ANOVA table information is shown, but the columns are in a different order. T-Test for Regression If the regression equation has a slope of zero, then every $x$ value will give the same $y$ value and the regression equation would be useless for prediction. We should perform a t-test to see if the slope is significantly different from zero before using the regression equation for prediction. The numeric value of t will be the same as in the t-test for correlation. The two test statistic formulas look different but are algebraically equivalent; the hypotheses, however, are written in terms of a different parameter. The formula for the t-test statistic is $t = \frac{b_{1}}{\sqrt{ \left(\frac{MSE}{SS_{xx}}\right) }}$ Use the t-distribution with degrees of freedom equal to $n - p - 1$.
The t-test for slope has the same hypotheses as the F-test: $H_{0}: \beta_{1} = 0$ $H_{1}: \beta_{1} \neq 0$ Use a t-test to see if there is a significant relationship between hours studied and grade on the exam, use $\alpha$ = 0.05. Hours Studied for Exam 20 16 20 18 17 16 15 17 15 16 15 17 16 17 14 Grade on Exam 89 72 93 84 81 75 70 82 69 83 80 83 81 84 76 Solution The hypotheses are: $H_{0}: \beta_{1} = 0$ $H_{1}: \beta_{1} \neq 0$ Find the critical values using the inverse t-distribution with $df_{E} = n - p - 1 = 13$ for a two-tailed test with $\alpha$ = 0.05; this gives critical values of $\pm 2.160$. Draw the sampling distribution and label the critical values, as shown in Figure 12-14. The critical value is the same as we found using the t-test for correlation. Next, find the test statistic: $t = \frac{b_{1}}{\sqrt{ \left(\frac{MSE}{SS_{xx}}\right) }} = \frac{3.216346}{\sqrt{ \left(\frac{15.4912}{41.6}\right) }} = 5.271$. The test statistic is the same value as in the t-test for correlation even though the formulas look different, and it is found in the same place in the technology output as for the correlation test. The test statistic is greater than the critical value of 2.160 and is in the rejection region. The decision is to reject $H_{0}$. Summary: At the 5% significance level, there is enough evidence to support the claim that there is a significant linear relationship (correlation) between the number of hours studied for an exam and exam scores. The p-value method could also be used to reach the same decision. The p-value = 0.00015, the same as in the previous tests. We will use technology for the p-value method. In the SPSS output, the p-value is labeled Sig. The corresponding output from Excel and SPSS is shown below.
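Both tests can also be verified in software. The sketch below is a Python/SciPy illustration added here (it is not part of the original text); it builds the ANOVA quantities, the F statistic, and the equivalent t statistic for the slope from the exam data, and shows that $F = t^{2}$.

```python
import numpy as np
from scipy import stats

hours  = np.array([20, 16, 20, 18, 17, 16, 15, 17, 15, 16, 15, 17, 16, 17, 14])
grades = np.array([89, 72, 93, 84, 81, 75, 70, 82, 69, 83, 80, 83, 81, 84, 76])
n, p = len(hours), 1

SS_xx = np.sum((hours - hours.mean()) ** 2)
SS_yy = np.sum((grades - grades.mean()) ** 2)
SS_xy = np.sum((hours - hours.mean()) * (grades - grades.mean()))

SSR = SS_xy ** 2 / SS_xx            # about 430.347
SST = SS_yy                         # about 631.733
SSE = SST - SSR                     # about 201.386
MSR = SSR / p
MSE = SSE / (n - p - 1)             # about 15.491

F = MSR / MSE                                   # about 27.78
p_value_F = stats.f.sf(F, p, n - p - 1)         # about 0.00015

b1 = SS_xy / SS_xx
t  = b1 / np.sqrt(MSE / SS_xx)                  # about 5.271
p_value_t = 2 * stats.t.sf(abs(t), n - p - 1)   # same p-value as the F-test

print(F, p_value_F, t, p_value_t, t ** 2)       # note that t**2 equals F
```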
When we overlay the regression equation on a scatterplot, most of the time, the points do not lie on the line itself. The vertical distance between the actual value of $y$ and the predicted value $\hat{y}$ is called the residual. The numeric value of the residual is found by subtracting the predicted value of $y$ from the actual value of $y$: $y - \hat{y}$. When we find the line of best fit using least squares regression, this finds the regression equation with the smallest sum of the squared residuals $\sum \left(y - \hat{y}\right)^{2}$. When a residual is positive, the data point is above the regression line; when the residual is negative, the data point is below the regression line. If you were to find the residuals for all the sample points and add them up, you would get zero. The expected value of the residuals will always be zero. The regression equation is found so that there is just as much distance for the residuals above the line as there is below the line. Find the residual for the point $(15, 80)$ for the exam data. Hours Studied for Exam 20 16 20 18 17 16 15 17 15 16 15 17 16 17 14 Grade on Exam 89 72 93 84 81 75 70 82 69 83 80 83 81 84 76 Solution Figure 12-15 is a scatterplot with the regression equation $\hat{y} = 26.742 + 3.216346 x$ from the exam data. The blue diamonds represent the sample data points. The orange squares are the predicted $y$ for each value of $x$. If we connect the orange squares, we get the linear regression equation. The vertical distance between each data point and the regression equation is called the residual. The numeric value can be found by subtracting the corresponding predicted value from the observed $y$: $y - \hat{y}$. We use $e_{i}$ to represent the $i^{th}$ residual, where $e_{i} = y_{i} - \hat{y}_{i}$. The residual for the point $(15, 80)$ is drawn on the scatterplot vertically as a yellow double-sided arrow to visually show the size of the residual. If you were to predict a student’s exam grade when they studied 15 hours, you would get a predicted grade of $\hat{y} = 26.742 + 3.216346 \cdot 15 = 74.9865$. The residual for the point $(15, 80)$ then would be $y - \hat{y} = 80 - 74.9865 = 5.0135$. This is the length of the vertical yellow arrow connecting the point $(15, 80)$ to the point $(15, 74.9865)$. Standard Error of Estimate The standard deviation of the residuals is called the standard error of estimate, or $s$. Some texts will use a subscript, $s_{e}$ or $s_{est}$, to distinguish the different standard deviations from one another. When all of your data points line up in a perfectly straight line, $s = 0$ since none of your points deviate from the regression line. As your data points get more scattered away from the regression line, $s$ gets larger. When you are analyzing a regression model, you want $s$ to be as small as possible. Standard Error of Estimate $s_{est} = s = \sqrt{\frac{\sum \left(y_{i} - \hat{y}_{i}\right)^{2}}{n-2}} = \sqrt{MSE} \nonumber$ The standard error of estimate is the standard deviation of the residuals. The standard error of estimate measures the deviation in the vertical distance from data points to the regression equation. The units of $s$ are the same as the units of $y$. Use the exam data to find the standard error of estimate. Solution To find $\sum \left(y_{i} - \hat{y}_{i}\right)^{2}$ you would need to find the residual for every data point, square the residuals, and sum them up. This is a lot of math. Recall the regression ANOVA table found earlier. The MSE = 15.4912.
The mean square error is the variance of the residuals; if we take the square root of the MSE, we find the standard deviation of the residuals, which is the standard error of estimate. $s = \sqrt{MSE} = \sqrt{15.4912} = 3.9359$ You can also use technology to find $s$.
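A short sketch, added here for illustration (it is not in the original text), finds every residual and the standard error of estimate directly from the definition, using the rounded coefficients found earlier.

```python
import numpy as np

hours  = np.array([20, 16, 20, 18, 17, 16, 15, 17, 15, 16, 15, 17, 16, 17, 14])
grades = np.array([89, 72, 93, 84, 81, 75, 70, 82, 69, 83, 80, 83, 81, 84, 76])
n = len(hours)

b0, b1 = 26.742, 3.216346                 # regression coefficients from earlier (rounded)
resid  = grades - (b0 + b1 * hours)       # residuals e_i = y_i - y-hat_i

print(round(resid.sum(), 3))              # essentially 0 (up to rounding)
print(80 - (b0 + b1 * 15))                # residual for the point (15, 80): about 5.0135

# Standard error of estimate from its definition
s = np.sqrt(np.sum(resid ** 2) / (n - 2))
print(s)                                  # about 3.936
```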
The coefficient of determination $R^{2}$ (or $r^{2}$) is the fraction (or percent) of the variation in the values of $y$ that is explained by the least-squares regression of $y$ on $x$. $R^{2}$ is a measure of how well the values of $y$ are explained by $x$. For example, there is some variability in the dependent variable values, such as grade. Some of the variation in students’ grades is due to hours studied and some is due to other factors. How much of the variation in a student’s grade is due to hours studied? When considering this question, you want to look at how much of the variation in a student’s grade is explained by the number of hours they studied and how much is explained by other variables. Realize that some of the changes in grades have to do with other factors. You can have two students who study the same number of hours, but one student may have a higher grade. Some variability is explained by the model and some variability is not explained. Together, both of these give the total variability. \begin{aligned} \text{(Total Variation)} &= \text{(Explained Variation)} + \text{(Unexplained Variation)} \\ \sum \left(y - \bar{y}\right)^{2} &= \sum \left(\hat{y} - \bar{y}\right)^{2} + \sum \left(y - \hat{y}\right)^{2} \end{aligned} Coefficient of Determination The proportion of the variation that is explained by the model is $R^{2} = \frac{\text{Explained Variation}}{\text{Total Variation}} = \frac{SSR}{SST} \nonumber$ Find and interpret the coefficient of determination for the hours studied and exam grade data. Hours Studied for Exam 20 16 20 18 17 16 15 17 15 16 15 17 16 17 14 Grade on Exam 89 72 93 84 81 75 70 82 69 83 80 83 81 84 76 Solution The coefficient of determination is the correlation coefficient squared. Note: when $r$ is negative, squaring $r$ makes the answer positive. For the hours studied and exam grade data, $r = 0.825358$, so $r^{2} = R^{2} = 0.825358^{2} = 0.6812$. Approximately 68% of the variation in a student’s exam grade is explained by the least squares regression equation and the number of hours a student studied. TI-84: Press the [STAT] key, arrow over to the [TESTS] menu, arrow down to the option [LinRegTTest] and press the [ENTER] key. The default is Xlist:L1, Ylist:L2, Freq:1, $\beta$ and $\rho: \neq 0$. Arrow down to Calculate and press the [ENTER] key. The calculator returns the $y$-intercept = $a = b_{0}$, slope = $b = b_{1}$, the standard error of estimate = $s$, the coefficient of determination = $r^{2} = R^{2}$, and the correlation coefficient = $r$. TI-89: In the Stats/List Editor select F6 for the Tests menu. Use cursor keys to select A:LinRegTTest and press [Enter]. In the “X List” space type in the name of your list with the $x$ variable without spaces: for our example, “list1.” In the “Y List” space type in the name of your list with the $y$ variable without spaces: for our example, “list2.” Under the “Alternate Hyp” menu, select the $\neq$ sign that is the same as the problem’s alternative hypothesis statement, then press the [ENTER] key, arrow down to [Calculate] and press the [ENTER] key. The calculator returns the $y$-intercept of the regression line = $a = b_{0}$, the slope of the regression line = $b = b_{1}$, the correlation = $r$, and the coefficient of determination = $r^{2} = R^{2}$. The coefficient of determination can take on any value between 0 and 1, or 0% to 100%. The closer $R^{2}$ is to 100%, the better the regression equation models the data.
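The two routes to $R^{2}$, squaring the correlation coefficient and taking the ratio of explained to total variation, can be checked with the short sketch below (an added Python illustration, not part of the original text).

```python
import numpy as np

hours  = np.array([20, 16, 20, 18, 17, 16, 15, 17, 15, 16, 15, 17, 16, 17, 14])
grades = np.array([89, 72, 93, 84, 81, 75, 70, 82, 69, 83, 80, 83, 81, 84, 76])

r = np.corrcoef(hours, grades)[0, 1]      # correlation coefficient, about 0.8254
print(r ** 2)                             # about 0.6812

SS_xx = np.sum((hours - hours.mean()) ** 2)
SS_yy = np.sum((grades - grades.mean()) ** 2)
SS_xy = np.sum((hours - hours.mean()) * (grades - grades.mean()))
print((SS_xy ** 2 / SS_xx) / SS_yy)       # SSR / SST, the same value
```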
Unlike $r$, which can only be used for simple linear regression, $R^{2}$ can be used for different types of regression. In more advanced courses, if we were to do non-linear or multiple linear regression, we could compare different models and pick the one that has the highest $R^{2}$. For instance, if we ran a linear regression on data whose scatterplot showed an obvious curved pattern, we would get a regression equation with a zero slope and $R^{2} = 0$. See Figure 12-16. If we were to fit a parabola through the data, we would get a perfect fit and $R^{2} = 1$. See Figure 12-17.
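To see this behavior numerically, the sketch below uses a small made-up data set (it does not come from the textbook figures) in which $y$ depends on $x$ only through $x^{2}$; a straight-line fit explains none of the variation while a quadratic fit is perfect.

```python
import numpy as np

# Hypothetical, perfectly symmetric curved data (illustrative only)
x = np.array([-3, -2, -1, 0, 1, 2, 3], dtype=float)
y = x ** 2

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)           # unexplained variation
    ss_tot = np.sum((y - y.mean()) ** 2)        # total variation
    return 1 - ss_res / ss_tot

line  = np.polyfit(x, y, deg=1)   # straight-line fit; the slope comes out 0 here
curve = np.polyfit(x, y, deg=2)   # quadratic (parabola) fit

print(round(r_squared(y, np.polyval(line, x)), 6))    # 0.0 -> the line explains nothing
print(round(r_squared(y, np.polyval(curve, x)), 6))   # 1.0 -> the parabola fits perfectly
```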
We can now use the least squares regression equation for prediction. Use the regression equation to predict the grade for a student who has studied for 18 hours for their exam using the previous data. Hours Studied for Exam 20 16 20 18 17 16 15 17 15 16 15 17 16 17 14 Grade on Exam 89 72 93 84 81 75 70 82 69 83 80 83 81 84 76 Solution We found the regression equation $\hat{y} = 26.742 + 3.216346 x$. The $x$-variable is the hours studied, so let $x = 18$ hours. $\hat{y}$ is the symbol for the predicted $y$. Substitute $x = 18$ into the regression equation and you get: $\hat{y} = 26.742 + 3.216346 \cdot 18 = 84.636228$. We would estimate the student’s grade to be 84.6 when they studied 18 hours. This is a point estimate for the grade on the exam for a student that studied 18 hours. Prediction Interval We can find a special type of confidence interval to estimate the true value of $y$ called the prediction interval. The prediction interval is the confidence interval for the actual value of $y$: $\hat{y} \pm t_{\alpha / 2} \cdot s \sqrt{\left( 1 + \frac{1}{n} + \frac{(x - \bar{x})^{2}}{SS_{xx}}\right)} \nonumber$ where $\hat{y}$ is the predicted value of $y$ for the given value of $x$. Using the previous data, find and interpret the 95% prediction interval for a student who studies 18 hours. Solution From the question, $x = 18$. From previous examples we found that $\hat{y} = 26.742 + 3.216346 \cdot 18 = 84.636228$ and $s = 3.935892$. Find the critical value from the invT using $df = n - 2 = 13$; we get $t_{\alpha / 2} = 2.160369$. Make sure to go out at least 6 decimal places in between steps. Ideally, never round between steps. Use the 2-Var Stats from your calculator to find the sums and then substitute values back into the equation to get \begin{aligned} & 84.636228 \pm 2.160369 \cdot 3.935892 \sqrt{\left( 1 + \frac{1}{15} + \frac{(18 - 16.6)^{2}}{41.6} \right)} \\ \Rightarrow \quad & 84.636228 \pm 8.9723 \\ \Rightarrow \quad & 75.6639 < y < 93.6085 \end{aligned} We are 95% confident that the predicted exam grade for a student that studies 18 hours is between 75.6639 and 93.6085. A confidence interval can be more accurate (narrower) when you increase the sample size. Note that in the last example, the predicted grade for an individual student could have been anywhere from a C to an A grade. If you wanted to predict $y$ with more accuracy, then you would want to sample more than 15 students to get a smaller margin of error. The confidence interval for a mean will have a smaller margin of error than for an individual’s predicted value. Excel, the TI-83 and 84 do not have built-in prediction intervals. TI-89: Enter the $x$-values in list1 and the $y$-values in list2, select [F7] Intervals, then select option 7:LinRegTInt… Use the Var-Link button to enter in list1 and list2 for the X List and Y List. Select Response in the drop-down menu for Interval. Enter in the $x$-value given in the question. Change the confidence level (C-Level) to match what was in the question, then press [Enter]. Scroll down to Pred Int for the prediction interval. The calculator does not round between steps, so if you rounded $b_{0}$ and $b_{1}$, for instance, when doing hand calculations, your answer may be slightly different from the calculator results. Extrapolation is the use of a regression line for prediction far outside the range of values of the independent variable $x$. As a general rule, one should not use linear regression to estimate values too far from the given data values.
The further away you move from the center of the data set, the more variable results become. For instance, we would not want to estimate a student’s grade for someone that studied way less than 14 hours or more than 20 hours. 12.2.05: Outliers A scatter plot should be checked for outliers. An outlier is a point that seems out of place when compared with the other points. Some of these points can affect the equation of the regression line. Should linear regression be used with this data set? $x$ 1 3 8 2 1 3 2 2 3 1 $y$ 2 3 8 2 3 1 3 1 2 1 Solution A regression analysis for the data set was run on Excel. If we test for a significant correlation: $H_{0}: \rho = 0$ $H_{1}: \rho \neq 0$ The correlation is $r = 0.844$ and the p-value is 0.002, which is less than $\alpha$ = 0.05, so we would reject $H_{0}$ and conclude there is a significant relationship between $x$ and $y$. However, if look at the scatterplot in Figure 12-18, with the regression equation we can clearly see that the point $(8,8)$ is an outlier. The outlier is pulling the slope up towards the point $(8,8)$. If we were to take out the outlier point $(8,8)$ and run the regression analysis again on the modified data set we get the following Excel output. See Figure 12-19: note the correlation is now 0 and the p-value is 1, so there is no relationship at all between $x$ and $y$. This type of outlier is called a leverage point. Leverage points are positioned far away from the main cluster of data points on the $x$-axis. There is another type of outlier called an influential point. Influential points are positioned far away from the main cluster of data points on the $y$-axis. There is an option in most software packages to get the “standardized” residuals. Standardized residuals are z-scores of the residuals. Any standardized residual that is not between $-2$ and $2$ may be an outlier. If it is not between $-3$ and $3$ then the point is an outlier. When this happens, the points are called influential points or influential observations. Use technology to compute the standardized residuals. Should linear regression be used with this data set? $x$ 1 3 2 2 4 5 7 9 6 8 $y$ 1 3 10 2 4 5 7 9 6 8 Solution A regression analysis for the given data set was run on Excel, producing the following results: The point $(2, 10)$ shown in Figure 12-20 is pulling the left side of the line up and away from the points that form a line. This influential point changes the $y$-intercept and slope.
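One simple, approximate way to screen for this kind of outlier is to scale each residual by the standard error of estimate and look for values outside $\pm 2$. The sketch below does this for the influential-point data above; it is an added Python illustration, and note that Excel and SPSS report leverage-adjusted standardized residuals, so their values can differ slightly from this simple scaling.

```python
import numpy as np

# Data from the influential-point example above
x = np.array([1, 3, 2, 2, 4, 5, 7, 9, 6, 8], dtype=float)
y = np.array([1, 3, 10, 2, 4, 5, 7, 9, 6, 8], dtype=float)
n = len(x)

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
resid = y - (b0 + b1 * x)

s = np.sqrt(np.sum(resid ** 2) / (n - 2))   # standard error of estimate
print(np.round(resid / s, 2))               # the point (2, 10) scales to roughly 2.5,
                                            # outside the +/- 2 guideline
```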
A lurking variable is a variable other than the independent or dependent variables that may influence the regression line. For instance, the highly correlated ice cream sales and home burglary rates probably have to do with the season. Hence, linear regression does not imply cause and effect. Two variables are confounded when their effects on the dependent variable cannot be distinguished from each other. For instance, if we are looking at diet predicting weight, a confounding variable would be age. As a person gets older, they can gain more weight with fewer calories compared to when they were younger. Another example would be predicting someone’s midterm score from hours studied for the exam. Some confounding variables would be GPA, IQ score, and teacher’s difficultly level. Assumptions for Linear Regression There are assumptions that need to be met when running simple linear regression. If these assumptions are not met, then one should use more advanced regression techniques. The assumptions for simple linear regression are: • The data need to follow a linear pattern. • The observations of the dependent variable y are independent of one another. • Residuals are approximately normally distributed. • The variance of the residuals is constant. Most software packages will plot the residuals for each $x$ on the $y$-axis against either the $x$-variable or $\hat{y}$ along the $x$-axis. This plot is called a residual plot. Residual plots help determine some of these assumptions. Use technology to compute the residuals and make a residual plot for the hours studied and exam grade data. Hours Studied for Exam 20 16 20 18 17 16 15 17 15 16 15 17 16 17 14 Grade on Exam 89 72 93 84 81 75 70 82 69 83 80 83 81 84 76 Solution Plot the residuals. TI-84: Find the least-squares regression line as described in the previous section. Press [Y=] and clear any equations that are in the $y$-editor. Press [2nd] then [STAT PLOT] then press 1 or hit [ENTER] to select Plot1. Select On and press [ENTER] to activate plot 1. For “Type” select the first graph that looks like a scatterplot and press [ENTER]. For “Xlist” enter whichever list where your explanatory variable data is stored. For our example, enter L1. For “Ylist” press [2nd] [LIST] then scroll down to RESID and press [ENTER]. The calculator automatically computes the residuals and stores them in a list called RESID. Press [ZOOM] then press 9 or scroll down to ZoomStat and press [ENTER]. TI-89: Find the least-squares regression line as described in the previous section. Press [♦] then [F1] (Y=) and clear any equations that are in the $y$-editor. In the Stats/List Editor select F2 for the Plots menu. Use cursor keys to highlight 1:Plot Setup. Make sure that the other graphs are turned off by pressing F4 button to remove the check marks. Under “Plot 2” press F1 for the Define menu. In the “Plot Type” menu select “Scatter.” In the “x” space type in the name of your list with the x variable without space, for our example “list1.” In the “y” space press [2ND] [-] for the VAR-LINK menu. Scroll down the list and find “resid” in the “STATVARS” menu. Press [ENTER] twice and you will be returned to the Plot Setup menu. Press F5 ZoomData to display the graph. Press F3 Trace and use the arrow keys to scroll along the different points. Excel: Run the regression the same as in the last section when testing to see if there is a significant correlation. Type the data into two columns in Excel. Select the Data tab, then Data Analysis, then choose Regression and select OK. 
Be careful here, the second column is the $y$ range, and the first column is the $x$ range. Only check the Labels box if you highlight the labels in the input range. The output range is one cell reference where you want the output to start. Check the residuals, residual plots and normal probability plots, then select OK. Figure 12-21 shows the Excel output. Additional output from Excel gives the residuals, residual plot, and normal probability plot; see below. With this additional output, you can check the assumptions about the residuals. The residual plot is random and the normal probability plot forms an approximately straight line. Putting It All Together High levels of hydrogen sulfide $(\mathrm{H}_{2} \mathrm{S})$ in the ocean can be harmful to animal life. It is expensive to run tests to detect these levels. A scientist would like to see if there is a relationship between sulfate $(\mathrm{SO}_{4})$ and $\mathrm{H}_{2} \mathrm{S}$ levels, since $\mathrm{SO}_{4}$ is much easier and less expensive to test in ocean water. A sample of $\mathrm{SO}_{4}$ and $\mathrm{H}_{2} \mathrm{S}$ levels was recorded together at different depths in the ocean. The sample is reported below in millimolar (mM). If there were a significant relationship, the scientist would like to predict the $\mathrm{H}_{2} \mathrm{S}$ level when the ocean has an $\mathrm{SO}_{4}$ level of 25 mM. Run a complete regression analysis and check the assumptions. If the model is significant, then find the 95% prediction interval to predict the sulfide level in the ocean when the sulfate level is 25 mM. Sulfate 22.5 27.5 24.6 27.3 23.1 24 24.5 28.4 25.1 24.4 Sulfide 0.6 0.3 0.6 0.4 0.7 0.5 0.7 0.2 0.3 0.7 Solution Start with a scatterplot to see if a linear relation exists. The scatterplot in Figure 12-22 shows a negative linear relationship. Test to see if the linear relationship is statistically significant. Use $\alpha$ = 0.05. You could use an F- or a t-test. I would recommend the t-test if you are using a TI calculator and an F-test if you are using a computer program like Excel or SPSS. We will do the F-test for the following example. The hypotheses are: $H_{0}: \beta_{1} = 0$ $H_{1}: \beta_{1} \neq 0$ Compute the sum of squares. $SS_{xx} = (n-1) s_{x}^{2} = (10 - 1)1.959138^{2} = 34.544$ $SS_{yy} = (n-1) s_{y}^{2} = (10 - 1)0.188561^{2} = 0.32$ $SS_{xy} = \sum (xy) - n \cdot \bar{x} \cdot \bar{y} = 123.04 - 10 \cdot 25.14 \cdot 0.5 = -2.66$ Next, compute the test statistic. $SSR = \frac{\left(SS_{xy}\right)^{2}}{SS_{xx}} = \frac{(-2.66)^{2}}{34.544} = 0.2048286 \quad\quad SST = SS_{yy} = 0.32$ $SSE = SST - SSR = 0.32 - 0.2048286 = 0.1151714$ $df_{T} = n - 1 = 9 \quad\quad df_{E} = n - p - 1 = 10 - 1 - 1 = 8$ $MSR = \frac{SSR}{p} = \frac{0.204829}{1} = 0.204829 \quad\quad MSE = \frac{SSE}{n-p-1} = \frac{0.115171}{8} = 0.014396$ $F = \frac{MSR}{MSE} = \frac{0.204829}{0.014396} = 14.228$ Compute the p-value. This is a right-tailed F-test with $df = 1, 8$, which gives a p-value of =F.DIST.RT(14.2277,1,8) = 0.00545. We could also use Excel to generate the p-value. The p-value = 0.00545 < $\alpha$ = 0.05; therefore, reject $H_{0}$. There is a statistically significant linear relationship between hydrogen sulfide and sulfate levels in the ocean. From the linear regression, check the assumptions and make sure there are no outliers. The standardized residuals are between $-2$ and $2$, and the scatterplot does not indicate any outliers.
The Normal Probability Plot in Figure 12-23 forms an approximately straight line. This indicates that the residuals are approximately normally distributed. The residual plot in Figure 12-24 has no unusual pattern. This indicates that a linear model would work well for this data. Now find and use the regression equation to calculate the 95% prediction interval to predict the sulfide level in the ocean when the sulfate level is 25 mM. Find the regression equation. Calculate the slope: $b_{1} = \frac{SS_{xy}}{SS_{xx}} = \frac{-2.66}{34.544} = -0.077$. Then calculate the $y$-intercept: $b_{0} = \bar{y} - b_{1} \cdot \bar{x} = 0.5 - (-0.077) \cdot 25.14 = 2.43586$. Put the numbers back into the regression equation and write your answer as: $\hat{y} = 2.4359 + (-0.077)x$ or as $\hat{y} = 2.4359 - 0.077x$. We can use technology to get the regression equation. Coefficients are found in the first column in the computer output. We would expect variation in our predicted value every time a new sample is used. Find the 95% prediction interval to estimate the sulfide level when the sulfate level is 25 mM. Use the prediction interval equation $\hat{y} \pm t_{\alpha / 2} \cdot s \sqrt{\left(1 + \frac{1}{n} + \frac{\left(x - \bar{x}\right)^{2}}{SS_{xx}}\right)}$. Substitute $x = 25$ into the equation to get $\hat{y} = 2.43586 - 0.0770032 \cdot 25 = 0.51078$. To find $t_{\alpha/2}$ use your calculator's invT with $df_{E} = n - 2 = 8$ and left-tail area $\frac{\alpha}{2} = \frac{0.05}{2} = 0.025$, gives $t_{0.025} = \pm 2.306004$. The standard error of estimate $s = \sqrt{MSE} = \sqrt{0.014396} = 0.11998$, which can also be found using technology. From the earlier descriptive statistics, we have $n = 10$, $\bar{x}= 25.14$, $SS_{xx} = 34.544$. Substitute each of these values into the prediction interval to get the following: $0.51078 \pm 2.306004 \cdot 0.119985 \sqrt{\left(1 + \frac{1}{10} + \frac{(25 - 25.14)^{2}}{34.544}\right)}$ $0.51078 \pm 0.290265$ $0.2205 < y < 0.8010$ We can be 95% confident that the true sulfide level in the ocean will be between 0.2205 and 0.801 mM when the sulfate level is 25 mM. Summary A simple linear regression should only be performed if you observe visually that there is a linear pattern in the scatterplot and that there is a statistically significant correlation between the independent and dependent variables. Use technology to find the numeric values for the $y$-intercept = $a = b_{0}$ and slope = $b = b_{1}$, then make sure to use the correct notation when substituting your numbers back in the regression equation $\hat{y} = b_{0} + b_{1} x$. Another measure of how well the line fits the data is called the coefficient of determination $R^{2}$. When $R^{2}$ is close to 1 (or 100%), then the line fits the data very closely. The advantage over using $R^{2}$ over $r$ is that we can use $R^{2}$ for nonlinear regression, whereas $r$ is only for linear regression. One should always check the assumptions for regression before using the regression equation for prediction. Make sure that the residual plots have a completely random horizontal band around zero. There should be no patterns in the residual plots such as a sideways V that may indicate a non-constant variance. A pattern like a slanted line, a U, or an upside-down U shape would suggest a non-linear model. Check that the residuals are normally distributed; this is not the same as the population being normally distributed. Check to make sure that there are no outliers. Be careful with lurking and confounding variables.
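The entire analysis above can be reproduced in a few lines of code. The following Python/SciPy sketch, added here for illustration (it is not part of the original text), fits the line, reports the model p-value, and rebuilds the 95% prediction interval at a sulfate level of 25 mM.

```python
import numpy as np
from scipy import stats

# Sulfate (x) and sulfide (y) levels in mM from the example above
so4 = np.array([22.5, 27.5, 24.6, 27.3, 23.1, 24, 24.5, 28.4, 25.1, 24.4])
h2s = np.array([0.6, 0.3, 0.6, 0.4, 0.7, 0.5, 0.7, 0.2, 0.3, 0.7])
n = len(so4)

fit = stats.linregress(so4, h2s)
print(fit.intercept, fit.slope)      # about 2.4359 and -0.0770
print(fit.rvalue ** 2, fit.pvalue)   # R^2 and the two-tailed p-value (about 0.0055)

# Standard error of estimate and 95% prediction interval at x = 25 mM
resid  = h2s - (fit.intercept + fit.slope * so4)
s      = np.sqrt(np.sum(resid ** 2) / (n - 2))               # about 0.12
x0     = 25
y_hat0 = fit.intercept + fit.slope * x0                      # about 0.511
SS_xx  = np.sum((so4 - so4.mean()) ** 2)
t_crit = stats.t.ppf(0.975, n - 2)                           # about 2.306
margin = t_crit * s * np.sqrt(1 + 1/n + (x0 - so4.mean()) ** 2 / SS_xx)
print(y_hat0 - margin, y_hat0 + margin)                      # about 0.22 to 0.80
```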
A multiple linear regression line describes how two or more predictor variables affect the response variable $y$. An equation of a line relating $p$ independent variables to $y$ is of the form for the population as: $y = \beta_{0} + \beta_{1} x_{1} + \beta_{2} x_{2} + \cdots + \beta_{p} x_{p} + \varepsilon$, where $\beta_{1}, \beta_{2}, \ldots, \beta_{p}$ are the slopes, $\beta_{0}$ is the $y$-intercept and $\varepsilon$ is called the error term. We use sample data to estimate this equation using the predicted value of $y$ as $\hat{y}$ with the regression equation (also called the line of best fit or least squares regression line) as: $y = b_{0} + b_{1} x_{1} + b_{2} x_{2} + \cdots + b_{p} x_{p} \nonumber$ where $b_{1}, b_{2}, \ldots, b_{p}$ are the slopes, and $b_{0}$ is the $y$-intercept For example, if we had two independent variables, we would have a 3-dimensional space as in Figure 12-25 where the red dots represent the sample data points and the equation would be a plane in the space represented by $y = b_{0} + b_{1} x_{1} + b_{2} x_{2}$. The calculations use matrix algebra, which is not a prerequisite for this course. We will instead rely on a computer to calculate the multiple regression model. If all the population slopes were equal to zero, the model $y = \beta_{0} + \beta_{1} x_{1} + \beta_{2} x_{2} + \cdots + \beta_{p} x_{p} + \varepsilon$ would not be significant and should not be used for prediction. If one or more of the population slopes are not equal to zero then the model will be significant, meaning there is a significant relationship between the independent variables and the dependent variable and we may want to use this model for prediction. There are other statistics to look at to decide if this would be the best model to use. Those methods are discussed in more advanced courses. The hypotheses will always have an equal sign in the null hypotheses. The hypotheses are: $H_{0}: \beta_{1} = \beta_{2} = \cdots = \beta_{p} = 0$ $H_{1}:$ At least one slope is not zero. Note that the alternative hypothesis is not written as $H_{1}: \beta_{1} \neq \beta_{2} \neq \cdots \neq \beta_{p} \neq 0$. This is because we just want one or more of the independent variables to be significantly different from zero, not necessarily all the slopes unequal to zero. Use the F-distribution with degrees of freedom for regression = $df_{R} = p$, where $p$ = the number of independent variables (predictors), and degrees of freedom for error = $df_{E} = n - p - 1$, where $n$ is the number of pairs. This is always a right-tailed ANOVA test, since we are testing if the variation in the regression model is larger than the variation in the error. The test statistic and p-value are the last two values on the right in the ANOVA table. The p-value rule is easiest to use since the p-value is part of the outcome, but a critical value can be found using the invF program on your calculator or in Excel using =F.INV.RT($\alpha, df_{R}, df_{E}$) We can also single out one independent variable at a time and use a t-test to see if the variable is significant by itself in predicting $y$. This would have hypotheses: $H_{0}: \beta_{i} = 0$ $H_{1}: \beta_{i} \neq 0$ where $i$ is a placeholder for whichever independent variable is being tested. This t-test is found in the same row as the coefficient that you are testing. Assumptions for Multiple Linear Regression When doing multiple regression, the following assumptions need to be met: 1. The residuals of the model are approximately normally distributed. 2. 
The residuals of the model are independent (not autocorrelated) and have a constant variance (homoscedasticity). 3. There is a linear relationship between the dependent variable and each independent variable. 4. Independent variables are uncorrelated with each other (no multicollinearity). The following is a schematic for the regression output for Microsoft Excel. Other software usually has a similar output but may have numbers in slightly different places. The blue spaces have the descriptions of the corresponding numbers. The coefficients column gives the numeric values to find the regression equation $y = b_{0} + b_{1} x_{1} + b_{2} x_{2} + \cdots + b_{p} x_{p}$. The p-values for $b_{i}$ should be investigated to see if the variable is statistically significant. One should also be careful that the independent variables are not significantly correlated amongst themselves. Correlated independent variables may give unexpected outcomes in the overall regression model and actually flip the sign on a coefficient. A sample of 30 homes that were recently on the market was selected. The listing price in $1,000’s of the home, the livable square feet of the home, the lot size in 1,000’s of square feet and the number of bathrooms in the home were recorded. A multiple linear regression was done in Excel with the following output. Test to see if there is a significant relationship between the listing price of a home and the livable square feet, lot size, and number of bathrooms. If there is a relationship, then use the regression model to predict the listing price for a home that has 2,350 square feet, 3 bathrooms and has a 5,000 square foot lot. Use $\alpha$ = 0.05. Solution First, we need to test to see if the overall model is significant. The hypotheses are: $H_{0}: \beta_{1} = \beta_{2} = \beta_{3} = 0$ $H_{1}:$ At least one slope is not zero. The test statistic is $F = 187.9217$ and the p-value = $9.74E-18 \approx 0$. We reject $H_{0}$, since the p-value is less than $\alpha$ = 0.05. There is enough evidence to support the claim that there is a significant relationship between the livable square feet, lot size, and number of bathrooms of a home and its listing price. Since we reject $H_{0}$, we can use the regression model for prediction. The question asked to predict the listing price for a home that has 2,350 square feet, 3 bathrooms and has a 5,000 square foot lot. This gives us $x_{1} = 2350$, $x_{2} = 5$ (5,000 square feet), and $x_{3} = 3$. The coefficients column has the values for the $y$-intercept and slopes, which give the regression equation: $\hat{y} = -28.8477 + 0.170908 \cdot x_{1} + 6.7705 \cdot x_{2} + 15.5347 \cdot x_{3}$. Substitute the three given x values into the equation in the correct order and you get $\hat{y} = -28.8477 + 0.170908 \cdot 2350 + 6.7705 \cdot 5 + 15.5347 \cdot 3 = 453.2787$. This then gives a predicted listing price of \$453,278. Note that our sample size is very small and we really need to check assumptions in order to use this predicted value with any reliability. Is this the best model to use? Note that not all the p-values for each of the individual slope coefficients are significant. The number of bathrooms has a t-test statistic = 1.687038 and p-value = 0.10356, which is not statistically significant at the 5% level of significance. We may want to rerun the regression model without the number of bathrooms variable and see if we get a higher $R^{2}$ and a lower standard error of estimate.
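As a quick check of the arithmetic, the sketch below plugs the rounded coefficients printed above into the regression equation and also recomputes the adjusted $R^{2}$ that appears in the output (and is discussed in the next example). It is an added Python illustration; the small gap between 453.24 and the text’s 453.2787 most likely reflects rounding of the printed coefficients.

```python
# Coefficients as printed in the Excel output above (rounded)
b0, b_sqft, b_lot, b_bath = -28.8477, 0.170908, 6.7705, 15.5347

# Home with 2,350 livable square feet, a 5,000 sq ft lot (5 in 1,000s), 3 bathrooms
sqft, lot, baths = 2350, 5, 3
price = b0 + b_sqft * sqft + b_lot * lot + b_bath * baths
print(price)    # about 453.24 (in $1,000s); unrounded coefficients give 453.2787

# Adjusted coefficient of determination from R^2 = 0.955915 with n = 30, p = 3
R2, n, p = 0.955915, 30, 3
print(1 - (1 - R2) * (n - 1) / (n - p - 1))   # about 0.950828
```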
Ideally, we would try all the different combinations of independent variables and see which combination gives the best model. This is a lot of work to do if you have many independent variables. Most statistical software packages have built-in functions that find the best fit. Adjusted Coefficient of Determination When we add more predictor variables into the model, this inflates the coefficient of determination, $R^{2}$. In multiple regression, we adjust for this inflation using the following formula for the adjusted coefficient of determination. Adjusted Coefficient of Determination $R_{adj}^{2} = 1 - \left( \frac{\left(1 - R^{2}\right) (n-1)}{(n - p - 1)} \right) \nonumber$ Use the previous example to verify the value of the adjusted coefficient of determination starting with the regular coefficient of determination $R^{2} = 0.955915$. Solution First identify in the Excel output $R^{2} = 0.955915$, $n - 1 = df_{T} = 29$, and $n - p - 1 = df_{E} = 26$. Substitute these values in and we get $R_{adj}^{2} = 1 - \left(\frac{(1-0.955915)(29)}{(26)}\right) = 0.950828$. This is the same value as the adjusted $R^{2}$ reported in the Excel output. The Excel output has both the adjusted coefficient of determination and the regular coefficient of determination. However, you may need the equation for the adjusted coefficient of determination depending on what information is given in a problem. There are more types of regression models and more that should be done for a complete regression analysis. Ideally, you would find several models and pick the one with no outliers, the smallest standard error of estimate, a good residual plot, and the highest adjusted $R^{2}$, and check the assumptions behind each model before using it for prediction. More advanced techniques are discussed in a regression course. “Well, I was in fact, I was moving backwards in time. Hmmm. Well, I think we've sorted all that out now. If you'd like to know, I can tell you that in your universe you move freely in three dimensions that you call space. You move in a straight line in a fourth, which you call time, and stay rooted to one place in a fifth, which is the first fundamental of probability. After that it gets a bit complicated, and there's all sorts of stuff going on in dimensions 13 to 22 that you really wouldn't want to know about.
All you really need to know for the moment is that the universe is a lot more complicated than you might think…” (Adams, 2002) 12.04: Chapter 12 Formulas $SS_{xx} = (n-1) s_{x}^{2}$ $SS_{yy} = (n-1) s_{y}^{2}$ $SS_{xy} = \sum (xy) - n \cdot \bar{x} \cdot \bar{y}$ Correlation Coefficient $r = \frac{SS_{xy}}{\sqrt{\left(SS_{xx} \cdot SS_{yy}\right)}}$ Correlation t-test $H_{0}: \rho = 0$ $H_{1}: \rho \neq 0$ $t = r \sqrt{\left( \frac{n-2}{1-r^{2}} \right)}$ $df = n-2$ Regression Equation (Line of Best Fit) $\hat{y} = b_{0} + b_{1}x$ Slope $b_{1} = \frac{SS_{xy}}{SS_{xx}}$ y-Intercept $b_{0} = \bar{y} - b_{1} \bar{x}$ Slope t-test $H_{0}: \beta_{1} = 0$ $H_{1}: \beta_{1} \neq 0$ $t = \frac{b_{1}}{\sqrt{ \left( \frac{MSE}{SS_{xx}} \right)}}$ $df = n - p - 1 = n - 2$ Slope/Model F-test $H_{0}: \beta_{1} = 0$ $H_{1}: \beta_{1} \neq 0$ $F = \frac{MSR}{MSE}$ $df_{R} = p$, $df_{E} = n - p - 1$ Standard Error of Estimate $s_{est} = \sqrt{ \frac{\sum \left(y_{i} - \hat{y}_{i}\right)^{2}}{n-2}} = \sqrt{MSE}$ Residual $e_{i} = y_{i} - \hat{y}_{i}$ Prediction Interval $\hat{y} \pm t_{\alpha / 2} \cdot s_{est} \sqrt{\left( 1 + \frac{1}{n} + \frac{\left(x - \bar{x}\right)^{2}}{SS_{xx}} \right)}$ Coefficient of Determination $R^{2} = (r)^{2} = \frac{SSR}{SST}$ Multiple Linear Regression Equation $\hat{y} = b_{0} + b_{1} x_{1} + b_{2} x_{2} + \cdots + b_{p} x_{p}$ Model F-Test for Multiple Regression $H_{0}: \beta_{1} = \beta_{2} = \cdots = \beta_{p} = 0$ $H_{1}:$ At least one slope is not zero. Adjusted Coefficient of Determination $R_{adj}^{2} = 1 - \left( \frac{\left(1 - R^{2}\right) (n-1)}{(n - p - 1)} \right)$
Chapter 12 Exercises 1. The correlation coefficient, $r$, is a number between _______________. a) -1 and 1 b) -10 and 10 c) 0 and 10 d) 0 and $\infty$ e) 0 and 1 f) $-\infty$ and $\infty$ 2. To test the significance of the correlation coefficient, we use the t-distribution with how many degrees of freedom? a) $n - 1$ b) $n$ c) $n + 1$ d) $n - 2$ e) $n_{1} + n_{2} - 2$ 3. What are the hypotheses for testing to see if a correlation is statistically significant? a) $H_{0}: r = 0 \quad \ \ \ H_{1}: r \neq 0$ b) $H_{0}: \rho = 0 \quad \ \ \ H_{1}: \rho \neq 0$ c) $H_{0}: \rho = \pm 1 \quad H_{1}: \rho \neq \pm 1$ d) $H_{0}: r = \pm 1 \quad H_{1}: r \neq \pm 1$ e) $H_{0}: \rho = 0 \quad \ \ \ H_{1}: \rho = 1$ 4. The coefficient of determination is a number between _______________. a) -1 and 1 b) -10 and 10 c) 0 and 10 d) 0 and $\infty$ e) 0 and 1 f) $-\infty$ and $\infty$ 5. Which of the following is not a valid linear regression equation? a) $\hat{y} = -5 + \frac{2}{9} x$ b) $\hat{y} = 3x + 2$ c) $\hat{y} = \frac{2}{9} - 5x$ d) $\hat{y} = 5 + 0.4 x$ 6. Body frame size is determined by a person's wrist circumference in relation to height. A researcher measures the wrist circumference and height of a random sample of individuals. The data is displayed below. Use $\alpha$ = 0.05. a) What is the value of the test statistic to see if the correlation is statistically significant? i. 6.0205 ii. 1.16E-06 iii. 3.55E-08 iv. 5.2538 v. 7.2673 vi. 0.7499 vii. 0.7938 b) What is the correct p-value and conclusion for testing if there is a significant correlation? i. 1.16E-06; there is a significant correlation. ii. 3.55E-08; there is a significant correlation. iii. 1.16E-06; There is not a significant correlation. iv. 3.55E-08, There is not a significant correlation. v. 0.7938, There is a significant correlation. vi. 0.7938, There is not a significant correlation. 7. Bone mineral density and cola consumption have been recorded for a sample of patients. Let $x$ represent the number of colas consumed per week and $y$ the bone mineral density in grams per cubic centimeter. Assume the data is normally distributed. Calculate the correlation coefficient. $x$ 1 2 3 4 5 6 7 8 9 10 11 $y$ 0.883 0.8734 0.8898 0.8852 0.8816 0.863 0.8634 0.8648 0.8552 0.8546 0.862 8. A teacher believes that the third homework assignment is a key predictor of how well students will do on the midterm. Let $x$ represent the third homework score and y the midterm exam score. A random sample of last term’s students were selected and their grades are shown below. Assume scores are normally distributed. Use $\alpha$ = 0.05. HW3 Midterm HW3 Midterm HW3 Midterm 13.1 59   6.4 43   20 86 21.9 87   20.2 79   15.4 73 8.8 53   21.8 84   25 93 24.3 95   23.1 92   9.7 52 5.4 39   22 87   15.1 70 13.2 66   11.4 54   15 65 20.9 89   14.9 71   16.8 77 18.5 78   18.4 76   20.1 78 a) State the hypotheses to test for a significant correlation. b) Compute the correlation coefficient. c) Compute the p-value to see if there is a significant correlation. d) State the correct decision. e) Is there a significant correlation? f) Compute the coefficient of determination. g) Write a sentence interpreting $R^{2}$. h) Does doing poorly on homework 3 cause a student to do poorly on the midterm exam? Explain. i) Find the standard error of estimate. 9. The sum of the residuals should be ________. a) a b) b c) 0 d) 1 e) $r$ 10. An object is thrown from the top of a building. The following data measure the height of the object from the ground for a five-second period. 
Calculate the correlation coefficient. Seconds 0.5 1 1.5 2 2.5 3 3.5 4 4.5 5 Height 112.5 110.875 106.8 100.275 91.3 79.875 70.083 59.83 30.65 0 11. A teacher believes that the third homework assignment is a key predictor of how well students will do on the midterm. Let $x$ represent the third homework score and $y$ the midterm exam score. A random sample of last term’s students were selected and their grades are shown below. Assume scores are normally distributed. Use $\alpha$ = 0.05. HW3 Midterm HW3 Midterm HW3 Midterm 13.1 59   6.4 43   20 86 21.9 87   20.2 79   15.4 73 8.8 53   21.8 84   25 93 24.3 95   23.1 92   9.7 52 5.4 39   22 87   15.1 70 13.2 66   11.4 54   15 65 20.9 89   14.9 71   16.8 77 18.5 78   18.4 76   20.1 78 a) Compute the regression equation. b) Compute the predicted midterm score when the homework 3 score is 15. c) Compute the residual for the point $(15, 65)$. d) Find the 95% prediction interval for the midterm score when the homework 3 score is 15. 12. Body frame size is determined by a person's wrist circumference in relation to height. A researcher measures the wrist circumference and height of a random sample of individuals. The data is displayed below. a) Which is the correct regression equation? i. $\hat{y} = 31.6304 + 5.4496$ ii. $\hat{y} = 31.6304 + 5.4496 x$ iii. $\hat{y} = 5.4496 + 31.6304 x$ iv. $\hat{y} = 31.6304 + 5.2538 x$ v. $y = 31.6304 + 5.4496 x$ b) What is the predicted height (in inches) for a person with a wrist circumference of 7 inches? c) Which number is the standard error of estimate? d) Which number is the coefficient of determination? e) Compute the correlation coefficient. f) What is the correct test statistic for testing if the slope is significant $H_{1}: \beta_{1} \neq 0$? g) What is the correct p-value for testing if the slope is significant $H_{1}: \beta_{1} \neq 0$? h) At the 5% level of significance, is there a significant relationship between wrist circumference and height? 13. Bone mineral density and cola consumption have been recorded for a sample of patients. Let $x$ represent the number of colas consumed per week and $y$ the bone mineral density in grams per cubic centimeter. Assume the data is normally distributed. Calculate the coefficient of determination. $x$ 1 2 3 4 5 6 7 8 9 10 11 $y$ 0.883 0.8734 0.8898 0.8852 0.8816 0.863 0.8634 0.8648 0.8552 0.8546 0.862 14. Bone mineral density and cola consumption have been recorded for a sample of patients. Let $x$ represent the number of colas consumed per week and $y$ the bone mineral density in grams per cubic centimeter. Assume the data is normally distributed. A regression equation for the following data is $\hat{y} = 0.8893 - 0.0031x$. Which is the best interpretation of the slope coefficient? $x$ 1 2 3 4 5 6 7 8 9 10 11 $y$ 0.883 0.8734 0.8898 0.8852 0.8816 0.863 0.8634 0.8648 0.8552 0.8546 0.862 a) For every additional average weekly soda consumption, a person’s bone density increases by 0.0031 grams per cubic centimeter. b) For every additional average weekly soda consumption, a person’s bone density decreases by 0.0031 grams per cubic centimeter. c) For an increase of 0.8893 in the average weekly soda consumption, a person’s bone density decreases by 0.0031 grams per cubic centimeter. d) For every additional average weekly soda consumption, a person’s bone density decreases by 0.8893 grams per cubic centimeter. 15. Which residual plot has the best linear regression model? a) a b) b c) c d) d e) e f) f 16. An object is thrown from the top of a building. 
The following data measure the height of the object from the ground for a five-second period. Seconds 0.5 1 1.5 2 2.5 3 3.5 4 4.5 5 Height 112.5 110.875 106.8 100.275 91.3 79.875 70.083 59.83 30.65 0 The following four plots were part of the regression analysis. There is a statistically significant correlation between time and height, $r = -0.942$, p-value = 0.0000454. Should linear regression be used for this data? Why or why not? Choose the correct answer. a) Yes, the p-value indicates that there is a significant correlation so we can use linear regression. b) Yes, the normal probability plot has a nice curve to it. c) Yes, there is a nice straight line in the line fit plot. d) No, there is a curve in the residual plot, normal plot and the scatterplot. 17. The following data represent the leaching rates (percent of lead extracted vs. time in minutes) for lead in solutions of magnesium chloride $(\mathrm{MgCl}_{2})$. Use $\alpha = 0.05$. Time $(x)$ 4 8 16 30 60 120 Percent Extracted $(y)$ 1.2 1.6 2.3 2.8 3.6 4.4 a) State the hypotheses to test for a significant correlation. b) Compute the correlation coefficient. c) Compute the p-value to see if there is a significant correlation. d) State the correct decision. e) Is there a significant correlation? f) Compute the coefficient of determination. g) Compute the regression equation. h) Compute the 95% prediction interval for 100 minutes. i) Write a sentence interpreting this interval using units and context. 18. Bone mineral density and cola consumption have been recorded for a sample of patients. Let x represent the number of colas consumed per week and y the bone mineral density in grams per cubic centimeter. Assume the data is normally distributed. What is the residual for the observed point $(7, 0.8634)$. $x$ 1 2 3 4 5 6 7 8 9 10 11 $y$ 0.883 0.8734 0.8898 0.8852 0.8816 0.863 0.8634 0.8648 0.8552 0.8546 0.862 19. A study was conducted to determine if there was a linear relationship between a person's age and their peak heart rate. Use $\alpha = 0.05$. Age $(x)$ 16 26 32 37 42 53 48 21 Peak Heart Rate $(y)$ 220 194 193 178 172 160 174 214 a) What is the estimated regression equation that relates number of hours worked and test scores for high school students. b) Interpret the slope coefficient for this problem. c) Compute and interpret the coefficient of determination. d) Compute the coefficient of nondetermination. e) Compute the standard error of estimate. f) Compute the correlation coefficient. g) Compute the 95% Prediction Interval for peak heart rate for someone who is 25 years old. 20. The following data represent the weight of a person riding a bike and the rolling distance achieved after going down a hill without pedaling. Weight (lbs) 59 84 97 56 103 87 88 92 53 66 71 100 Rolling distance (m) 26 43 48 20 59 44 48 46 28 32 39 49 a) Can it be concluded at a 0.05 level of significance that there is a linear correlation between the two variables? b) Using the regression line for this problem, find the predicted bike rolling distance for a person that weighs 110 lbs. c) Find the 99% prediction interval for bike rolling distance for a person that weighs 110 lbs. 21. Body frame size is determined by a person's wrist circumference in relation to height. A researcher measures the wrist circumference and height of a random sample of individuals. The Excel output and scatterplot are displayed below. Find the regression equation and predict the height (in inches) for a person with a wrist circumference of 7 inches. 
Then, compute the residual for the point $(7, 75)$. 22. It has long been thought that the length of one’s femur is positively correlated to the length of one’s tibia. The following are data for a classroom of students who measured each (approximately) in inches. A significant linear correlation was found between the two variables. Find the 90% prediction interval for the length of someone’s tibia when it is known that their femur is 23 inches long. Femur Length 18.7 20.5 16.2 15.0 19.0 21.3 21.0 14.3 15.8 18.8 18.7 Tibia Length 14.2 15.9 13.1 12.4 16.2 15.8 16.2 12.1 13.0 14.3 13.8 23. The data below represent the driving speed (mph) of a vehicle and the corresponding gas mileage (mpg) for several recorded instances. Driving Speed Gas Mileage Driving Speed Gas Mileage 57 21.8   62 21.5 66 20.9   66 20.5 42 25.0   67 23.0 34 26.2   52 19.4 44 24.3   49 25.3 44 26.3   48 24.3 25 26.1   41 28.4 20 27.2   38 29.6 24 23.5   26 32.5 42 22.6   24 30.8 52 19.4   21 28.8 54 23.9   19 33.5 60 24.8   24 25.1 a) Do a hypothesis test to see if there is a significant correlation. Use $\alpha = 0.10$. b) Compute the standard error of estimate. c) Compute the regression equation and use it to find the predicted gas mileage when a vehicle is driving at 77 mph. d) Compute the 90% prediction interval for gas mileage when a vehicle is driving at 77 mph. 24. The following data represent the age of a car and the average monthly cost for repairs. A significant linear correlation is found between the two variables. Use the data to find a 95% prediction interval for the monthly cost of repairs for a vehicle that is 15 years old. Age of Car (yrs) 1 2 3 4 5 6 7 8 9 10 Monthly Cost (\$) 25 34 42 45 55 71 82 88 87 90 25. In a sample of 20 football players for a college team, their weight and 40-yard-dash time in minutes were recorded. Weight (lbs) 40-Yard Dash (min) Weight (lbs) 40-Yard Dash (min) 285 5.95   195 4.85 185 4.99   254 5.12 165 4.92   140 4.87 188 4.77   212 5.05 160 4.52   158 4.75 156 4.67   188 4.87 256 5.22   134 4.53 169 4.95   205 4.92 210 5.06   178 4.88 165 4.83   159 4.79 a) Do a hypothesis test to see if there is a significant correlation. Use $\alpha = 0.01$. b) Compute the standard error of estimate. c) Compute the regression equation and use it to find the predicted 40-yard-dash time for a football player that is 200 lbs. d) Compute the 99% prediction interval for a football player that is 200 lbs. e) Write a sentence interpreting the prediction interval. 26. The following data represent the enrollment at a small college during its first ten years of existence. A significant linear relationship is found between the two variables. Find a 90% prediction interval for the enrollment after the college has been open for 14 years. Years 1 2 3 4 5 6 7 8 9 10 Enrollment 856 842 923 956 940 981 1025 996 1057 1088 27. A new fad diet called Trim-to-the-MAX is running some tests that they can use in advertisements. They sample 25 of their users and record the number of days each has been on the diet along with how much weight they have lost in pounds. The data is below. A significant linear correlation was found between the two variables. Find the 95% prediction interval for the weight lost when a person has been on the diet for 60 days. Days on Diet 7 12 16 19 25 34 39 43 44 49 Weight Loss 5 7 12 15 20 25 24 29 33 35 28. An elementary school uses the same system to test math skills at their school throughout the course of the 5 grades at their school. 
The age and score (out of 100) of several students is displayed below. A significant linear relationship is found between the student’s age and their math score. Find a 90% prediction interval for the score a student would earn given that they are 5 years old. Student Age 6 6 7 8 8 9 10 11 11 Math Score 54 42 50 61 67 65 71 72 79 29. The intensity (in candelas) of a 100-watt light bulb was measured by a sensing device at various distances (in meters) from the light source. A linear regression was run and the following residual plot was found. a) Is linear regression a good model to use? b) Write a sentence explaining your answer. 30. The table below shows the percentage of adults in the United States who were married before age 24 for the years shown. A significant linear relationship was found between the two variables. Year 1960 1965 1970 1975 1980 1985 1990 1995 2000 2005 % Married Before 24 Years Old 52.1 51.3 45.9 46.3 40.8 38.1 34.0 32.6 28.1 25.5 a) Compute the regression equation. b) Predict the percentage of adults who married before age 24 in the United States in 2015. c) Compute the 95% prediction interval for the percentage of adults who married before age 24 in the United States in 2015. 31. A nutritionist feels that what mothers eat during the months they are nursing their babies is important for healthy weight gain of their babies. She samples several of her clients and records their average daily caloric intake for the first three months of their babies’ lives and also records the amount of weight the babies gained in those three months. The data are below. Daily Calories 1523 1649 1677 1780 1852 2065 2096 2145 2378 Baby's Weight Gain (lbs) 4.62 4.77 4.62 5.12 5.81 5.34 5.89 5.96 6.05 a) Compute the regression equation. b) Test to see if the slope is significantly different from zero, using $\alpha = 0.05$. c) Predict the weight gain of a baby whose mother gets 2,500 calories per day. d) Compute the 95% prediction interval for the weight gain of a baby whose mother gets 2,500 calories per day. 32. The data below show the predicted average high temperature $({}^{\circ} \mathrm{F})$ per month by the Farmer’s Almanac in Portland, Oregon alongside the actual high temperature per month that occurred. Farmer's Almanac 45 50 57 62 69 72 81 90 78 64 51 48 Actual High 46 52 60 61 72 78 82 95 85 68 52 49 a) Compute the regression equation. b) Test to see if the slope is significantly different from zero, using $\alpha = 0.01$. c) Predict the high temperature in the coming year, given that the Farmer’s Almanac is predicting the high to be $58^{\circ} \mathrm{F}$. d) Compute the 99% prediction interval for the actual high temperature in the coming year, given that the Farmer’s Almanac is predicting the high to be $58^{\circ} \mathrm{F}$. 33. In a multiple linear regression problem, p represents: a) The number of dependent variables in the problem. b) The probability of success. c) The number of independent variables in the problem. d) The probability of failure. e) The population proportion. 34. The manager of a warehouse found a significant relationship exists between the number of weekly hours clocked $(x_{1}$), the age of the employee $(x_{2})$, and the productivity of the employee $(y)$ (measured in weekly orders assembled). A multiple regression was run and the following line of best fit was found: $\hat{y} = 66.238 + 2.7048 x_{1} - 0.7275 x_{2}$. Approximate the productivity level of an employee who clocked 61 hours in a given week and is 59 years old. 35. 
A multiple regression test concludes that there is a linear relationship and finds the following line of best fit: $\hat{y} = -53.247 + 12.594 x_{1} - 0.648 x_{2} + 4.677 x_{3}$. Use the line of best fit to approximate $y$ when $x_{1} = 5$, $x_{2} = 12$, $x_{3} = 2$. 36. A career counselor feels that the strongest predictors of a student’s success after college are class attendance $(x_{1})$ (recorded as a percent) and GPA $(x_{2})$. To test this, she samples clients after they have found a job and lets the dependent variable be the number of weeks it took them to find a job. A multiple regression was run and the following line of best fit was found $\hat{y} = 38.6609 - 0.3345 x_{1} - 1.0743 x_{2}$. Predict the number of weeks it will take a client to find a job, given that she had a 94% $(x_{1} = 94)$ attendance rate in college and a 3.82 GPA. 37. A study conducted by the American Heart Association provided data on how age, blood pressure and smoking relate to the risk of strokes. The following data is the SPSS output with Age in years $(x_{1})$, Blood Pressure in mmHg $(x_{2})$, Smoker (0 = Nonsmoker, 1 = Smoker) $(x_{3})$ and the Risk of a Stroke as a percent $(y)$. a) Use $\alpha = 0.05$ to test the claim that the regression model is significant. State the hypotheses, test statistic, p-value, decision and summary. b) Use the estimated regression equation to predict the stroke risk for a 70-year-old smoker with a blood pressure of 180. c) Interpret the slope coefficient for age. d) Find the adjusted coefficient of determination. 38. Use technology to run a multiple linear regression with a dependent variable of annual salary for adults with full-time jobs in San Francisco and independent variables of years of education completed and age. The data for the sample used is below. Annual Salary Years of Education Age 82,640 17 34 95,854 18 42 152,320 21 39 49,165 13 25 31,120 12 31 67,500 16 32 42,590 12 28 57,245 14 55 58,940 16 45 67,250 18 40 56,120 16 39 38,955 12 34 74,650 16 33 53,495 16 29 67,210 16 30 79,365 16 50 96,045 18 51 78,472 14 60 124,975 21 52 43,125 12 36 a) Test to see if the overall model is significant using $\alpha = 0.05$. b) Compute the predicted salary for a 35-year-old with 16 years of education. c) Compute the adjusted coefficient of determination. d) Are all the independent variables statistically significant? Explain. 39. A study was conducted to determine if there was a linear relationship between a person's weight in pounds with their gender, height and activity level. A person’s gender was recorded as a 0 for anyone who identified as male and 1 for those who did not identify as male. Height was measured in inches. Activity level was coded as 1, 2, or 3; the more active the person was, the higher the value. a) Interpret the slope coefficient for height. b) Predict the weight for a male who is 70 inches tall and has an activity level of 2. c) Calculate the adjusted coefficient of determination. 40. A professor claims that the number of hours a student studies for their final paired with the number of homework assignments completed throughout the semester (out of 20 total assignment opportunities) is a good predictor of a student’s final exam grade. She collects a sample of students and records the following data. Final Exam Score # Hours Studied HW Completed 85 5.2 19 74 5.1 18 79 4.9 16 62 3.7 12 96 9 17 52 1 12 73 2 15 81 2 16 90 7.25 18 79 4.5 14 83 3 20 76 3.2 17 92 6.8 20 84 6.2 18 50 4.1 10 a) Test the professor’s claim using $\alpha = 0.05$. 
b) Compute the multiple correlation coefficient. c) Interpret the slope coefficient for number of hours studied. d) Compute the adjusted coefficient of determination. Solutions to Odd-Numbered Exercises 1. a 3. b 5. d 7. $r = 0.8241$ 9. c 11. a) $\hat{y} = 25.6472 + 2.8212 x$ b) $67.9657$ c) $-2.9657$ d) $61.3248 < y < 74.6065$ 13. $R^{2} = 0.679$ 15. a 17. a) $H_{0}: \rho = 0; H_{1}: \rho \neq 0$ b) $r = 0.9403$ c) $0.0052$ d) Reject $H_{0}$ e) Yes f) $R^{2} = 0.8842$ g) $\hat{y} = 1.6307 + 0.0257x$ h) $2.6154 < y < 5.7852$ i) We can be 95% confident that the predicted percent of lead extracted in solutions of magnesium chloride at 100 minutes is anywhere between 2.6154 and 5.7852. 19. a) $\hat{y} = 241.8127 - 1.5618x$ b) Every year a person ages, their peak heart rate decreases by an average of 1.5618. c) $R^{2} = 0.93722$ d) $0.06278$ e) $5.6924$ f) $r = -0.9681$ 21. 5.2224 23. a) $H_{0}: \rho = 0; H_{1}: \rho \neq 0; t = -5.2514;$ p-value = $0.000022$. Reject $H_{0}$. There is a significant correlation between the driving speed of a vehicle and the corresponding gas mileage. b) 2.49433 c) $\hat{y} = 32.40313 - 0.1662x$ d) $14.8697 < y < 24.3424$ 25. a) $H_{0}: \rho = 0; H_{1}: \rho \neq 0; t = 6.9083;$ p-value = $0.000007$. Reject $H_{0}$. There is a significant correlation between a football player’s weight and 40-yard-dash time. b) $s = 0.160595$ c) $\hat{y} = 3.722331 + 0.006396x; 5.0016$ d) $4.527 < y < 5.476$ e) We can be 99% confident that the predicted 40-yard-dash time for a 200-pound football player is between 4.527 and 5.476 minutes. 27. $36.605 < y < 47.740$ 29. a) No b) The p-value = 0.00000304 suggests that there is a significant linear relationship between the intensity (in candelas) of a 100-watt light bulb and the distance (in meters) from the light source. However, the residual plot clearly shows a nonlinear relationship. Even though we can fit a straight line through the points, we would get a better fit with a curve. 31. a) $\hat{y} = 1.76267 + 0.001883x$ b) $H_{0}: \beta_{1} = 0; H_{1}: \beta_{1} \neq 0; t = 5.0668;$ p-value = $0.0015$. Reject $H_{0}$. There is a significant linear relationship between a nursing baby’s weight gain and the calorie intake of the mother. c) $6.4693 \mathrm{~lbs}$ d) $5.571 < y < 7.368$ 33. c 35. 11.301 37. a) $H_{0}: \beta_{1} = \beta_{2} = \beta_{3} = 0; H_{1}:$ At least one slope is not zero; $F = 36.823$; p-value = $0.000000204$. Reject $H_{0}$. There is a significant relationship between a person’s age, blood pressure and smoking status and the risk of a stroke. b) $37.731\%$ c) For each year a person ages, their chance of a stroke increases by 1.077%. d) $84.9\%$ 39. a) For each additional inch in height, the predicted weight would increase by 3.7 pounds. b) $171.34 \mathrm{~lbs}$ c) $65.56\%$
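Several of the exercises above ask for a regression equation and a prediction interval. The text assumes a TI-84 or Excel will be used; as an alternative, the following is a minimal sketch in Python (the numpy and statsmodels libraries are assumed here and are not part of the text's instructions), applied to the data from exercise 31. The printed coefficients and interval should be close to the answers listed above, with small differences possible due to rounding.

import numpy as np
import statsmodels.api as sm

# Data from exercise 31: mother's average daily calories (x) and baby's weight gain in pounds (y)
calories = np.array([1523, 1649, 1677, 1780, 1852, 2065, 2096, 2145, 2378])
gain = np.array([4.62, 4.77, 4.62, 5.12, 5.81, 5.34, 5.89, 5.96, 6.05])

X = sm.add_constant(calories)           # add the intercept column
model = sm.OLS(gain, X).fit()
print(model.params)                     # b0 and b1 in y-hat = b0 + b1*x

# 95% prediction interval for a mother who gets 2,500 calories per day
pred = model.get_prediction(np.array([[1.0, 2500.0]]))
print(pred.summary_frame(alpha=0.05))   # obs_ci_lower / obs_ci_upper is the prediction interval

Changing alpha to 0.10 or 0.01 gives the 90% and 99% prediction intervals requested in other exercises, and adding more columns to X handles the multiple regression problems in the same way.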
The z-test, t-test, and F-test that we have used in the previous chapters are called parametric tests. These tests have many assumptions that have to be met for the hypothesis test results to be valid. This chapter gives alternative methods for a few of these tests when these assumptions are not met. Advantages for using nonparametric methods: • They can be used to test population parameters when the variable is not normally distributed. • They can be used when the data are nominal or ordinal. • They can be used to test hypotheses that do not involve population parameters. • In some cases, the computations are easier than those for the parametric counterparts. • They are easy to understand. Disadvantages for using nonparametric methods: • They are less sensitive than their parametric counterparts when the assumptions of the parametric methods are met. Therefore, larger differences are needed before the null hypothesis can be rejected. • They tend to use less information than the parametric tests. For example, the sign test requires the researcher to determine only whether the data values are above or below the median, not how much above or below the median each value is. • They are less efficient than their parametric counterparts when the assumptions of the parametric methods are met. That is, larger sample sizes are needed to overcome the loss of information. For example, the nonparametric sign test is about 60% as efficient as its parametric counterpart, the t-test. Thus, a sample size of 100 is needed for use of the sign test, compared with a sample size of 60 for use of the t-test to obtain the same results. 13.02: Sign Test The sign test can be used for both one sample or for two dependent groups. The sign test uses a Binomial Distribution and looks at the probability of a success as 50%. The median is the 50th percentile, so many times we will state our null hypothesis as the median is equal to a certain value. However, sometimes we will state the hypothesis in terms of a proportion. Two-Tailed Test Right-Tailed Test Left-Tailed Test $H_{0}:$ Median $= \text{MD}_{0}$ $H_{1}:$ Median $\neq \text{MD}_{0}$ $H_{0}:$ Median $= \text{MD}_{0}$ $H_{1}:$ Median $> \text{MD}_{0}$ $H_{0}:$ Median $= \text{MD}_{0}$ $H_{1}:$ Median $< \text{MD}_{0}$ $\text{MD}_{0}$ is a placeholder for the number for the hypothesized median. The Sign Test Procedure For the single-sample test, compare each value with the conjectured median. If a data value is larger than the hypothesized median, replace the value with a positive sign. If a data value is smaller than the hypothesized median, replace the value with a negative sign. If the data value equals the hypothesized median, replace the value with a 0. The sample size is the number of plus and minus signs added together (do not include data values that tie with the median). For the paired-sample sign test, subtract the group 2 values from the group 1 values and indicate the difference with a positive or negative sign, or 0 (if they tie) and $n$ = total number of positive and negative signs (do not include differences of zero). Use the binomial distribution to find the p-value using technology. • For a two-tailed test, the test statistic, $x$, is the smaller of the plus or minus signs. If $x$ is the test statistic, the p-value for a two-tailed test is the $2 \cdot \text{P} (X \leq x)$. • For a right-tailed test, the test statistic, $x$, is the number of plus signs. For a left-tailed test, the test statistic, $x$, is the number of minus signs. 
The p-value for a one-tailed test is the $\text{P} (X \geq x)$. The sign test is an alternative to the one sample t-test when you have a small sample size, but the population is not normally distributed. The sign test is also an alternative to the paired sample t-test when you have a small sample size and the difference in the pairs is not normally distributed. The sign test does not use the magnitude of the difference between each data value and the hypothesized value, so it is not as efficient as the t-test. A student tells her parents that the median rental rate for a studio apartment in Portland is \$700. Her parents are skeptical and believe the rent is different. A random sample of studio rentals is taken from the internet; prices are listed below. Test the claim that there is a difference using $\alpha$ = 0.10. Should the parents believe their daughter? 700 650 800 975 855 785 759 640 950 715 825 980 895 1025 850 915 740 985 770 785 700 925 Solution 1. The hypotheses for this example are: $H_{0}:$ Median $= 700$ $H_{1}:$ Median $\neq 700$ 2. Find the test statistic. Compare each value to the median. If the value is below the median, then give it a negative sign; if the value is above the median, then give it a positive sign. If the value is tied with the median, then give it a zero. 700 0 650 - 800 + 975 + 855 + 785 + 759 + 640 - 950 + 715 + 825 + 980 + 895 + 1025 + 850 + 915 + 740 + 985 + 770 + 785 + 700 0 925 + Count the number of positive and negative signs. Positive signs = 18, Negative signs = 2. The sample size is then $18 + 2 = 20$. The test statistic is the smaller of the number of plus or minus signs. Therefore, in this case, the test statistic is 2. 3. Using the p-value method, the p-value is $2 \cdot \text{P} (X \leq \text{Test Statistic})$ using a binomial distribution with $p = 0.5$. With the sample size $n = 20$ and $p = 0.5$, then $q = 1 - p = 0.5$. The test statistic is $x = 2$, so find $2 \cdot \text{P} (X \leq 2)$. $\text{P} (X \leq 2) = \text{P}(X = 0) + \text{P}(X = 1) + \text{P}(X = 2) = {}_{20} C_{0} \cdot 0.5^{0} \cdot 0.5^{20} + {}_{20} C_{1} \cdot 0.5^{1} \cdot 0.5^{19} + {}_{20} C_{2} \cdot 0.5^{2} \cdot 0.5^{18} = 0.0000010 + 0.0000191 + 0.0001812 = 0.000201.$ Since this is a two-tailed test, we multiply the probability by 2 to get $2 \cdot 0.000201 = 0.000402$. We can also use the TI-84 calculator for a two-tailed test to get $2* \text{binomcdf}(20,0.5,2) = 0.000402.$ The p-value = 0.000402. 4. The p-value is smaller than alpha; therefore reject $H_{0}$. 5. There is enough evidence to support the parents’ claim that the median rent for a studio apartment in Portland is not \$700. The critical values for the sign test come from a binomial distribution when the probability of a success is 50% since the median is the 50th percentile, and the sample size is 20. If you were to calculate the discrete probability distribution for each possible value of $x$, you would get the following discrete probability distribution table (Figure 13-1) and corresponding graph (Figure 13-2). When we add up the highlighted probabilities we would get a probability of approximately $0.0206949 + 0.0206949 = 0.0414$, which is below an alpha of 0.05 for a two-tailed test. If we were to add in the values of $x = 6$ and $x = 14$ we would get 0.1153, which is above our value for alpha. Figure 13-2 is a bar graph showing the binomial distribution and shaded critical values. This means that $x = 5$ and $x = 15$ are the critical values for a two-tailed sign test with $n = 20$.
If the test statistic is less than or equal to 5 or greater than or equal to 15, we would reject $H_{0}$. The test statistic is the smaller of plus or minus signs, which is 2. Since $2 \leq 5$, we would reject $H_{0}$, which agrees with the p-value method. If you were doing a one-tailed test you would use the probabilities for one of the tails. A professor believes that a new online learning curriculum is increasing the median final exam score from the previous year, which was 75. A random sample of final exam scores was collected for students that went through the new curriculum. Test to see if the new curriculum is effective using $\alpha = 0.05$. 78 100 75 64 87 80 72 91 89 70 82 76 Solution The hypotheses are: $H_{0}:$ Median $= 75$ $H_{1}:$ Median $> 75$ Find the test statistic. Compare each value to the median. If the value is below the median, then give it a negative sign; if the value is above the median, then give it a positive sign. If the value is tied with the median, then give it a zero. 78 + 100 + 75 0 64 - 87 + 80 + 72 - 91 + 89 + 70 - 82 + 76 + Count the number of positive and negative signs. Positive signs = 8, Negative signs = 3. The sample size is then $8 + 3 = 11$. The test statistic for a right-tailed test is the number of plus signs. Therefore, in this case, the test statistic is 8. To find the critical value, use technology to find the probabilities for $x = 0$ to $x=11$ for a binomial distribution with $n = 11$ and $p = 0.5$. See Figure 13-3 for the results. Since $\alpha = 0.05$, add up the areas starting at the bottom at $x = 11$ until you get a sum of no more than 0.05. $\text{P} (9 \leq X \leq 11) = \text{P}(X=9) + \text{P}(X=10) + \text{P}(X=11) = {}_{11} C_{9} \cdot 0.5^{9} \cdot 0.5^{2} + {}_{11} C_{10} \cdot 0.5^{10} \cdot 0.5^{1} + {}_{11} C_{11} \cdot 0.5^{11} \cdot 0.5^{0} = 0.03272.$ If we add in the next value of $\text{P}(X = 8) = 0.08057$, the sum would exceed 0.05, so we would stop at a critical value of $X = 9$. See Figure 13-4. Since the test statistic $X = 8$ is not in the rejection area, we would fail to reject $H_{0}$. At the 5% significance level, there is not enough evidence to support the claim that the new online curriculum increased the median final exam score. The median annual salary for high school teachers in the United States was \$60,320. A teacher believes that the median high school salary in Oregon is significantly less than the national median. A sample of 100 high school teachers’ salaries found that 58 were below \$60,320, 40 were above \$60,320, and 2 were exactly \$60,320. Use $\alpha = 0.05$ to test their claim. Solution The hypotheses are: $H_{0}:$ Median $=$ \$60,320 $H_{1}:$ Median $<$ \$60,320 (claim) The sample size $n = 58 + 40 = 98$. We will use $x = 58$. The p-value is found by taking $\text{P}(X \geq 58)$. This is a lot of work by hand, so use technology. For a TI-84 calculator use $1 - \text{binomcdf}(98, 0.5, 57) = 0.0427.$ The p-value = 0.0427, which is less than $\alpha = 0.05$, so reject $H_{0}$. There is enough evidence to support the claim that the median high school salary in Oregon is significantly less than the national median of \$60,320. The sign test can also be used for dependent samples when the assumptions for a paired t-test are not met. A manufacturer believes that if routine maintenance (cleaning and oiling of machines) is increased to once a day rather than once a week, the number of defective parts produced by the machines will decrease.
Nine machines are selected, and the number of defective parts produced over a 24-hour operating period is counted. Maintenance is then increased to once a day for a week, and the number of defective parts each machine produces is again counted over a 24-hour operating period. The data are shown here. At $\alpha = 0.05$, can the manufacturer conclude that the additional maintenance reduces the number of defective parts manufactured by the machines? Machine 1 2 3 4 5 6 7 8 9 Before 6 18 5 4 16 13 20 9 3 After 5 16 7 4 18 12 14 7 1 Solution $H_{0}:$ The additional maintenance does not reduce the number of defective parts manufactured by the machines. $H_{1}:$ The additional maintenance reduces the number of defective parts manufactured by the machines. Next, for each pair, take $\text{Before} - \text{After}$. If this difference is positive, then put a $+$ sign next to it; if the difference is negative, then put a $-$ sign next to it; and if the difference is zero, then put a 0 next to it. Machine 1 2 3 4 5 6 7 8 9 Before 6 18 5 4 16 13 20 9 3 After 5 16 7 4 18 12 14 7 1 Sign of Difference $+$ $+$ $-$ 0 $-$ $+$ $+$ $+$ $+$ Count the number of positive and negative signs. Positive signs = 6, Negative signs = 2. The sample size is then $6 + 2 = 8$. Note that this is a right-tailed test, since we are looking at “reducing” defective parts so that $\text{Before} > \text{After}$. The test statistic is the number of plus signs, $x = 6$. $\text{P}(X \geq 6) = \text{P}(X=6) + \text{P}(X=7) + \text{P}(X=8) = {}_{8} C_{6} \cdot 0.5^{6} \cdot 0.5^{2} + {}_{8} C_{7} \cdot 0.5^{7} \cdot 0.5^{1} + {}_{8} C_{8} \cdot 0.5^{8} \cdot 0.5^{0} = 0.109375 + 0.031250 + 0.003906 = 0.144531$, which is the p-value. For a TI calculator, use $1 - \text{binomcdf}(8, 0.5, 5) = 0.1445$. The p-value = 0.1445, which is greater than $\alpha = 0.05$; therefore do not reject $H_{0}$. There is not enough evidence to support the claim that the additional maintenance reduces the number of defective parts manufactured by the machines.
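The binomial calculations in these sign test examples can also be done outside of a TI-84. Below is a minimal sketch in Python for the maintenance example above (the scipy library is assumed here; it is not part of the text).

from scipy.stats import binom

before = [6, 18, 5, 4, 16, 13, 20, 9, 3]
after = [5, 16, 7, 4, 18, 12, 14, 7, 1]

diffs = [b - a for b, a in zip(before, after)]
plus = sum(1 for d in diffs if d > 0)    # number of + signs
minus = sum(1 for d in diffs if d < 0)   # number of - signs (zero differences are dropped)
n = plus + minus                         # n = 8 for this example

# Right-tailed test: p-value = P(X >= number of plus signs) with p = 0.5
x = plus
p_value = binom.sf(x - 1, n, 0.5)        # P(X >= x), the same as 1 - binomcdf(n, 0.5, x - 1)
print(n, x, round(p_value, 4))           # 8 6 0.1445

For a two-tailed sign test, such as the studio apartment example, the p-value would instead be 2 * binom.cdf(min(plus, minus), n, 0.5), which reproduces the 0.000402 found earlier.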
The last few sections in this chapter require one to rank a data set. To rank a data set, you first must arrange the data from smallest to largest. The smallest value gets a rank of 1, the next smallest gets a rank of 2, etc. If there are any values that tie, then each of the tied values gets the average of the corresponding ranks. Rank the following random sample: $8, -4, 1, -3, 5, 2, -3, 0, 5, 3, 5.$ Solution First, sort the data from smallest to largest: $-4, -3, -3, 0, 1, 2, 3, 5, 5, 5, 8$ Next, rank the data. The $-4$ gets a rank of 1. There is a tie between the next two values of $-3$. They would have received the ranks of 2 and 3, but we do not want one of the values to be ranked higher than the other so we give both $-3$’s a rank of $\frac{2+3}{2} = 2.5$. Then the next value of $0$ gets a rank of 4 (we already used the 2nd and 3rd positions). The next set of ties for the three $5$’s occurs for the rank of 8th, 9th and 10th place. The average of these ranks is $\frac{8+9+10}{3} = 9$. The following is a table of the sorted data with the corresponding ranks. Data $-4$ $-3$ $-3$ $0$ $1$ $2$ $3$ $5$ $5$ $5$ $8$ Rank 1 2.5 2.5 4 5 6 7 9 9 9 11 If there is no tie for the last data point, then your last rank will be the same as your sample size. What is the rank for the number 15 in the following sample: $10, 25, 15, 8, 20, 15, 10, 9, 8, 22$? Solution Order the data from smallest to largest: $8, 8, 9, 10, 10, 15, 15, 20, 22, 25$. Next, rank the data. The two $8$’s tie for first and second place, so each gets a rank of $\frac{1+2}{2} = 1.5$. The $9$ is in the third spot so it gets a rank of 3. The two $10$’s tie for fourth and fifth place so each gets a rank of $\frac{4 + 5}{2} = 4.5$. The two $15$’s tie for sixth and seventh place so each gets a rank of $\frac{6+7}{2} = 6.5$. The next three numbers get the ranks of 8, 9 and 10. The answer, then, is 6.5, the rank of the number $15$. The Tevis Cup Ride is a 24-hour, 100-mile horse race over the Sierra Nevada mountains from Lake Tahoe to Auburn in a single day. The top 10 completion times for 2019 are shown below. Rank the completion times. Name Completion Time Sanoma Blakeley $09\text{:}27 \text{ PM}$ Jeremy Reynolds $09\text{:}27 \text{ PM}$ Haley Moquin $09\text{:}36 \text{ PM}$ Richard George $09\text{:}37 \text{ PM}$ Suzanne Huff $09\text{:}54 \text{ PM}$ Karen Donley $09\text{:}54 \text{ PM}$ Nicki Meuten $10\text{:}06 \text{ PM}$ Gwen Hall $10\text{:}20 \text{ PM}$ Lindsay Fisher $10\text{:}28 \text{ PM}$ Suzanne Hayes $10\text{:}29 \text{ PM}$ Solution The data are already ordered. There are two ties at $9\text{:}27$ and $9\text{:}54$. Time $9\text{:}27$ $9\text{:}27$ $9\text{:}36$ $9\text{:}37$ $9\text{:}54$ $9\text{:}54$ $10\text{:}06$ $10\text{:}20$ $10\text{:}28$ $10\text{:}29$ Rank 1.5 1.5 3 4 5.5 5.5 7 8 9 10
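The average-rank rule illustrated above is what most statistical software uses by default. A minimal sketch in Python for the first example (the scipy library is assumed here; it is not part of the text):

from scipy.stats import rankdata

data = [8, -4, 1, -3, 5, 2, -3, 0, 5, 3, 5]
ranks = rankdata(data)   # tied values receive the average of the ranks they occupy
print(ranks)             # rank of each value in its original position:
                         # 8 -> 11, -4 -> 1, each -3 -> 2.5, 0 -> 4, each 5 -> 9, etc.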
The Wilcoxon Signed-Rank Sum test is the non-parametric alternative to the dependent t-test. The Wilcoxon Signed-Rank Sum test compares the medians of two dependent distributions. The Signed-Rank Sum test, developed by Frank Wilcoxon, finds the difference between paired data values and ranks the absolute value of the differences. Then we sum the ranks for all the negative and positive differences separately. The absolute value of the smaller of these summed ranks is called $w_{s}$. If there were any differences of zero you would not count them in your sample size. Small Sample Size Case: $n < 30$ When the sample size is less than 30, the test statistic is $w_{s}$, the absolute value of the smaller of the sum of ranks. Figure 13-5 provides critical values for the Wilcoxon Signed-Rank test. If the test statistic $w_{s}$ is greater than the critical value from the table, we fail to reject $H_{0}$. If the test statistic $w_{s}$ is less than or equal to the critical value from the table, we reject $H_{0}$. Figure 13-5: Critical Values for the Signed Rank Test. Dashes indicate that the sample is too small to reject $H_{0}$. 1-Tailed $\alpha$ 2-Tailed $\alpha$ $n$ 0.01 0.05 0.10 0.01 0.05 0.10 5 - 0 2   - - 0 6 - 2 3   - 0 2 7 0 3 5   - 2 3 8 1 5 8   0 3 5 9 3 8 10   1 5 8 10 5 10 14   3 8 10 11 7 13 17   5 10 13 12 9 17 21   7 13 17 13 12 21 26   9 17 21 14 15 25 31   12 21 25 15 19 30 36   15 25 30 16 23 35 42   19 29 35 17 27 41 48   23 34 41 18 32 47 55   27 40 47 19 37 53 62   32 46 53 20 43 60 69   37 52 60 21 49 67 77   42 58 67 22 55 75 86   48 65 75 23 62 83 94   54 73 83 24 69 91 104   61 81 91 25 76 100 113   68 89 100 26 84 110 124   75 98 110 27 92 119 134   83 107 119 28 101 130 145   91 116 130 29 110 140 157   100 126 140 In an effort to increase production of an automobile part, the factory manager decides to play music in the manufacturing area. Eight workers are selected, and the number of items each produced for a specific day is recorded. After one week of music, the same workers are monitored again. The data are given in the table. At $\alpha = 0.05$, can the manager conclude that listening to music has increased production? Use the Wilcoxon Signed-Rank Test since there is no mention of the population being normally distributed. Worker 1 2 3 4 5 6 7 8 9 Before 6 8 10 9 5 12 9 5 7 After 10 12 9 12 8 13 8 5 10 Solution The correct hypotheses are: $H_{0}$: Music in the manufacturing area does not increase production. $H_{1}$: Music in the manufacturing area increases production. This is a left-tailed test. In order to compute the t-test statistic, first compute the differences between each of the matched pairs. Before $(x_{1})$ 6 8 10 9 5 12 9 5 7 After $(x_{2})$ 10 12 9 12 8 13 8 5 10 $D = x_{1} - x_{2}$ –4 –4 1 –3 –3 –1 1 0 –3 Take the absolute value of each difference. Before $(x_{1})$ 6 8 10 9 5 12 9 5 7 After $(x_{2})$ 10 12 9 12 8 13 8 5 10 $D = x_{1} - x_{2}$ –4 –4 1 –3 –3 –1 1 0 –3 $|D|$ 4 4 1 3 3 1 1 0 3 Rank the data and drop any ties. At this point, if any of the differences are zero, that pair is no longer used and is not ranked. Before $(x_{1})$ 6 8 10 9 5 12 9 5 7 After $(x_{2})$ 10 12 9 12 8 13 8 5 10 $D = x_{1} - x_{2}$ –4 –4 1 –3 –3 –1 1 0 –3 $|D|$ 4 4 1 3 3 1 1 0 3 Rank 7.5 7.5 2 5 5 2 2 drop 5 The sample size $n$ is the number of differences that are not zero. So, in this case, $n = 8$. Next, take the sign of the difference and attach this plus or minus sign to each rank. 
Before $(x_{1})$ 6 8 10 9 5 12 9 7 After $(x_{2})$ 10 12 9 12 8 13 8 10 $D = x_{1} - x_{2}$ –4 –4 1 –3 –3 –1 1 –3 $|D|$ 4 4 1 3 3 1 1 3 Rank 7.5 7.5 2 5 5 2 2 5 Signed Rank –7.5 –7.5 +2 –5 –5 +2 +2 –5 Find the sum of the positive and negative ranks: Positive ranks: $2 + 2 = 4$ Negative ranks: $(-7.5) + (-7.5) + (-5) + (-5) + (-2) + (-5) = -32$. Take the smaller of the absolute value of the sums of the ranks: $|4| = 4, |-32| = 32$, so 4 is smaller. This is our test statistic called $w_{s} = 4$. Next, use the table in Figure 13-5 to get the critical value. The table provides critical values for two-tailed tests. This is a one-tailed test, with $\alpha = 0.05$ and $n = 8$. See Figure 13-6 that shows which row and column from Figure 13- 5 to use to find the critical value. Figure 13-6: Critical value for 1-tailed test with $\alpha = 0.05$ and $n=8$. 1-Tailed $\alpha$ $n$ 0.01 0.05 0.10 5 - 0 2 6 - 2 3 7 0 3 5 8 1 5 8 The critical value = 5. The test statistic $w_{s} = 4$ is less than the critical value of 5. The decision rule for the critical values in Figure 13-5 is to reject the null if the test statistic is less than or equal to the critical value, and do not reject the null hypothesis if the test statistic is larger than the critical value. Since $w_{s} < \text{Critical Value} = 5$, the decision is to reject $H_{0}$. There is enough evidence to support the claim that listening to music has increased production. When the sample size is 30 or more,a paired t-test may be used in most situations. However, if your population is heavily skewed or you are using interval data, then use the large sample size normal approximation Wilcoxon Signed-Rank test. Large Sample Size Case: $n \geq 30$ We can use the normal approximation for sample sizes of 30 or more. The formula for the test statistic is: $z = \frac{\left(w_{s} - \left(\dfrac{n (n+1)}{4}\right) \right)}{\sqrt{\left( \dfrac{n(n+1)(2n+1)}{24} \right)}} \nonumber$ where $n$ is the reduced samples size excluding any differences of zero, and $w_{s}$ is the smaller in absolute value of the signed ranks for a two-tailed test, the sum of the positive ranks for a left-tailed test, or the sum of the negative ranks for a right-tailed test. The sample size $n$ is the reduced sample size not including any differences of zero. A pharmaceutical company is testing to see if there is a significant difference in the pain relief for two new pain medications. They randomly assign the two different pain medications for 34 patients with chronic pain and record the pain rating for each patient one hour after each dose. The pain ratings are on a sliding scale from 1 to 10. The results are listed below. Use the Wilcoxon Signed-Rank test to see if there is a significant difference at $\alpha = 0.05$. Patient Drug 1 Drug 2 Patient Drug 1 Drug 2 Patient Drug 1 Drug 2 1 2.4 2.5   13 4 6.1   25 4 5.1 2 4.7 3.3   14 2.2 2.9   26 5.5 4.4 3 1.2 5.3   15 2.7 4.3   27 3.6 3.6 4 5.9 5.6   16 2.9 3.3   28 3.8 3.5 5 4.5 5   17 5 5   29 5.4 4.8 6 4 5.3   18 3.1 5.1   30 2.4 3.2 7 2.5 4.6   19 3.3 3.3   31 4.1 2.6 8 3 2.5   20 3 5.9   32 4.5 5.7 9 5 3.4   21 5.4 3.2   33 4 5.8 10 5.8 5.4   22 4.2 5.9   34 6 5 11 1.9 5.1   23 3.6 5.9         12 3.2 4.3   24 2.2 5.6 Solution The correct hypotheses are: $H_{0}$: There is no difference in the pain scale rating for the two pain medications. $H_{1}$: There is a difference in the pain scale rating for the two pain medications. Compute the differences between each of the matched pairs. Rank the absolute value of the differences. 
Make sure to average the ranks repeated differences and do not rank any differences of zero. After the differences are ranked, attach this sign of the difference to each rank. Patient Drug 1 Drug 2 Difference $|D|$ Rank Signed Rank 1 2.4 2.5 –0.1 0.1 1 –1 2 4.7 3.3 1.4 1.4 17 17 3 1.2 5.3 –4.1 4.1 31 –31 4 5.9 5.6 0.3 0.3 2.5 2.5 5 4.5 5 –0.5 0.5 6.5 –6.5 6 4 5.3 –1.3 1.3 16 –16 7 2.5 4.6 –2.1 2.1 24.5 –24.5 8 3 2.5 0.5 0.5 6.5 6.5 9 5 3.4 1.6 1.6 19.5 19.5 10 5.8 5.4 0.4 0.4 4.5 4.5 11 1.9 5.1 –3.2 3.2 29 –29 12 3.2 4.3 –1.1 1.1 13 –13 13 4 6.1 –2.1 2.1 24.5 –24.5 14 2.2 2.9 –0.7 0.7 9 –9 15 2.7 4.3 –1.6 1.6 19.5 –19.5 16 2.9 3.3 –0.4 0.4 4.5 –4.5 17 5 5 0 18 3.1 5.1 –2 2 23 –23 19 3.3 3.3 0 20 3 5.9 –2.9 2.9 28 –28 21 5.4 3.2 2.2 2.2 26 26 22 4.2 5.9 –1.7 1.7 21 –21 23 3.6 5.9 –2.3 2.3 27 –27 24 2.2 5.6 –3.4 3.4 30 –30 25 4 5.1 –1.1 1.1 13 – 13 26 5.5 4.4 1.1 1.1 13 13 27 3.6 3.6 0 28 3.8 3.5 0.3 0.3 2.5 2.5 29 5.4 4.8 0.6 0.6 8 8 30 2.4 3.2 –0.8 0.8 10 –10 31 4.1 2.6 1.5 1.5 18 18 32 4.5 5.7 –1.2 1.2 15 –15 33 4 5.8 –1.8 1.8 22 –22 34 6 5 1 1 11 11 Find the sum of the positive and negative ranks: Positive ranks: $17 + 2.5 + 6.5 + 19.5 + 4.5 + 26 + 13 + 2.5 + 8 + 18 + 11 = 128.5$ Negative ranks: $(-1) + (-31) + (-6.5) + (-16) + (-24.5) + (-29) + (-13) + (-24.5) + (-9) + (-19.5) + (-4.5) + (-23) + (-28) + (-21) + (-27) + (-30) + (-13) + (-10) + (-15) + (-22) = -367.5$. Take the smaller of the absolute value of the sums of the ranks: $|128.5| = 128.5, |-367.5| = 367.5$, so 128.5 is smaller. The smaller of the absolute value of the sum of the ranks is $w_{s} = 128.5$. Throw out the three differences of zero. The sample size is $n = 31$. The test statistic is: $z = \frac{\left(w_{s} - \left(\frac{n(n+1)}{4}\right)\right)}{\sqrt{\left( \frac{n (n+1) (2n+1)}{24}\right) }} = \frac{\left(128.5 - \left( \frac{31 \cdot 32}{4}\right) \right)}{\sqrt{\left( \frac{31 \cdot 32 \cdot 63}{24}\right) }} = \frac{(128.5 - 248)}{\sqrt{(2604)}} = -2.341787$ Either the $z$ critical value or p-value method may be used, similar to how we used previous z-tests. Compute the $z_{\alpha/2}$ critical values. Draw and label the distribution; see Figure 13-7. Use the inverse normal function $\text{invNorm}(0.025,0,1)$ to get $z_{\alpha/2} = \pm 1.96$. The test statistic $z = -2.3418$ is in the shaded critical region, so reject $H_{0}$. There is enough evidence to support the claim that there is a significant difference in the pain scale rating for the two pain medications. These calculations can be done by hand or using the following online calculator: http://www.socscistatistics.com/tests/signedranks. The TI calculators and Excel do not have built-in nonparametric tests. It is an important and popular fact that things are not always what they seem. For instance, on the planet Earth, man had always assumed that he was more intelligent than dolphins because he had achieved so much – the wheel, New York, wars and so on – whilst all the dolphins had ever done was muck about in the water having a good time. But conversely, the dolphins had always believed that they were far more intelligent than man – for precisely the same reasons. (Adams, 2002)
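Both of the Wilcoxon Signed-Rank examples can be scripted. The following is a minimal sketch in Python (numpy and scipy are assumed here; they are not part of the text) that reproduces the rank sums for the music example and then applies the large-sample z formula to the values reported for the pain medication example.

import numpy as np
from scipy.stats import rankdata, norm

def signed_rank_sums(x1, x2):
    # Sum of positive ranks, sum of negative ranks, and n after dropping zero differences.
    d = np.asarray(x1, dtype=float) - np.asarray(x2, dtype=float)
    d = d[d != 0]                      # drop differences of zero
    ranks = rankdata(np.abs(d))        # average ranks of |D|
    return ranks[d > 0].sum(), ranks[d < 0].sum(), len(d)

# Music example (n < 30): the test statistic is the smaller rank sum
before = [6, 8, 10, 9, 5, 12, 9, 5, 7]
after = [10, 12, 9, 12, 8, 13, 8, 5, 10]
w_plus, w_minus, n = signed_rank_sums(before, after)
print(w_plus, w_minus, n, min(w_plus, w_minus))   # 4.0 32.0 8 4.0, matching the worked example

# Large-sample case (n >= 30): plug w_s into the z formula from the text.
# Using the values reported above for the pain medication example, w_s = 128.5 and n = 31:
w_s, n = 128.5, 31
z = (w_s - n * (n + 1) / 4) / np.sqrt(n * (n + 1) * (2 * n + 1) / 24)
print(round(z, 4), round(2 * norm.cdf(-abs(z)), 4))   # about -2.3418 and a two-tailed p-value near 0.019

scipy also has a built-in scipy.stats.wilcoxon function; its default handling of zeros, ties, and the continuity correction may not match this hand-style calculation exactly, so its results can differ slightly.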
The Mann-Whitney U Test is the non-parametric alternative to the independent t-test. The test was expanded on Frank Wilcoxon’s Rank Sum test by Henry Mann and Donald Whitney. The independent t-test assumes the populations are normally distributed. When these conditions are not met, the Mann-Whitney Test is an alternative method. If two groups come from the same distribution and were randomly assigned labels, then the two different groups should have values somewhat equally distributed between the two groups. The Mann-Whitney Test looks at all the possible rankings between the data points. For large sample sizes, a normal approximation of the distribution of ranks is used. Small Sample Size Case $(n \leq 20)$ Combine the data from both groups and sort from smallest to largest. Make sure to label the data values so you know which group they came from. Rank the data. Sum the ranks separately from each group. Let $R_{1}$ = sum of ranks for group one and $R_{2}$ = sum of ranks for group two. Find the $U$ statistic for both groups: $U_{1} = R_{1} - \frac{n_{1} \left(n_{1}+1\right)}{2}, U_{2} = R_{2} - \frac{n_{2} \left(n_{2}+1\right)}{2}$. The test statistic $U = \text{Min} \left(U_{1}, U_{2}\right)$ is the smaller of $U_{1}$ or $U_{2}$. Critical values are found given in the tables in Figures 13-6 $(\alpha = 0.05)$ and 13-7 $(\alpha = 0.01)$. Figure 13-8: Critical Values for 2-Tailed Mann-Whitney U Test for $\alpha = 0.05$ $n_{2}$ $n_{1}$ 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 2 - - - - - - 0 0 0 0 1 1 1 1 1 2 2 2 2 3 - - - 0 1 1 2 2 3 3 4 4 5 5 6 6 7 7 8 4 - - 0 1 2 3 4 4 5 6 7 8 9 10 11 11 12 13 13 5 - 0 1 2 3 5 6 7 8 9 11 12 13 14 15 17 18 19 20 6 - 1 2 3 5 6 8 10 11 13 14 16 17 19 21 22 24 25 27 7 - 1 3 5 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 8 0 2 4 6 8 10 13 15 17 19 22 24 26 29 31 34 36 38 41 9 0 2 4 7 10 12 15 17 21 23 26 28 31 34 37 39 42 45 48 10 0 3 5 8 11 14 17 20 23 26 29 33 36 39 42 45 48 52 55 11 0 3 6 9 13 16 19 23 26 30 33 37 40 44 47 51 55 58 62 12 1 4 7 11 14 18 22 26 29 33 37 41 45 49 53 57 61 65 69 13 1 4 8 12 16 20 24 28 33 37 41 45 50 54 59 63 67 72 76 14 1 5 9 13 17 22 26 31 36 40 45 50 55 59 64 67 74 78 83 15 1 5 10 14 19 24 29 34 39 44 49 54 59 64 70 75 80 85 90 16 1 6 11 15 21 26 31 37 42 47 53 59 64 70 75 81 86 92 98 17 2 6 11 17 22 28 34 39 45 51 57 63 67 75 81 87 93 99 105 18 2 7 12 18 24 30 36 42 48 55 61 67 74 80 86 93 99 106 112 19 2 7 13 19 25 32 38 45 52 58 65 72 78 85 92 99 106 113 119 20 2 8 14 20 27 34 41 48 55 62 69 76 83 90 98 105 112 119 127 Figure 13-9: Critical Values for 2-Tailed Mann-Whitney U Test for $\alpha = 0.01$ $n_{2}$ $n_{1}$ 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 2 - - - - - - - - - - - - - - - - - 0 0 3 - - - - - - - 0 0 0 1 1 1 2 2 2 2 3 3 4 - - - - 0 0 1 1 2 2 3 3 4 5 5 6 6 7 8 5 - - - 0 1 1 2 3 4 5 6 7 7 8 9 10 11 12 13 6 - - 0 1 2 3 4 5 6 7 9 10 11 12 13 15 16 17 18 7 - - 0 1 3 4 6 7 9 10 12 13 15 16 18 19 21 22 24 8 - - 1 2 4 6 7 9 11 13 15 17 18 20 22 24 26 28 30 9 - 0 1 3 5 7 9 11 13 16 18 20 22 24 27 29 31 33 36 10 - 0 2 4 6 9 11 13 16 18 21 24 26 29 31 34 37 39 42 11 - 0 2 5 7 10 13 16 18 21 24 27 30 33 36 39 42 45 46 12 - 1 3 6 9 12 15 18 21 24 27 31 34 37 41 44 47 51 54 13 - 1 3 7 10 13 17 20 24 27 31 34 38 42 45 49 53 56 60 14 - 1 4 7 11 15 18 22 26 30 34 38 42 46 50 54 58 63 67 15 - 2 5 8 12 16 20 24 29 33 37 42 46 51 55 60 64 69 73 16 - 2 5 9 13 18 22 27 31 36 41 45 50 55 60 65 70 74 79 17 - 2 6 10 15 19 24 29 34 39 44 49 54 60 65 70 75 81 86 18 - 2 6 11 16 21 26 31 37 42 47 53 58 64 70 75 81 87 92 19 
0 3 7 12 17 22 28 33 39 45 51 56 63 69 74 81 87 93 99 20 0 3 8 13 18 24 30 36 42 46 54 60 67 73 79 86 92 99 105 If $U$ is less than or equal to the critical value, then reject $H_{0}$. Dashes indicate that the sample is too small to reject $H_{0}$. If you have only sample size above 20, use the following online calculator to find the critical value: https://www.socscistatistics.com/tests/mannwhitney/default.aspx. Student employees are a major part of most college campus employment. Two major departments that participate in student hiring are listed below with the number of hours worked by students for a month. At the 0.05 level of significance, is there sufficient evidence to conclude a difference in hours between the two departments? Athletics 20 24 17 12 18 22 25 30 15 19   Library 35 28 24 20 25 18 22 26 31 21 19 Solution The hypotheses are: $H_{0}$: There is no difference in the number of hours student employees work for the athletics department and the library. $H_{1}$: There is a difference in the number of hours student employees work for the athletics department and the library. Since the sample sizes are small and the distributions are not assumed to be normally distributed, the t-test for independent groups should not be used. Instead, we will use the nonparametric Mann-Whitney Test. To start, combine the groups, sort the data from smallest to largest, and note which group the data point is from. Rank the data and look for the ties. Figure 13-10 shows the ranks for the combined data. Figure 13-10: Ranks for combined and ordered data. Student Department Hours Rank 1 Athletics 12 1 2 Athletics 15 2 3 Athletics 17 3 4 Athletics 18 4.5 5 Library 18 4.5 6 Athletics 19 6.5 7 Library 19 6.5 8 Athletics 20 8.5 9 Library 20 8.5 10 Library 21 10 11 Athletics 22 11.5 12 Library 22 11.5 13 Athletics 24 13.5 14 Library 24 13.5 15 Athletics 25 15.5 16 Library 25 15.5 17 Library 26 17 18 Library 28 18 19 Athletics 30 19 20 Library 31 20 21 Library 35 21 Sum the ranks for each group: $R_{1} = 1 + 2 + 3 + 4.5 + 6.5 + 8.5 + 11.5 + 13.5 + 15.5 + 19 = 85$ $R_{2} = 4.5 + 6.5 +8.5 + 10 + 11.5 + 13.5 + 15.5 + 17 + 18 + 20 + 21 = 146$ Compute the test statistic: $U_{1} = R_{1} - \frac{n_{1} \left(n_{1}+1\right)}{2} = 85 - \frac{10 \cdot 11}{2} = 30$ $U_{2} = R_{2} - \frac{n_{2} \left(n_{2}+1\right)}{2} = 146 - \frac{11 \cdot 12}{2} = 80$ $U = 30$ Find the critical value using Figure 13-8, where $n_{1} = 10$ and $n_{2} = 11$. The critical value = 26. Do not reject $H_{0}$, since $U = 30 > \text{CV} = 26$. There is not enough evidence to support the claim that there is a difference in the number of hours student employees work for the athletics department and the library. Large Sample Size Case ($n_{1} > 20$ and $n_{2} > 20$) Find the $U$ statistic for both groups: $U_{1} = R_{1} - \frac{n_{1} \left(n_{1}+1\right)}{2}$, $U_{2} = R_{2} - \frac{n_{2} \left(n_{2}+1\right)}{2}$. Let $U = \text{Min} \left(U_{1}, U_{2}\right)$, the smaller of $U_{1}$ or $U_{2}$. The formula for the test statistic is: $z = \frac{\left(U - \left( \dfrac{n_{1} \cdot n_{2}}{2} \right)\right)}{\sqrt{\dfrac{n_{1} \cdot n_{2} \left(n_{1} + n_{2} + 1\right)}{12}}} \nonumber$ A manager believes that the sales of coffee at their Portland store is more than the sales at their Cannon Beach store. They take a random sample of weekly sales from the two stores over the last year. Use the Mann-Whitney test to see if the manager’s claim could be true. Use the p-value method with $\alpha = 0.05$. 
Portland Cannon Beach 1510 1257   3585 1510 4125 4677   4399 5244 1510 3055   1764 1510 5244 1764   3853 4399 4125 6128   5244 1510 6128 3319   1510 5244 3319 6433   2533 4125 3319 5244   3585 2275 3055 6134   2533 2275 4025 3015   4399 3585       4125 5244\ Solution Always choose group 1 as the group with the smallest sample size: in this case, Portland. (If the sample sizes are equal, then whatever group comes first in the problem is group one.) If there are no ties at the end, the last rank should match the total of both sample sizes. Combine the data, keeping the group label, then rank the combined data. Order Store Sales Rank Order Store Sales Rank 1 Portland 1257 1   22 Cannon Beach 3585 21 2 Portland 1510 4.5   23 Cannon Beach 3853 23 3 Portland 1510 4.5   24 Portland 4025 24 4 Cannon Beach 1510 4.5   25 Portland 4125 26.5 5 Cannon Beach 1510 4.5   26 Portland 4125 26.5 6 Cannon Beach 1510 4.5   27 Cannon Beach 4125 26.5 7 Cannon Beach 1510 4.5   28 Cannon Beach 4125 26.5 8 Portland 1764 8.5   29 Cannon Beach 4399 30 9 Cannon Beach 1764 8.5   30 Cannon Beach 4399 30 10 Cannon Beach 2275 10.5   31 Cannon Beach 4399 30 11 Cannon Beach 2275 10.5   32 Portland 4677 32 12 Cannon Beach 2533 12.5   33 Portland 5244 35.5 13 Cannon Beach 2533 12.5   34 Portland 5244 35.5 14 Portland 3015 14   35 Cannon Beach 5244 35.5 15 Portland 3055 15.5   36 Cannon Beach 5244 35.5 16 Portland 3055 15.5   37 Cannon Beach 5244 35.5 17 Portland 3319 18   38 Cannon Beach 5244 35.5 18 Portland 3319 18   39 Portland 6128 39.5 19 Portland 3319 18   40 Portland 6128 39.5 20 Cannon Beach 3585 21   41 Portland 6134 41 21 Cannon Beach 3585 21   42 Portland 6433 42 The hypotheses are: $H_{0}$: There is no difference in the coffee sales between the Portland and Cannon Beach stores. $H_{1}$: There is a difference in the coffee sales between the Portland and Cannon Beach stores. Sum the ranks for each group. The sum for the Portland store’s ranks: $R_{1} = 459.5$. The sum for the Cannon Beach store’s ranks: $R_{2} = 443.5$. Compute the test statistic: $U_{1} = R_{1} - \frac{n_{1} \left(n_{1}+1\right)}{2} = 459.5 - \frac{20 \cdot 21}{2} = 249.5$ $U_{2} = R_{2} - \frac{n_{2} \left(n_{2}+1\right)}{2} = 443.5 - \frac{22 \cdot 23}{2} = 190.5$ $U = 190.5$ $z = \frac{190.5 - \left(\frac{20 \cdot 22}{2}\right)}{\sqrt{ \left(\frac{20 \cdot 22 (20 + 22 + 1)}{12}\right) }} = -0.7429$ This test uses the standard normal distribution with the same technique for finding a p-value or critical value as the z-test performed in previous chapters. Compute the p-value for a standard normal distribution for $z = -0.7429$ for a two-tailed test using $2 * \text{normalcdf}(-1E99,-0.7429,0,1) = 0.4575$. The p-value = $0.4575 > \alpha = 0.05$; therefore, do not reject $H_{0}$. This is a two-tailed test with $\alpha = 0.05$. Use the lower tail area of $\alpha/2 = 0.05$ and you get critical values of $z_{\alpha/2} = \pm 1.96$. There is not enough evidence to support the claim that there is a difference in coffee sales between the Portland and Cannon beach stores. There are no shortcut keys on the TI calculators or Excel for this Nonparametric Test. Note that if your data has tied ranks, there are several methods not addressed in this text, to correct the standard deviation. Hence, the z-score in some software packages may not match your results calculated by hand. 13.06: Chapter 13 Formulas Ranking Data • Order the data from smallest to largest. • The smallest value gets a rank of 1. • The next smallest gets a rank of 2, etc. 
• If there are any values that tie, then each of the tied values gets the average of the corresponding ranks. Sign Test $H_{0}:$ Median $= \text{MD}_{0}$ $H_{1}:$ Median $\neq \text{MD}_{0}$ The p-value uses the binomial distribution with $p = 0.5$, where $n$ is the sample size not including ties with the median or differences of 0. • For a two-tailed test, the test statistic, $x$, is the smaller of the plus or minus signs. If $x$ is the test statistic, the p-value for a two-tailed test is $2* \text{P}(X \leq x)$. • For a right-tailed test, the test statistic, $x$, is the number of plus signs. For a left-tailed test, the test statistic, $x$, is the number of minus signs. With $x$ defined this way, the p-value for a one-tailed test is $\text{P}(X \geq x)$, whether the test is right-tailed or left-tailed. Wilcoxon Signed-Rank Test $n$ is the sample size not including a difference of 0. When $n < 30$, use the test statistic $w_{s}$, the smaller in absolute value of the two rank sums (the sum of the positive ranks and the sum of the negative ranks). The CV uses the table in Figure 13-5. If the critical value is not in the table, then use an online calculator: http://www.socscistatistics.com/tests/signedranks. When $n \geq 30$, use the z-test statistic: $z = \frac{\left(w_{s} - \left(\dfrac{n (n+1)}{4}\right) \right)}{\sqrt{\left( \dfrac{n(n+1)(2n+1)}{24} \right)}} \nonumber$ Mann-Whitney U Test When $n_{1} \leq 20$ and $n_{2} \leq 20$: $U_{1} = R_{1} - \frac{n_{1} \left(n_{1}+1\right)}{2}, U_{2} = R_{2} - \frac{n_{2} \left(n_{2}+1\right)}{2}$. $U = \text{Min} \left(U_{1}, U_{2}\right)$ The CV uses the table in Figure 13-8 or 13-9. If the critical value is not in the table, then use an online calculator: https://www.socscistatistics.com/tests/mannwhitney/default.aspx. When $n_{1} > 20$ and $n_{2} > 20$ use the z-test statistic: $z = \frac{\left(U - \left(\dfrac{n_{1} \cdot n_{2}}{2}\right) \right)}{\sqrt{\left( \dfrac{n_{1} \cdot n_{2} \left(n_{1} + n_{2} + 1\right)}{12} \right)}} \nonumber$ “For instance, a race of hyperintelligent pan‐dimensional beings once built themselves a gigantic supercomputer called Deep Thought to calculate once and for all the Answer to the Ultimate Question of Life, the Universe, and Everything. For seven and a half million years, Deep Thought computed and calculated, and in the end announced that the answer was in fact Forty‐two - and so another, even bigger, computer had to be built to find out what the actual question was.” (Adams, 2002)
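The Mann-Whitney computation summarized above can be scripted in the same way. Below is a minimal sketch in Python (numpy and scipy are assumed here; they are not part of the text), checked against the student-employee example and the coffee sales example from the Mann-Whitney section.

import numpy as np
from scipy.stats import rankdata, norm

def mann_whitney_u(group1, group2):
    # U1, U2, and U = min(U1, U2) using the rank-sum formulas above.
    combined = np.concatenate([group1, group2])
    ranks = rankdata(combined)               # average ranks over the combined sample
    n1, n2 = len(group1), len(group2)
    r1, r2 = ranks[:n1].sum(), ranks[n1:].sum()
    u1 = r1 - n1 * (n1 + 1) / 2
    u2 = r2 - n2 * (n2 + 1) / 2
    return u1, u2, min(u1, u2)

athletics = [20, 24, 17, 12, 18, 22, 25, 30, 15, 19]
library = [35, 28, 24, 20, 25, 18, 22, 26, 31, 21, 19]
print(mann_whitney_u(athletics, library))    # (30.0, 80.0, 30.0), matching the small-sample example

# Large-sample case (both sample sizes above 20): z statistic from the formula above,
# using U = 190.5, n1 = 20, n2 = 22 as reported for the coffee sales example
u, n1, n2 = 190.5, 20, 22
z = (u - n1 * n2 / 2) / np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
print(round(z, 4), round(2 * norm.cdf(-abs(z)), 4))   # -0.7429 and 0.4575

scipy.stats.mannwhitneyu performs the same test; because it can apply tie and continuity corrections, its z or p-value may differ slightly from this hand-style calculation, as the text cautions about software output.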
Chapter 13 Exercises For exercises 1-6, show all 5 steps for hypothesis testing: a) State the hypotheses. b) Compute the test statistic. c) Compute the critical value or p-value. d) State the decision. e) Write a summary. 1. A real estate agent suggests that the median rent for a one-bedroom apartment in Portland has changed from last year’s median of $825 per month. A sample of 12 one-bedroom apartments shows these monthly rents in dollars for a one-bedroom apartment; 820, 720, 960, 660, 735, 910, 825, 1050, 915, 905, 1050, 950. Is there enough evidence to claim that the median rent has changed from$825? Use $\alpha = 0.05$. 2. The median age in the United States is 38 years old. The mayor of a particular city believes that her population is considerably “younger.” At $\alpha = 0.05$, is there sufficient evidence to support her claim. The data below represent a random selection of persons from the city. 40 36 27 72 12 30 52 45 10 24 22 25 43 39 48 25 95 29 19 30 50 37 18 36 15 60 38 42 41 61 3. A meteorologist believes that the median temperature for the month of July in Jacksonville, Florida, is higher than the previous July’s 81˚F. The following sample shows the temperatures taken at noon in Jacksonville during July. Is there enough evidence to support the meteorologist’s claim? Use $\alpha = 0.05$. 79 85 81 95 80 98 82 81 76 84 90 93 4. A sample of 10 Kitti’s hog-nosed bats’ weights in grams is shown below. Test the claim that median weight for all bumblebee bats is not equal to 2 grams, using a 1% level of significance. Weight 2.22 1.6 1.78 1.52 1.61 1.98 1.56 2.24 1.55 2.28 5. Test to see if the median assessed property value (in $1,000s) changed between 2010 and 2016. Use the sign test and $\alpha = 0.05$. Ward A B C D E F G H I J K 2010 184 414 22 99 116 49 24 50 282 25 141 2016 161 382 22 190 120 52 28 50 297 40 148 6. A company institutes an exercise break for its workers to see if this will improve job satisfaction. Scores for 10 workers were sampled from a questionnaire before and after the exercise break was implemented. Higher scores indicate a higher job satisfaction. Use the sign test to see if the exercise break increased the job satisfaction scores. Use $\alpha = 0.05$. Before 34 28 29 45 26 27 24 15 15 27 After 33 36 29 50 37 29 25 20 18 28 7. Rank the following data set: $-15, -25, -5, -5, -8, -15, -2, -4, -5$. 8. Rank the following data set: $6, 3, 0, 2, 3, 5, 4, 8, 9, 5, 6, 5, 7. 9$. 9. Rank the following data set: $1, 2, 9, 3, 5, 1, 2, 8, 6$. 10. Rank the following data set: $5, -1, 2, 0, 1, -1, 0, 3, 9, 6, 1, 1, 4$. For exercises 11-20, show all 5 steps for hypothesis testing: a) State the hypotheses. b) Compute the test statistic. c) Compute the critical value or p-value. d) State the decision. e) Write a summary. 11. A manager wishes to see if the time (in minutes) it takes for their workers to complete a certain task will decrease when they are allowed to wear earbuds at work. A random sample of 20 workers' times were collected before and after. Test the claim that the time to complete the task has decreased at a significance level of $\alpha = 0.01$ using the Wilcoxon Signed-Rank test. You obtain the following sample data. Before After Before After 69 62.3 61.7 56.8 71.5 61.6 55.9 44.7 39.3 21.4 56.8 50.6 67.7 60.4 71 63.4 38.3 47.9 80.6 68.9 85.9 77.6 59.8 35.5 67.3 75.1 72.1 77 59.8 46.3 49.9 38.4 72.1 65 56.2 55.4 79 83 63.3 51.6 12. Doctors developed an intensive intervention program for obese patients with heart disease. 
Subjects with a BMI of 30 kg/m2 or more with heart disease were assigned to a three-month lifestyle change of diet and exercise. Patients’ Left Ventricle Ejection Fraction (LVEF) are measured before and after intervention. Larger numbers indicate a healthier heart. Test to see if the intervention program significantly increased the LVEF. Use the Wilcoxon Signed-Rank test with $\alpha = 0.05$. Before 44 49 50 49 57 62 39 41 52 42 After 56 58 64 60 63 71 49 51 60 55 13. An adviser is testing out a new online learning module for a placement test. They wish to test the claim that the new online learning module increased placement scores at a significance level of $\alpha = 0.05$. You obtain the following paired sample of 19 students who took the placement test before and after the learning module. Use the Wilcoxon Signed-Rank test. Before After Before After Before After Before After 55.8 57.1 11.4 20.6 42.6 51.5 46.1 57 51.7 58.3 30.6 35.2 61.2 76.6 72.8 66.1 76.6 83.6 53 46.7 26.8 28.6 42.2 38.1 47.5 49.5 21 22.5 11.4 14.5 51.3 42.4 48.6 51.1 58.5 47.7 56.3 43.7 14. Dating couples were matched according to who asked the other person out first. Their age was then compared. Is there a significant difference in the age of dating couples based on who asked out the other person first? Dependent samples, use $\alpha = 0.05$ and the Wilcoxon Signed-Rank test. First Asked 18 43 32 27 15 45 21 22 Accepted 16 38 35 29 14 46 25 28 15. In Major League Baseball, the American League (AL) allows a designated hitter (DH) to bat in place of the pitcher, but in the National League (NL), the pitcher has to bat. However, when an AL team is the visiting team for a game against an NL team, the AL team must abide by the home team’s rules and thus, the pitcher must bat. A researcher is curious if an AL team would score differently for games in which the DH was used. She samples 20 games for an AL team for which the DH was used, and 20 games for which there was no DH. The data are below. Use the Mann-Whitney test with $\alpha = 0.05$. With Designated Hitter Without Designated Hitter 0 5 4 7 3 6 5 2 1 2 7 6 12 4 0 1 6 4 2 10 6 3 7 8 1 2 7 5 4 0 5 1 8 4 11 0 2 4 6 4 16. A professor wants to know whether there is a difference in comprehension of a lab assignment among students depending on if the instructions are given all in text, or if they are given primarily with visual illustrations. She randomly divides her class into two groups of 15 and gives one group instructions in text and the second group instructions with visual illustrations. The following data summarizes the scores the students received on a test given after the lab. Is there evidence to suggest that a difference? Use the Mann-Whitney test with $\alpha = 0.05$. Text Visual Illustrations 57.3 87.3 67.2 59.0 76.7 88.2 45.3 75.2 54.4 57.6 78.2 43.8 87.1 88.2 93.0 72.9 64.4 97.1 61.2 67.5 89.2 83.2 89.0 95.1 43.1 86.2 52.0 64.0 72.9 84.1 17. “Durable press” cotton fabrics are treated to improve their recovery from wrinkles after washing. “Wrinkle recovery angle” measures how well a fabric recovers from wrinkles. Higher is better. Here are data on the wrinkle recovery angle (in degrees) for a random sample of fabric specimens. A manufacturer wants to see if there is a difference in the wrinkle recovery angle for two different fabric treatments, Permafresh and Hylite. Test the claim using a 5% level of significance. Use the Mann-Whitney test. 
Permafresh Hylite 144 102 131 118 139 146 139 139 146 136 127 137 148 131 138 138 132 142 117 137 147 129 133 142 138 137 134 133 137 148 135 146 137 138 138 133 124 139 164 142 139 140 141 140 141 18. A researcher is curious what year in college students make use of the gym at a university. They take a random sample of 30 days and count the number of sophomores and seniors who use the gym each day. Is there evidence to suggest that a difference exists in gym usage based on year in college? Use the Mann-Whitney test with $\alpha = 0.01$. Sophomores Seniors 189 208 167 154 217 209 199 186 210 221 209 198 143 208 220 204 214 230 170 197 188 197 165 207 231 198 201 165 183 235 201 177 186 193 201 187 199 189 194 197 190 165 180 245 200 192 195 200 211 205 199 155 165 188 187 200 190 218 210 229 19. A movie theater company wants to see if there is a difference in the movie ticket sales in San Diego and Portland per week. They sample 20 sales from San Diego and 20 sales from Portland and count the number of tickets sold over a week. Use the Mann-Whitney test to test the claim using a 5% level of significance. San Diego Portland 223 243 231 235 233 228 209 214 221 182 217 211 219 212 214 222 206 229 219 239 226 216 223 220 215 214 234 221 226 219 221 223 226 233 239 232 219 211 218 224 20. A new over-the-counter medicine to treat a sore throat is to be tested for effectiveness. The makers of the medicine take two random samples of 25 individuals showing symptoms of a sore throat. Group 1 receives the new medicine and Group 2 receives a placebo. After a few days on the medicine, each group is interviewed and asked how they would rate their comfort level 1-10 (1 being the most uncomfortable and 10 being no discomfort at all). The results are below. Is there sufficient evidence to conclude that there is a difference? Use the Mann-Whitney test and $\alpha = 0.01$. Group 1 Group 2 3 5 6 7 5 4 5 8 3 5 3 4 5 7 7 2 7 8 2 4 3 2 5 8 8 1 2 2 3 2 7 7 8 4 8 1 3 5 5 1 4 8 3 9 10 6 4 7 8 1 Answers to Odd-Numbered Exercises 1. $H_{0}:$ Median $= 825$; $H_{1}:$ Median $\neq 825$; Test Statistic = 4; p-value = 0.5548. Do not reject $H_{0}$. There is not enough evidence to support the claim that the median rent has changed from last year’s median of$825 per month. 3. $H_{0}:$ Median $= 81$; $H_{1}:$ Median $> 81$; Test Statistic = 7; p-value = 0.0547. Do not reject $H_{0}$. There is not enough evidence to support the claim that the median temperature for the month of July in Jacksonville, Florida, is higher than the previous year's of 81˚F. 5. $H_{0}:$ There is no change in the median assessed property value between 2010 and 2016. $H_{1}:$ There is a change in the median assessed property value between 2010 and 2016. Test Statistic = 2; p-value = 0.1797. Do not reject $H_{0}$. There is not enough evidence to support the claim that is a change in the median assessed property value between 2010 and 2016. 7. Ordered Data Rank -25 1 -15 2.5 -15 2.5 -8 4 -5 6 -5 6 -5 6 -4 8 -2 9 9. Ordered Data Rank 0 1 2 2 3 3.5 3 3.5 4 5 5 7 5 7 5 7 6 9.5 6 9.5 7 11 8 12 9 13 11. $H_{0}:$ The time to complete the task will not decrease when workers are allowed to wear earbuds. $H_{1}:$ The time to complete the task will decrease when workers are allowed to wear earbuds. $w_{s} = 27.5; \text{CV} = 43$. Reject $H_{0}$. There is enough evidence that allowing the workers to wear earbuds significantly decreased the time for workers to complete tasks. 13. $H_{0}:$ The new online learning module did not increase student’s placement scores. 
$H_{1}:$ The new online learning module increased students’ placement scores. $w_{s} = 74.5; \text{CV} = 53$. Do not reject $H_{0}$. There is not enough evidence to support the claim that the new online learning module increased students’ placement scores. 15. $H_{0}:$ An American League team would score the same for games in which the designated hitter was used. $H_{1}:$ An American League team would score differently for games in which the designated hitter was used. $U = 181; \text{CV} = 127$. Do not reject $H_{0}$. There is not enough evidence to support the claim that an American League team would score differently for games in which the designated hitter was used. 17. $H_{0}:$ There is no difference in the wrinkle recovery angle for two different fabric treatments, Permafresh and Hylite. $H_{1}:$ There is a difference in the wrinkle recovery angle for two different fabric treatments, Permafresh and Hylite. $z = -1.4048; \text{CV} = \pm 1.96$. Do not reject $H_{0}$. There is not enough evidence to support the claim that there is a difference in the wrinkle recovery angle for two different fabric treatments, Permafresh and Hylite. 19. $H_{0}:$ There is no difference in the movie ticket sales in San Diego and Portland per week. $H_{1}:$ There is a difference in the movie ticket sales in San Diego and Portland per week. $U = 144.5; \text{CV} = 127$. Do not reject $H_{0}$. There is not enough evidence to support the claim that there is a difference in the movie ticket sales in San Diego and Portland per week. “Ford didn't comment. He was listening to something. He passed the Guide over to Arthur and pointed at the screen. The active entry read ‘Earth. Mostly harmless.’ …Earth, a world whose entire entry in the Hitchhiker's Guide to the Galaxy comprised the two words ‘Mostly harmless.’” (Adams, 2002)
textbooks/stats/Introductory_Statistics/Mostly_Harmless_Statistics_(Webb)/13%3A_Nonparametric_Tests/13.07%3A_Chapter_13_Exercises.txt
The topics scientists investigate are as diverse as the questions they ask. However, many of these investigations can be addressed with a small number of data collection techniques, analytic tools, and fundamental concepts in statistical inference. This chapter provides a glimpse into these and other themes we will encounter throughout the rest of the book. We introduce the basic principles of each branch and learn some tools along the way. We will encounter applications from other fields, some of which are not typically associated with science but nonetheless can benefit from statistical study. • 1.1: Prelude to Introduction to Data Scientists seek to answer questions using rigorous methods and careful observations. These observations form the backbone of a statistical investigation and are called data. Statistics is the study of how best to collect, analyze, and draw conclusions from data. It is helpful to put statistics in the context of a general process of investigation: Identify a question or problem. Collect relevant data on the topic. Analyze the data. Form a conclusion. • 1.2: Case Study- Using Stents to Prevent Strokes Section 1.1 introduces a classic challenge in statistics: evaluating the efficacy of a medical treatment. Terms in this section, and indeed much of this chapter, will all be revisited later in the text. The plan for now is simply to get a sense of the role statistics can play in practice. • 1.3: Data Basics Effective presentation and description of data is a first step in most analyses. This section introduces one structure for organizing data as well as some terminology that will be used throughout this book. • 1.4: Overview of Data Collection Principles The first step in conducting research is to identify topics or questions that are to be investigated. A clearly laid out research question is helpful in identifying what subjects or cases should be studied and what variables are important. It is also important to consider how data are collected so that they are reliable and help achieve the research goals. • 1.5: Observational Studies and Sampling Strategies Generally, data in observational studies are collected only by monitoring what occurs, while experiments require that the primary explanatory variable in a study be assigned for each subject by the researchers. Making causal conclusions based on experiments is often reasonable. However, making the same causal conclusions based on observational data can be treacherous and is not recommended. Thus, observational studies are generally only sufficient to show associations. • 1.6: Experiments Studies where the researchers assign treatments to cases are called experiments. When this assignment includes randomization, e.g. using a coin flip to decide which treatment a patient receives, it is called a randomized experiment. Randomized experiments are fundamentally important when trying to show a causal connection between two variables. • 1.7: Examining Numerical Data In this section we will be introduced to techniques for exploring and summarizing numerical variables. Recall that outcomes of numerical variables are numbers on which it is reasonable to perform basic arithmetic operations. • 1.8: Considering Categorical Data Like numerical data, categorical data can also be organized and analyzed. In this section, we will introduce tables and other basic tools for categorical data that are used throughout this book.
• 1.9: Case Study- Gender Discrimination (Special Topic) Statisticians are sometimes called upon to evaluate the strength of evidence. • 1.E: Introduction to Data (Exercises) Exercises for Chapter 1 of the "OpenIntro Statistics" textmap by Diez, Barr and Çetinkaya-Rundel. 01: Introduction to Data Scientists seek to answer questions using rigorous methods and careful observations. These observations, collected from the likes of field notes, surveys, and experiments, form the backbone of a statistical investigation and are called data. Statistics is the study of how best to collect, analyze, and draw conclusions from data. It is helpful to put statistics in the context of a general process of investigation: 1. Identify a question or problem. 2. Collect relevant data on the topic. 3. Analyze the data. 4. Form a conclusion. Statistics as a subject focuses on making stages 2-4 objective, rigorous, and efficient. That is, statistics has three primary components: How best can we collect data? How should it be analyzed? And what can we infer from the analysis? The topics scientists investigate are as diverse as the questions they ask. However, many of these investigations can be addressed with a small number of data collection techniques, analytic tools, and fundamental concepts in statistical inference. This chapter provides a glimpse into these and other themes we will encounter throughout the rest of the book. We introduce the basic principles of each branch and learn some tools along the way. We will encounter applications from other fields, some of which are not typically associated with science but nonetheless can benefit from statistical study.
textbooks/stats/Introductory_Statistics/OpenIntro_Statistics_(Diez_et_al)./01%3A_Introduction_to_Data/1.01%3A_Prelude_to_Introduction_to_Data.txt
Section 1.1 introduces a classic challenge in statistics: evaluating the efficacy of a medical treatment. Terms in this section, and indeed much of this chapter, will all be revisited later in the text. The plan for now is simply to get a sense of the role statistics can play in practice. In this section we will consider an experiment that studies effectiveness of stents in treating patients at risk of stroke1. Stents are devices put inside blood vessels that assist in patient recovery after cardiac events and reduce the risk of an additional heart attack or death. Many doctors have hoped that there would be similar benefits for patients at risk of stroke. We start by writing the principal question the researchers hope to answer: 1Chimowitz MI, Lynn MJ, Derdeyn CP, et al. 2011. Stenting versus Aggressive Medical Therapy for Intracranial Arterial Stenosis. New England Journal of Medicine 365:993-1003. http://www.nejm.org/doi/full/10.1056/NEJMoa1105335. NY Times article reporting on the study: http://www.nytimes.com/2011/09/08/health/research/08stent.html. Does the use of stents reduce the risk of stroke? The researchers who asked this question collected data on 451 at-risk patients. Each volunteer patient was randomly assigned to one of two groups: • Treatment group. Patients in the treatment group received a stent and medical management. The medical management included medications, management of risk factors, and help in lifestyle modification. • Control group. Patients in the control group received the same medical management as the treatment group, but they did not receive stents. Researchers randomly assigned 224 patients to the treatment group and 227 to the control group. In this study, the control group provides a reference point against which we can measure the medical impact of stents in the treatment group. Researchers studied the effect of stents at two time points: 30 days after enrollment and 365 days after enrollment. The results of 5 patients are summarized in Table 1.1. Patient outcomes are recorded as "stroke" or "no event", representing whether or not the patient had a stroke at the end of a time period. Table 1.1: Results for five patients from the stent study. Patient group 0-30 days 0-365 days 1 treatment no event no event 2 treatment stroke stroke 3 treatment no event no event .. .. .. .. 450 control no event no event 451 control no event no event Considering data from each patient individually would be a long, cumbersome path towards answering the original research question. Instead, performing a statistical data analysis allows us to consider all of the data at once. Table 1.2 summarizes the raw data in a more helpful way. In this table, we can quickly see what happened over the entire study. For instance, to identify the number of patients in the treatment group who had a stroke within 30 days, we look on the left-side of the table at the intersection of the treatment and stroke: 33. Table 1.2: Descriptive statistics for the stent study. 0-30 days 0-365 days stroke no event stroke no event treatment 33 191 45 179 control 13 214 28 199 Total 46 405 73 378 Exercise Exercise 1.1 Of the 224 patients in the treatment group, 45 had a stroke by the end of the first year. Using these two numbers, compute the proportion of patients in the treatment group who had a stroke by the end of their first year.
(Please note: answers to all in-text exercises are provided using footnotes.)2 Answer 2The proportion of the 224 patients who had a stroke within 365 days: $\frac {45}{224}$ = 0.20. We can compute summary statistics from the table. A summary statistic is a single number summarizing a large amount of data (formally, a summary statistic is a value computed from the data. Some summary statistics are more useful than others). For instance, the primary results of the study after 1 year could be described by two summary statistics: the proportion of people who had a stroke in the treatment and control groups. • Proportion who had a stroke in the treatment (stent) group: $\frac {45}{224}$ = 0.20 = 20%. • Proportion who had a stroke in the control group: $\frac {28}{227}$ = 0.12 = 12%. These two summary statistics are useful in looking for differences in the groups, and we are in for a surprise: an additional 8% of patients in the treatment group had a stroke! This is important for two reasons. First, it is contrary to what doctors expected, which was that stents would reduce the rate of strokes. Second, it leads to a statistical question: do the data show a "real" difference between the groups? This second question is subtle. Suppose you flip a coin 100 times. While the chance a coin lands heads in any given coin flip is 50%, we probably won't observe exactly 50% heads. This type of fluctuation is part of almost any type of data generating process. It is possible that the 8% difference in the stent study is due to this natural variation. However, the larger the difference we observe (for a particular sample size), the less believable it is that the difference is due to chance. So what we are really asking is the following: is the difference so large that we should reject the notion that it was due to chance? While we don't yet have our statistical tools to fully address this question on our own, we can comprehend the conclusions of the published analysis: there was compelling evidence of harm by stents in this study of stroke patients. Be careful Do not generalize the results of this study to all patients and all stents. This study looked at patients with very speci c characteristics who volunteered to be a part of this study and who may not be representative of all stroke patients. In addition, there are many types of stents and this study only considered the self-expanding Wingspan stent (Boston Scientific). However, this study does leave us with an important lesson: we should keep our eyes open for surprises.
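These proportions are simple enough to find by hand, but the same arithmetic is easy to script. Below is a minimal Python sketch (an assumed tool; the text itself does not use software here) that recomputes the two summary statistics and their difference directly from Table 1.2.

```python
# Recomputing the 0-365 day summary statistics from Table 1.2.
strokes_treatment, n_treatment = 45, 224
strokes_control, n_control = 28, 227

p_treatment = strokes_treatment / n_treatment   # proportion with a stroke, stent group
p_control = strokes_control / n_control         # proportion with a stroke, control group

print(f"treatment:  {p_treatment:.2f}")              # about 0.20
print(f"control:    {p_control:.2f}")                # about 0.12
print(f"difference: {p_treatment - p_control:.2f}")  # about 0.08
```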
textbooks/stats/Introductory_Statistics/OpenIntro_Statistics_(Diez_et_al)./01%3A_Introduction_to_Data/1.02%3A_Case_Study-_Using_Stents_to_Prevent_Strokes.txt
Effective presentation and description of data is a first step in most analyses. This section introduces one structure for organizing data as well as some terminology that will be used throughout this book. Observations, variables, and data matrices Table 1.3 displays rows 1, 2, 3, and 50 of a data set concerning 50 emails received during early 2012. These observations will be referred to as the email50 data set, and they are a random sample from a larger data set that we will see in Section 1.7. Table 1.3: Four rows from the email 50 data matrix. spam num_char line_breaks format number 1 no 21,705 551 html small 2 no 7,011 183 html big 3 yes 631 28 text none \(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\) 50 no 15,829 242 html small Each row in the table represents a single email or case (a case is also sometimes called a unit of observation or an observational unit.). The columns represent characteristics, called variables, for each of the emails. For example, the first row represents email 1, which is a not spam, contains 21,705 characters, 551 line breaks, is written in HTML format, and contains only small numbers. In practice, it is especially important to ask clarifying questions to ensure important aspects of the data are understood. For instance, it is always important to be sure we know what each variable means and the units of measurement. Descriptions of all five email variables are given in Table 1.4. Table 1.4: Variables and their descriptions for the email 50 data set. variable description spam Specifies whether the message was spam num_char The number of characters in the email line_breaks The number of line breaks in the email (not including text wrapping) format Indicates if the email contained special formatting, such as bolding, tables, or links, which would indicate the message is in HTML format number Indicates whether the email contained no number, a small number (under1 million), or a large number The data in Table 1.3 represent a data matrix, which is a common way to organize data. Each row of a data matrix corresponds to a unique case, and each column corresponds to a variable. A data matrix for the stroke study introduced in Section 1.1 is shown in Table 1.1, where the cases were patients and there were three variables recorded for each patient. Data matrices are a convenient way to record and store data. If another individual or case is added to the data set, an additional row can be easily added. Similarly, another column can be added for a new variable. Exercise \(1\) Exercise 1.2 We consider a publicly available data set that summarizes information about the 3,143 counties in the United states, and we call this the county data set. This data set includes information about each county: its name, the state where it resides, its population in 2000 and 2010, per capita federal spending, poverty rate, and ve additional characteristics. How might these data be organized in a data matrix? Reminder: look in the footnotes for answers to in-text exercises.5 5Each county may be viewed as a case, and there are eleven pieces of information recorded for each case. A table with 3,143 rows and 11 columns could hold these data, where each row represents a county and each column represents a particular piece of information. Seven rows of the county data set are shown in Table 1.5, and the variables are summarized in Table 1.6. 
These data were collected from the US Census website.6 6quickfacts.census.gov/qfd/index.html Table 1.5: Seven rows from the county data set. name state pop 2000 pop 2010 fed spend poverty home owner-ship multiu- nit income med income smoking ban 1 Autau-ga AL 43671 54571 6.068 10.6 77.5 7.2 24568 53255 none 2 Baldw- in AL 140415 182265 6.140 12.2 76.7 22.6 26469 50147 none 3 Barbo- ur AL 29038 27457 8.752 25.0 68.0 11.1 15875 33219 none 4 Bibb AL 20826 22915 7.122 12.6 82.9 6.6 19918 41770 none 5 Blount AL 51024 57322 5.131 13.4 82.0 3.7 21070 45549 none \(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\) 3142 Wash-akie WY 8289 8533 8.714 5.6 70.9 10.0 28557 48379 none 3143 West-on WY 6644 7208 6.695 7.9 77.9 6.5 28463 53853 none Table 1.6: Variables and their descriptions for the county data set. variable description name County name state State where the county resides (also including the District of Columbia) pop2000 Population in 2000 pop2010 Population in 2010 fed_spend Federal spending per capita poverty Percent of the population in poverty homeownership Percent of the population that lives in their own home or lives with the owner (e.g. children living with parents who own the home) multiunit Percent of living units that are in multi-unit structures (e.g. apartments) income Income per capita med_income Median household income for the county, where a household's income equals the total income of its occupants who are 15 years or older smoking_ban Type of county-wide smoking ban in place at the end of 2011, which takes one of three values: none, partial, or comprehensive, where a comprehensive ban means smoking was not permitted in restaurants, bars, or workplaces, and partial means smoking was banned in at least one of those three locations Types of variables Examine the fed spend, pop2010, state, and smoking ban variables in the county data set. Each of these variables is inherently different from the other three yet many of them share certain characteristics. First consider fed spend, which is said to be a numerical variable since it can takea wide range of numerical values, and it is sensible to add, subtract, or take averages with those values. On the other hand, we would not classify a variable reporting telephone area codes as numerical since their average, sum, and difference have no clear meaning. The pop2010 variable is also numerical, although it seems to be a little different than fed spend. This variable of the population count can only take whole non-negative numbers (0, 1, 2, ...). For this reason, the population variable is said to be discrete since it can only take numerical values with jumps. On the other hand, the federal spending variable is said to be continuous. The variable state can take up to 51 values after accounting for Washington, DC: AL, ..., and WY. Because the responses themselves are categories, state is called a categorical variable,7 and the possible values are called the variable's levels. Finally, consider the smoking ban variable, which describes the type of county-wide smoking ban and takes values none, partial, or comprehensive in each county. This variable seems to be a hybrid: it is a categorical variable but the levels have a natural ordering. A variable with these properties is called an ordinal variable. To simplify analyses, any ordinal variables in this book will be treated as categorical variables. 
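To make the data matrix idea concrete, here is a minimal sketch using Python's pandas library (an assumption about tooling, not something the book itself uses). The rows mirror the first three emails in Table 1.3, and the last lines show one way to separate the numerical columns from the categorical ones.

```python
# A small data matrix in pandas: each row is a case (an email),
# each column is a variable, mirroring Table 1.3.
import pandas as pd

email = pd.DataFrame({
    "spam":        ["no", "no", "yes"],
    "num_char":    [21705, 7011, 631],        # numerical (counts of characters)
    "line_breaks": [551, 183, 28],            # numerical (counts of line breaks)
    "format":      ["html", "html", "text"],  # categorical
    "number":      ["small", "big", "none"],  # categorical (ordinal levels)
})

numerical_cols = email.select_dtypes("number").columns.tolist()
categorical_cols = email.select_dtypes(exclude="number").columns.tolist()
print(numerical_cols)    # ['num_char', 'line_breaks']
print(categorical_cols)  # ['spam', 'format', 'number']
```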
Example 1.3 Data were collected about students in a statistics course. Three variables were recorded for each student: number of siblings, student height, and whether the student had previously taken a statistics course. Classify each of the variables as continuous numerical, discrete numerical, or categorical. The number of siblings and student height represent numerical variables. Because the number of siblings is a count, it is discrete. Height varies continuously, so it is a continuous numerical variable. The last variable classi es students into two categories - those who have and those who have not taken a statistics course - which makes this variable categorical. Exercise \(1\) Exercise 1.4 Consider the variables group and outcome (at 30 days) from the stent study in Section 1.1. Are these numerical or categorical variables?8 8There are only two possible values for each variable, and in both cases they describe categories. Thus, each are categorical variables. 7Sometimes also called a nominal variable. Relationships between variables Many analyses are motivated by a researcher looking for a relationship between two or more variables. A social scientist may like to answer some of the following questions: 1. Is federal spending, on average, higher or lower in counties with high rates of poverty? 2. If homeownership is lower than the national average in one county, will the percent of multi-unit structures in that county likely be above or below the national average? 3. Which counties have a higher average income: those that enact one or more smoking bans or those that do not? To answer these questions, data must be collected, such as the county data set shown in Table 1.5. Examining summary statistics could provide insights for each of the three questions about counties. Additionally, graphs can be used to visually summarize data and are useful for answering such questions as well. Scatterplots are one type of graph used to study the relationship between two numerical variables. Figure 1.8 compares the variables fed spend and poverty. Each point on the plot represents a single county. For instance, the highlighted dot corresponds to County 1088 in the county data set: Owsley County, Kentucky, which had a poverty rate of 41.5% and federal spending of \$21.50 per capita. The scatterplot suggests a relationship between the two variables: counties with a high poverty rate also tend to have slightly more federal spending. We might brainstorm as to why this relationship exists and investigate each idea to determine which is the most reasonable explanation. Exercise \(1\) Exercise 1.5 Examine the variables in the email50 data set, which are described in Table 1.4 on page 4. Create two questions about the relationships between these variables that are of interest to you.9 9Two sample questions: (1) Intuition suggests that if there are many line breaks in an email then there would tend to also be many characters: does this hold true? (2) Is there a connection between whether an email format is plain text (versus HTML) and whether it is a spam message? The fed_spend and poverty variables are said to be associated because the plot shows a discernible pattern. When two variables show some connection with one another, they are called associated variables. Associated variables can also be called dependent variables and vice-versa. Example \(1\) Example 1.6 This example examines the relationship between homeownership and the percent of units in multi-unit structures (e.g. 
apartments, condos), which is visualized using a scatterplot in Figure 1.9. Are these variables associated? Solution It appears that the larger the fraction of units in multi-unit structures, the lower the homeownership rate. Since there is some relationship between the variables, they are associated. Because there is a downward trend in Figure 1.9 (counties with more units in multi-unit structures are associated with lower homeownership), these variables are said to be negatively associated. A positive association is shown in the relationship between the poverty and fed_spend variables represented in Figure 1.8, where counties with higher poverty rates tend to receive more federal spending per capita. If two variables are not associated, then they are said to be independent. That is, two variables are independent if there is no evident relationship between the two. Associated or independent, never both A pair of variables are either related in some way (associated) or not (independent). No pair of variables is both associated and independent.
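A scatterplot of the kind shown in Figures 1.8 and 1.9 takes only a few lines to draw. The sketch below uses Python with matplotlib (an assumption about tooling) and plots the handful of counties listed in Table 1.5 together with Owsley County mentioned above; a real analysis would load the full county data set from a file.

```python
# Scatterplot of poverty rate versus federal spending per capita.
# The eight points are the seven counties shown in Table 1.5 plus
# Owsley County, KY, highlighted in the text.
import matplotlib.pyplot as plt

poverty   = [10.6, 12.2, 25.0, 12.6, 13.4, 5.6, 7.9, 41.5]            # percent in poverty
fed_spend = [6.068, 6.140, 8.752, 7.122, 5.131, 8.714, 6.695, 21.50]  # spending per capita

plt.scatter(poverty, fed_spend)
plt.xlabel("Percent of population in poverty")
plt.ylabel("Federal spending per capita")
plt.title("Each point represents one county")
plt.show()
```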
textbooks/stats/Introductory_Statistics/OpenIntro_Statistics_(Diez_et_al)./01%3A_Introduction_to_Data/1.03%3A_Data_Basics.txt
The first step in conducting research is to identify topics or questions that are to be investigated. A clearly laid out research question is helpful in identifying what subjects or cases should be studied and what variables are important. It is also important to consider how data are collected so that they are reliable and help achieve the research goals. Populations and samples Consider the following three research questions: 1. What is the average mercury content in swordfish in the Atlantic Ocean? 2. Over the last 5 years, what is the average time to degree for Duke undergraduate students? 3. Does a new drug reduce the number of deaths in patients with severe heart disease? Each research question refers to a target population. In the first question, the target population is all swordfish in the Atlantic Ocean, and each fish represents a case. Often times, it is too expensive to collect data for every case in a population. Instead, a sample is taken. A sample represents a subset of the cases and is often a small fraction of the population. For instance, 60 swordfish (or some other number) in the population might be selected, and this sample data may be used to provide an estimate of the population average and answer the research question. Exercise Exercise 1.7 For the second and third questions above, identify the target population and what represents an individual case.10 Anecdotal Evidence Consider the following possible responses to the three research questions: 1. A man on the news got mercury poisoning from eating swordfish, so the average mercury concentration in swordfish must be dangerously high. 2. I met two students who took more than 7 years to graduate from Duke, so it must take longer to graduate at Duke than at many other colleges. 3. My friend's dad had a heart attack and died after they gave him a new heart disease drug, so the drug must not work. Each of the conclusions is based on some data. However, there are two problems. First, the data only represent one or two cases. Second, and more importantly, it is unclear whether these cases are actually representative of the population. Data collected in this haphazard fashion are called anecdotal evidence. 10(2) Notice that the second question is only relevant to students who complete their degree; the average cannot be computed using a student who never finished her degree. Thus, only Duke undergraduate students who have graduated in the last five years represent cases in the population under consideration. Each such student would represent an individual case. (3) A person with severe heart disease represents a case. The population includes all people with severe heart disease. Anecdotal evidence Be careful of data collected in a haphazard fashion. Such evidence may be true and verifiable, but it may only represent extraordinary cases. Anecdotal evidence typically is composed of unusual cases that we recall based on their striking characteristics. For instance, we are more likely to remember the two people we met who took 7 years to graduate than the six others who graduated in four years. Instead of looking at the most unusual cases, we should examine a sample of many cases that represent the population. Sampling from a Population We might try to estimate the time to graduation for Duke undergraduates in the last 5 years by collecting a sample of students. All graduates in the last 5 years represent the population, and graduates who are selected for review are collectively called the sample.
In general, we always seek to randomly select a sample from a population. The most basic type of random selection is equivalent to how raffles are conducted. For example, in selecting graduates, we could write each graduate's name on a raffle ticket and draw 100 tickets. The selected names would represent a random sample of 100 graduates. Why pick a sample randomly? Why not just pick a sample by hand? Consider the following scenario. Example Example 1.8 Suppose we ask a student who happens to be majoring in nutrition to select several graduates for the study. What kind of students do you think she might collect? Do you think her sample would be representative of all graduates? Perhaps she would pick a disproportionate number of graduates from health-related fields. Or perhaps her selection would be well-representative of the population. When selecting samples by hand, we run the risk of picking a biased sample, even if that bias is unintentional or difficult to discern. If someone was permitted to pick and choose exactly which graduates were included in the sample, it is entirely possible that the sample could be skewed to that person's interests, which may be entirely unintentional. This introduces bias into a sample. Sampling randomly helps resolve this problem. The most basic random sample is called a simple random sample, and it is the equivalent of using a raffle to select cases. This means that each case in the population has an equal chance of being included and there is no implied connection between the cases in the sample. The act of taking a simple random sample helps minimize bias, however, bias can crop up in other ways. Even when people are picked at random, e.g. for surveys, caution must be exercised if the non-response is high. For instance, if only 30% of the people randomly sampled for a survey actually respond, then it is unclear whether the results are representative of the entire population. This non-response bias can skew results. Another common downfall is a convenience sample, where individuals who are easily accessible are more likely to be included in the sample. For instance, if a political survey is done by stopping people walking in the Bronx, this will not represent all of New York City. It is often diffcult to discern what sub-population a convenience sample represents. Exercise Exercise 1.9 We can easily access ratings for products, sellers, and companies through websites. These ratings are based only on those people who go out of their way to provide a rating. If 50% of online reviews for a product are negative, do you think this means that 50% of buyers are dissatisfied with the product?11 11Answers will vary. From our own anecdotal experiences, we believe people tend to rant more about products that fell below expectations than rave about those that perform as expected. For this reason, we suspect there is a negative bias in product ratings on sites like Amazon. However, since our experiences may not be representative, we also keep an open mind should data on the subject become available. Explanatory and Response Variables Consider the following question from page 7 for the county data set: (1) Is federal spending, on average, higher or lower in counties with high rates of poverty? If we suspect poverty might a ect spending in a county, then poverty is the explanatory variable and federal spending is the response variable in the relationship.12 If there are many variables, it may be possible to consider a number of them as explanatory variables. 
TIP: Explanatory and response variables To identify the explanatory variable in a pair of variables, identify which of the two is suspected of affecting the other and plan an appropriate analysis. $\text{explanatory variable}\xrightarrow { \text {might affect}} \text {response variable}$ Caution: association does not imply causation Labeling variables as explanatory and response does not guarantee the relationship between the two is actually causal, even if there is an association identified between the two variables. We use these labels only to keep track of which variable we suspect affects the other. In some cases, there is no explanatory or response variable. Consider the following question from page 7: (2) If homeownership is lower than the national average in one county, will the percent of multi-unit structures in that county likely be above or below the national average? It is difficult to decide which of these variables should be considered the explanatory and response variable, i.e. the direction is ambiguous, so no explanatory or response labels are suggested here. 12Sometimes the explanatory variable is called the independent variable and the response variable is called the dependent variable. However, this becomes confusing since a pair of variables might be independent or dependent, so we avoid this language. Introducing observational studies and experiments There are two primary types of data collection: observational studies and experiments. Researchers perform an observational study when they collect data in a way that does not directly interfere with how the data arise. For instance, researchers may collect information via surveys, review medical or company records, or follow a cohort of many similar individuals to study why certain diseases might develop. In each of these situations, researchers merely observe the data that arise. In general, observational studies can provide evidence of a naturally occurring association between variables, but they cannot by themselves show a causal connection. When researchers want to investigate the possibility of a causal connection, they conduct an experiment. Usually there will be both an explanatory and a response variable. For instance, we may suspect administering a drug will reduce mortality in heart attack patients over the following year. To check if there really is a causal connection between the explanatory variable and the response, researchers will collect a sample of individuals and split them into groups. The individuals in each group are assigned a treatment. When individuals are randomly assigned to a group, the experiment is called a randomized experiment. For example, each heart attack patient in the drug trial could be randomly assigned, perhaps by flipping a coin, into one of two groups: the first group receives a placebo (fake treatment) and the second group receives the drug. See the case study in Section 1.1 for another example of an experiment, though that study did not employ a placebo. TIP: association $\ne$ causation In general, association does not imply causation, and causation can only be inferred from a randomized experiment.
textbooks/stats/Introductory_Statistics/OpenIntro_Statistics_(Diez_et_al)./01%3A_Introduction_to_Data/1.04%3A_Overview_of_Data_Collection_Principles.txt
Observational Studies Generally, data in observational studies are collected only by monitoring what occurs, what occurs, while experiments require the primary explanatory variable in a study be assigned for each subject by the researchers. Making causal conclusions based on experiments is often reasonable. However, making the same causal conclusions based on observational data can be treacherous and is not recommended. Thus, observational studies are generally only sufficient to show associations. Exercise \(1\) Suppose an observational study tracked sunscreen use and skin cancer, and it was found that the more sunscreen someone used, the more likely the person was to have skin cancer. Does this mean sunscreen causes skin cancer? Solution No. See the paragraph following the exercise for an explanation. Some previous research tells us that using sunscreen actually reduces skin cancer risk, so maybe there is another variable that can explain this hypothetical association between sunscreen usage and skin cancer. One important piece of information that is absent is sun exposure. If someone is out in the sun all day, she is more likely to use sunscreen and more likely to get skin cancer. Exposure to the sun is unaccounted for in the simple investigation. Sun exposure is what is called a confounding variable (also called a lurking variable, confounding factor, or a confounder), which is a variable that is correlated with both the explanatory and response variables. While one method to justify making causal conclusions from observational studies is to exhaust the search for confounding variables, there is no guarantee that all confounding variables can be examined or measured. In the same way, the county data set is an observational study with confounding variables, and its data cannot easily be used to make causal conclusions. Exercise \(2\) Figure 1.9 shows a negative association between the homeownership rate and the percentage of multi-unit structures in a county. However, it is unreasonable to conclude that there is a causal relationship between the two variables. Suggest one or more other variables that might explain the relationship visible in Figure 1.9. Solution Answers will vary. Population density may be important. If a county is very dense, then this may require a larger fraction of residents to live in multi-unit structures. Additionally, the high density may contribute to increases in property value, making homeownership infeasible for many residents. Observational studies come in two forms: prospective and retrospective studies. A prospective study identifies individuals and collects information as events unfold. For instance, medical researchers may identify and follow a group of similar individuals over many years to assess the possible influences of behavior on cancer risk. One example of such a study is The Nurses Health Study, started in 1976 and expanded in 1989. This prospective study recruits registered nurses and then collects data from them using questionnaires. Retrospective studies collect data after events have taken place, e.g. researchers may review past events in medical records. Some data sets, such as county, may contain both rospectively- and retrospectively-collected variables. Local governments prospectively collect some variables as events unfolded (e.g. retails sales) while the federal government retrospectively collected others during the 2010 census (e.g. county population counts). 
Three Sampling Methods Almost all statistical methods are based on the notion of implied randomness. If observational data are not collected in a random framework from a population, these statistical methods are not reliable. Here we consider three random sampling techniques: simple, stratified, and cluster sampling. Figure 1.14 provides a graphical representation of these techniques. Simple random sampling is probably the most intuitive form of random sampling. Consider the salaries of Major League Baseball (MLB) players, where each player is a member of one of the league's 30 teams. To take a simple random sample of 120 baseball players and their salaries from the 2010 season, we could write the names of that season's 828 players onto slips of paper, drop the slips into a bucket, shake the bucket around until we are sure the names are all mixed up, then draw out slips until we have the sample of 120 players. In general, a sample is referred to as "simple random" if each case in the population has an equal chance of being included in the nal sample and knowing that a case is included in a sample does not provide useful information about which other cases are included. Stratified sampling is a divide-and-conquer sampling strategy. The population is divided into groups called strata. The strata are chosen so that similar cases are grouped together, then a second sampling method, usually simple random sampling, is employed within each stratum. In the baseball salary example, the teams could represent the strata; some teams have a lot more money (we're looking at you, Yankees). Then we might randomly sample 4 players from each team for a total of 120 players. Stratified sampling is especially useful when the cases in each stratum are very similar with respect to the outcome of interest. The downside is that analyzing data from a stratified sample is a more complex task than analyzing data from a simple random sample. The analysis methods introduced in this book would need to be extended to analyze data collected using stratified sampling. Example \(1\) Why would it be good for cases within each stratum to be very similar? Solution We might get a more stable estimate for the subpopulation in a stratum if the cases are very similar. These improved estimates for each subpopulation will help us build a reliable estimate for the full population. A cluster sample is much like a two-stage simple random sample. We break up the population into many groups, called clusters. Then we sample a fixed number of clusters and collect a simple random sample within each cluster. This technique is similar to stratified sampling in its process, except that there is no requirement in cluster sampling to sample from every cluster. Stratified sampling requires observations be sampled from every stratum. Sometimes cluster sampling can be a more economical random sampling technique than the alternatives. Also, unlike stratified sampling, cluster sampling is most helpful when there is a lot of case-to-case variability within a cluster but the clusters themselves don't look very different from one another. For example, if neighborhoods represented clusters, then this sampling method works best when the neighborhoods are very diverse. A downside of cluster sampling is that more advanced analysis techniques are typically required, though the methods in this book can be extended to handle such data. Example \(3\) Suppose we are interested in estimating the malaria rate in a densely tropical portion of rural Indonesia. 
We learn that there are 30 villages in that part of the Indonesian jungle, each more or less similar to the next. Our goal is to test 150 individuals for malaria. What sampling method should be employed? Solution A simple random sample would likely draw individuals from all 30 villages, which could make data collection extremely expensive. Stratified sampling would be a challenge since it is unclear how we would build strata of similar individuals. However, cluster sampling seems like a very good idea. First, we might randomly select half the villages, then randomly select 10 people from each. This would probably reduce our data collection costs substantially in comparison to a simple random sample and would still give us reliable information.
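A rough sketch of this two-stage cluster sample, written in Python with hypothetical village and resident identifiers (nothing here comes from a real survey), might look like the following.

```python
# Cluster sampling sketch for the malaria example: randomly pick 15 of the
# 30 villages, then take a simple random sample of 10 people in each.
# Village and resident identifiers are hypothetical placeholders.
import random

random.seed(1)  # fixed seed only so the illustration is reproducible

villages = {f"village_{i}": [f"v{i}_person_{j}" for j in range(200)]
            for i in range(1, 31)}

chosen_villages = random.sample(list(villages), k=15)      # stage 1: sample clusters
sample = [person
          for v in chosen_villages
          for person in random.sample(villages[v], k=10)]  # stage 2: sample within each cluster

print(len(chosen_villages), "villages,", len(sample), "people sampled")
```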
textbooks/stats/Introductory_Statistics/OpenIntro_Statistics_(Diez_et_al)./01%3A_Introduction_to_Data/1.05%3A_Observational_Studies_and_Sampling_Strategies.txt
Studies where the researchers assign treatments to cases are called experiments. When this assignment includes randomization, e.g. using a coin flip to decide which treatment a patient receives, it is called a randomized experiment. Randomized experiments are fundamentally important when trying to show a causal connection between two variables. Principles of experimental design Randomized experiments are generally built on four principles. Controlling. Researchers assign treatments to cases, and they do their best to control any other differences in the groups. For example, when patients take a drug in pill form, some patients take the pill with only a sip of water while others may have it with an entire glass of water. To control for water consumption, a doctor may ask all patients to drink a 12 ounce glass of water with the pill. Randomization. Researchers randomize patients into treatment groups to account for variables that cannot be controlled. For example, some patients may be more susceptible to a disease than others due to their dietary habits. Randomizing patients into the treatment or control group helps even out such differences, and it also prevents accidental bias from entering the study. Replication. The more cases researchers observe, the more accurately they can estimate the effect of the explanatory variable on the response. In a single study, we replicate by collecting a sufficiently large sample. Additionally, a group of scientists may replicate an entire study to verify an earlier finding. Blocking. Researchers sometimes know or suspect that variables, other than the treatment, influence the response. Under these circumstances, they may first group individuals based on this variable into blocks and then randomize cases within each block to the treatment groups. This strategy is often referred to as blocking. For instance, if we are looking at the effect of a drug on heart attacks, we might first split patients in the study into low-risk and high-risk blocks, then randomly assign half the patients from each block to the control group and the other half to the treatment group, as shown in Figure 1.15. This strategy ensures each treatment group has an equal number of low-risk and high-risk patients. It is important to incorporate the first three experimental design principles into any study, and this book describes applicable methods for analyzing data from such experiments. Blocking is a slightly more advanced technique, and statistical methods in this book may be extended to analyze data collected using blocking. Reducing bias in human experiments Randomized experiments are the gold standard for data collection, but they do not ensure an unbiased perspective into the cause and effect relationships in all cases. Human studies are perfect examples where bias can unintentionally arise. Here we reconsider a study where a new drug was used to treat heart attack patients.17 In particular, researchers wanted to know if the drug reduced deaths in patients. 17Anturane Reinfarction Trial Research Group. 1980. Sulfinpyrazone in the prevention of sudden death after myocardial infarction. New England Journal of Medicine 302(5):250-256. These researchers designed a randomized experiment because they wanted to draw causal conclusions about the drug's effect. Study volunteers18 were randomly placed into two study groups. One group, the treatment group, received the drug. The other group, called the control group, did not receive any drug treatment. Put yourself in the place of a person in the study.
If you are in the treatment group, you are given a fancy new drug that you anticipate will help you. On the other hand, a person in the other group doesn't receive the drug and sits idly, hoping her participation doesn't increase her risk of death. These perspectives suggest there are actually two effects: the one of interest is the effectiveness of the drug, and the second is an emotional effect that is difficult to quantify. Researchers aren't usually interested in the emotional effect, which might bias the study. To circumvent this problem, researchers do not want patients to know which group they are in. When researchers keep the patients uninformed about their treatment, the study is said to be blind. But there is one problem: if a patient doesn't receive a treatment, she will know she is in the control group. The solution to this problem is to give fake treatments to patients in the control group. A fake treatment is called a placebo, and an effective placebo is the key to making a study truly blind. A classic example of a placebo is a sugar pill that is made to look like the actual treatment pill. Often times, a placebo results in a slight but real improvement in patients. This effect has been dubbed the placebo effect. The patients are not the only ones who should be blinded: doctors and researchers can accidentally bias a study. When a doctor knows a patient has been given the real treatment, she might inadvertently give that patient more attention or care than a patient that she knows is on the placebo. To guard against this bias, which again has been found to have a measurable effect in some instances, most modern studies employ a double-blind setup where doctors or researchers who interact with patients are, just like the patients, unaware of who is or is not receiving the treatment.19 Exercise 1.14 Look back to the study in Section 1.1 where researchers were testing whether stents were effective at reducing strokes in at-risk patients. Is this an experiment? Was the study blinded? Was it double-blinded?20
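As a small illustration of the randomization and blocking principles described earlier in this section, here is a Python sketch with hypothetical patient labels; it is a schematic of the idea, not the assignment procedure used in either study discussed here.

```python
# Randomized assignment with blocking: split patients into low-risk and
# high-risk blocks, then randomize half of each block to treatment and
# half to control. Patient labels are hypothetical.
import random

random.seed(7)  # fixed seed only so the illustration is reproducible

low_risk = [f"low_{i}" for i in range(1, 21)]    # 20 low-risk patients
high_risk = [f"high_{i}" for i in range(1, 21)]  # 20 high-risk patients

assignment = {"treatment": [], "control": []}
for block in (low_risk, high_risk):
    shuffled = random.sample(block, k=len(block))   # random order within the block
    half = len(shuffled) // 2
    assignment["treatment"] += shuffled[:half]
    assignment["control"] += shuffled[half:]

# Each group now contains 10 low-risk and 10 high-risk patients.
print(len(assignment["treatment"]), len(assignment["control"]))
```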
textbooks/stats/Introductory_Statistics/OpenIntro_Statistics_(Diez_et_al)./01%3A_Introduction_to_Data/1.06%3A_Experiments.txt
In this section we will be introduced to techniques for exploring and summarizing numerical variables. The email50 and county data sets from Section 1.2 provide rich opportunities for examples. Recall that outcomes of numerical variables are numbers on which it is reasonable to perform basic arithmetic operations. For example, the pop2010 variable, which represents the populations of counties in 2010, is numerical since we can sensibly discuss the difference or ratio of the populations in two counties. On the other hand, area codes and zip codes are not numerical, but rather they are categorical variables. 18Human subjects are often called patients, volunteers, or study participants. 19There are always some researchers involved in the study who do know which patients are receiving which treatment. However, they do not interact with the study's patients and do not tell the blinded health care professionals who is receiving which treatment. 20The researchers assigned the patients into their treatment groups, so this study was an experiment. However, the patients could distinguish what treatment they received, so this study was not blind. The study could not be double-blind since it was not blind. Scatterplots for Paired Data A scatterplot provides a case-by-case view of data for two numerical variables. In Figure 1.8 on page 7, a scatterplot was used to examine how federal spending and poverty were related in the county data set. Another scatterplot is shown in Figure 1.16, comparing the number of line breaks (line_breaks) and number of characters (num_char) in emails for the email50 data set. In any scatterplot, each point represents a single case. Since there are 50 cases in email50, there are 50 points in Figure 1.16. To put the number of characters in perspective, this paragraph has 363 characters. Looking at Figure 1.16, it seems that some emails are incredibly verbose! Upon further investigation, we would actually find that most of the long emails use the HTML format, which means most of the characters in those emails are used to format the email rather than provide text. Exercise $1$ What do scatterplots reveal about the data, and how might they be useful? Solution Answers may vary. Scatterplots are helpful in quickly spotting associations relating variables, whether those associations come in the form of simple trends or whether those relationships are more complex. Example 1.16 Consider a new data set of 54 cars with two variables: vehicle price and weight.22 A scatterplot of vehicle price versus weight is shown in Figure 1.17. What can be said about the relationship between these variables? The relationship is evidently nonlinear, as highlighted by the dashed line. This is different from previous scatterplots we've seen, such as Figure 1.8 on page 7 and Figure 1.16, which show relationships that are very linear. Exercise $1$ Describe two variables that would have a horseshoe shaped association in a scatterplot. Solution Consider the case where your vertical axis represents something "good" and your horizontal axis represents something that is only good in moderation. Health and water consumption fit this description since water becomes toxic when consumed in excessive quantities. Dot Plots and the Mean Sometimes two variables is one too many: only one variable may be of interest. In these cases, a dot plot provides the most basic of displays.
A dot plot is a one-variable scatterplot; an example using the number of characters from 50 emails is shown in Figure 1.17. A stacked version of this dot plot is shown in Figure 1.19. The mean, sometimes called the average, is a common way to measure the center of a distribution of data. To find the mean number of characters in the 50 emails, we add up all the character counts and divide by the number of emails. For computational convenience, the number of characters is listed in the thousands and rounded to the first decimal. $\bar {X} = \frac {21.7 + 7.0 + \dots + 15.8}{50} = 11.6$ The sample mean is often labeled $\bar{X}$. The letter x is being used as a generic placeholder for the variable of interest, num char, and the bar says it is the average number of characters in the 50 emails was 11,600. It is useful to think of the mean as the balancing point of the distribution. The sample mean is shown as a triangle in Figures 1.19 and 1.20. Definition: Mean The sample mean of a numerical variable is computed as the sum of all of the observations divided by the number of observations: $\bar {X} = \frac {x_1 + x_2 + \dots + x_n}{n} \label {1.19}$ where $x_1, x_2, \dots, x_n$ represent the n observed values. Exercise $1$ Examine Equations \ref{1.18} and \ref{1.19} above. What does x1 correspond to? And x2? Can you infer a general meaning to what xi might represent? Solution x1 corresponds to the number of characters in the first email in the sample (21.7, in thousands), x2 to the number of characters in the second email (7.0, in thousands), and xi corresponds to the number of characters in the ith email in the data set. Exercise $1$ What was n in this sample of emails? Solution The sample size was n = 50. Exercise 1.21 The email50 data set represents a sample from a larger population of emails that were received in January and March. We could compute a mean for this population in the same $\mu$ way as the sample mean, however, the population mean has a special label: $\mu$ The symbol $\mu$ is the Greek letter mu and represents the average of all observations in the population. Sometimes a subscript, such as x, is used to represent which variable the population mean refers to, e.g. $\mu_x$. Example 1.22 The average number of characters across all emails can be estimated using the sample data. Based on the sample of 50 emails, what would be a reasonable estimate of $\mu_x$, the mean number of characters in all emails in the email data set? (Recall that email50 is a sample from email.) The sample mean, 11,600, may provide a reasonable estimate of $\mu_x$. While this number will not be perfect, it provides a point estimate of the population mean. In Chapter 4 and beyond, we will develop tools to characterize the accuracy of point estimates, and we will nd that point estimates based on larger samples tend to be more accurate than those based on smaller samples. Example 1.23 We might like to compute the average income per person in the US. To do so, we might first think to take the mean of the per capita incomes across the 3,143 counties in the county data set. What would be a better approach? The county data set is special in that each county actually represents many individual people. If we were to simply average across the income variable, we would be treating counties with 5,000 and 5,000,000 residents equally in the calculations. Instead, we should compute the total income for each county, add up all the counties' totals, and then divide by the number of people in all the counties. 
If we completed these steps with the county data, we would nd that the per capita income for the US is $27,348.43. Had we computed the simple mean of per capita income across counties, the result would have been just$22,504.70! Example 1.23 used what is called a weighted mean, which will not be a key topic in this textbook. However, we have provided an online supplement on weighted means for interested readers: Histograms and Shape Dot plots show the exact value for each observation. This is useful for small data sets, but they can become hard to read with larger samples. Rather than showing the value of each observation, we prefer to think of the value as belonging to a bin. For example, in the email50 data set, we create a table of counts for the number of cases with character counts between 0 and 5,000, then the number of cases between 5,000 and 10,000, and so on. Observations that fall on the boundary of a bin (e.g. 5,000) are allocated to the lower bin. This tabulation is shown in Table 1.20. These binned counts are plotted as bars in Figure 1.21 into what is called a histogram, which resembles the stacked dot plot shown Table 1.20: The counts for the binned num_char data. Characters (in thousands) 0-5 5-10 10-15 15-20 20-25 25-30 $\dots$ 55-60 60-65 Count 19 12 6 2 3 5 $\dots$ 0 1 Histograms provide a view of the data density. Higher bars represent where the data are relatively more common. For instance, there are many more emails with fewer than 20,000 characters than emails with at least 20,000 in the data set. The bars make it easy to see how the density of the data changes relative to the number of characters. Histograms are especially convenient for describing the shape of the data distribution. Figure 1.21 shows that most emails have a relatively small number of characters, while fewer emails have a very large number of characters. When data trail off to the right in this way and have a longer right tail, the shape is said to be right skewed.26 Data sets with the reverse characteristic - a long, thin tail to the left - are said to be left skewed. We also say that such a distribution has a long left tail. Data sets that show roughly equal trailing off in both directions are called symmetric. 26Other ways to describe data that are skewed to the right: skewed to the right, skewed to the high end, or skewed to the positive end. Long tails to identify skew When data trail off in one direction, the distribution has a long tail. If a distribution has a long left tail, it is left skewed. If a distribution has a long right tail, it is right skewed. Exercise $1$ Take a look at the dot plots in Figures 1.18 and 1.19. Can you see the skew in the data? Is it easier to see the skew in this histogram or the dot plots? Solution The skew is visible in all three plots, though the at dot plot is the least useful. The stacked dot plot and histogram are helpful visualizations for identifying skew. Exercise $1$ Besides the mean (since it was labeled), what can you see in the dot plots that you cannot see in the histogram? Solution Character counts for individual emails. In addition to looking at whether a distribution is skewed or symmetric, histograms can be used to identify modes. A mode is represented by a prominent peak in the distribution. There is only one prominent peak in the histogram of num char. Another Definition: Mode Another definition of mode, which is not typically used in statistics, is the value with the most occurrences. 
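The contrast in Example 1.23 between the simple mean of per capita incomes and the population-weighted mean can be seen with a toy calculation. The two counties below are hypothetical, chosen only to exaggerate the effect of very different population sizes.

```python
# Simple mean of per capita incomes versus a population-weighted mean.
# The two counties are hypothetical: one small, one very large.
populations = [5_000, 5_000_000]
per_capita_income = [20_000, 30_000]

simple_mean = sum(per_capita_income) / len(per_capita_income)   # treats the counties equally

total_income = sum(p * inc for p, inc in zip(populations, per_capita_income))
weighted_mean = total_income / sum(populations)                  # a person-level average

print(simple_mean)              # 25000.0
print(round(weighted_mean, 2))  # about 29990.01, dominated by the large county
```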
It is common to have no observations with the same value in a data set, which makes this other definition useless for many real data sets. Figure 1.22 shows histograms that have one, two, or three prominent peaks. Such distributions are called unimodal, bimodal, and multimodal, respectively. Any distribution with more than 2 prominent peaks is called multimodal. Notice that there was one prominent peak in the unimodal distribution with a second less prominent peak that was not counted since it only differs from its neighboring bins by a few observations. Exercise $1$ Figure 1.21 reveals only one prominent mode in the number of characters. Is the distribution unimodal, bimodal, or multimodal? Solution Unimodal. Remember that uni stands for 1 (think unicycles). Similarly, bi stands for 2 (think bicycles). (We're hoping a multicycle will be invented to complete this analogy.) Exercise 1.27 Height measurements of young students and adult teachers at a K-3 elementary school were taken. How many modes would you anticipate in this height data set?31 TIP: Looking for modes Looking for modes isn't about finding a clear and correct answer about the number of modes in a distribution, which is why prominent is not rigorously defined in this book. The important part of this examination is to better understand your data and how it might be structured. Variance and Standard Deviation The mean was introduced as a method to describe the center of a data set, but the variability in the data is also important. Here, we introduce two measures of variability: the variance and the standard deviation. Both of these are very useful in data analysis, even though their formulas are a bit tedious to calculate by hand. The standard deviation is the easier of the two to understand, and it roughly describes how far away the typical observation is from the mean. We call the distance of an observation from its mean its deviation. Below are the deviations for the 1st, 2nd, 3rd, and 50th observations in the num_char variable. For computational convenience, the number of characters is listed in the thousands and rounded to the first decimal. $x_1 - \bar {x} = 21.7 - 11.6 = 10.1$ $x_2 - \bar {x} = 7.0 - 11.6 = -4.6$ $x_3 - \bar {x} = 0.6 - 11.6 = -11.0$ $\vdots$ $x_{50} - \bar {x} = 15.8 - 11.6 = 4.2$ 31There might be two height groups visible in the data set: one of the students and one of the adults. That is, the data are probably bimodal. If we square these deviations and then take an average, the result is about equal to the sample variance, denoted by $s^2$: $s^2 = \frac {10.1^2 + (-4.6)^2 + (-11.0)^2 + \dots + 4.2^2}{50 - 1} = \frac {102.01 + 21.16 + 121.00 + \dots + 17.64}{49} = 172.44$ We divide by n - 1, rather than dividing by n, when computing the variance; you need not worry about this mathematical nuance for the material in this textbook. Notice that squaring the deviations does two things. First, it makes large values much larger, seen by comparing $10.1^2$, $(-4.6)^2$, $(-11.0)^2$, and $4.2^2$. Second, it gets rid of any negative signs. The standard deviation is defined as the square root of the variance: $s = \sqrt {172.44} = 13.13$ The standard deviation of the number of characters in an email is about 13.13 thousand. A subscript of x may be added to the variance and standard deviation, i.e. $s^2_x$ and $s_x$, as a reminder that these are the variance and standard deviation of the observations represented by $x_1, x_2, \dots, x_n$.
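As a rough check on the arithmetic just described, here is a minimal Python sketch of the same deviation, variance, and standard deviation calculations. The short data list mostly echoes the observations quoted above plus one made-up value, so its output will not reproduce 172.44 exactly.

```python
# Minimal sketch of the deviation, variance, and standard deviation
# calculations. The data are a small sample (in thousands of characters);
# the 9.5 is made up, the rest match observations quoted above.
data = [21.7, 7.0, 0.6, 9.5, 15.8]

n = len(data)
mean = sum(data) / n
deviations = [x - mean for x in data]

# Sample variance divides by n - 1 rather than n.
variance = sum(d ** 2 for d in deviations) / (n - 1)
std_dev = variance ** 0.5

print(mean, variance, std_dev)
```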
The x subscript is usually omitted when it is clear which data the variance or standard deviation is referencing. Variance and standard deviation The variance is roughly the average squared distance from the mean. The standard deviation is the square root of the variance. The standard deviation is useful when considering how close the data are to the mean. Formulas and methods used to compute the variance and standard deviation for a population are similar to those used for a sample (the only difference is that the population variance has a division by n instead of n - 1). However, like the mean, the population values have special symbols: $\sigma^2$ for the variance and $\sigma$ for the standard deviation. The symbol $\sigma$ is the Greek letter sigma. TIP: standard deviation describes variability Focus on the conceptual meaning of the standard deviation as a descriptor of variability rather than the formulas. Usually about 70% of the data will be within one standard deviation of the mean and about 95% will be within two standard deviations. However, as seen in Figures 1.23 and 1.24, these percentages are not strict rules. Exercise $1$ Earlier, the concept of shape of a distribution was introduced. A good description of the shape of a distribution should include modality and whether the distribution is symmetric or skewed to one side. Using Figure 1.24 as an example, explain why such a description is important. Solution Figure 1.24 shows three distributions that look quite different, but all have the same mean, variance, and standard deviation. Using modality, we can distinguish between the first plot (bimodal) and the last two (unimodal). Using skewness, we can distinguish between the last plot (right skewed) and the first two. While a picture, like a histogram, tells a more complete story, we can use modality and shape (symmetry/skew) to characterize basic information about a distribution. Example 1.29 Describe the distribution of the num_char variable using the histogram in Figure 1.21. The description should incorporate the center, variability, and shape of the distribution, and it should also be placed in context: the number of characters in emails. Also note any especially unusual cases. The distribution of email character counts is unimodal and very strongly skewed to the high end. Many of the counts fall near the mean at 11,600, and most fall within one standard deviation (13,130) of the mean. There is one exceptionally long email with about 65,000 characters. In practice, the variance and standard deviation are sometimes used as a means to an end, where the "end" is being able to accurately estimate the uncertainty associated with a sample statistic. For example, in Chapter 4 we will use the variance and standard deviation to assess how close the sample mean is to the population mean. Box plots, Quartiles, and the Median A box plot summarizes a data set using five statistics while also plotting unusual observations. Figure 1.25 provides a vertical dot plot alongside a box plot of the num_char variable from the email50 data set. The first step in building a box plot is drawing a dark line denoting the median, which splits the data in half. Figure 1.25 shows 50% of the data falling below the median (dashes) and the other 50% falling above the median (open circles). There are 50 character counts in the data set (an even number) so the data are perfectly split into two groups of 25.
We take the median in this case to be the average of the two observations closest to the 50th percentile: (6,768 + 7,012)/2 = 6,890. When there are an odd number of observations, there will be exactly one observation that splits the data into two halves, and in this case that observation is the median (no average needed). Definition: Median - the number in the middle If the data are ordered from smallest to largest, the median is the observation right in the middle. If there are an even number of observations, there will be two values in the middle, and the median is taken as their average. The second step in building a box plot is drawing a rectangle to represent the middle 50% of the data. The total length of the box, shown vertically in Figure 1.25, is called the interquartile range (IQR, for short). It, like the standard deviation, is a measure of variability in data. The more variable the data, the larger the standard deviation and IQR. The two boundaries of the box are called the first quartile (the 25th percentile, i.e. 25% of the data fall below this value) and the third quartile (the 75th percentile), and these are often labeled Q1 and Q3, respectively. Definition: Interquartile range (IQR) The IQR is the length of the box in a box plot. It is computed as $IQR = Q_3 - Q_1$ where Q1 and Q3 are the 25th and 75th percentiles. Exercise $1$ What percent of the data fall between $Q_1$ and the median? What percent is between the median and $Q_3$?34 34Since $Q_1$ and $Q_3$ capture the middle 50% of the data and the median splits the data in the middle, 25% of the data fall between $Q_1$ and the median, and another 25% falls between the median and $Q_3$. Extending out from the box, the whiskers attempt to capture the data outside of the box; however, their reach is never allowed to be more than $1.5 \times IQR$ (while the choice of exactly 1.5 is arbitrary, it is the most commonly used value for box plots). They capture everything within this reach. In Figure 1.25, the upper whisker does not extend to the last three points, which are beyond $Q_3 + 1.5 \times IQR$, and so it extends only to the last point below this limit. The lower whisker stops at the lowest value, 33, since there is no additional data to reach; the lower whisker's limit is not shown in the figure because the plot does not extend down to $Q_1 - 1.5 \times IQR$. In a sense, the box is like the body of the box plot and the whiskers are like its arms trying to reach the rest of the data. Any observation that lies beyond the whiskers is labeled with a dot. The purpose of labeling these points - instead of just extending the whiskers to the minimum and maximum observed values - is to help identify any observations that appear to be unusually distant from the rest of the data. Unusually distant observations are called outliers. In this case, it would be reasonable to classify the emails with character counts of 41,623, 42,793, and 64,401 as outliers since they are numerically distant from most of the data. Definition: Outliers are extreme An outlier is an observation that appears extreme relative to the rest of the data. TIP: Why it is important to look for outliers Examination of data for possible outliers serves many useful purposes, including 1. Identifying strong skew in the distribution. 2. Identifying data collection or entry errors. For instance, we re-examined the email purported to have 64,401 characters to ensure this value was accurate. 3. Providing insight into interesting properties of the data.
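The median, quartile, and whisker-limit calculations described above are easy to sketch in code. The example below uses Python's standard statistics module on a small made-up sample; note that different software uses slightly different quartile conventions, so results can differ a little from hand calculations.

```python
# Minimal sketch of the median, quartiles, IQR, and the 1.5 * IQR whisker
# limits used by box plots. The data are a small made-up sample of character
# counts (in thousands), not the actual email50 values.
import statistics

data = sorted([0.6, 2.5, 5.1, 6.8, 7.0, 9.5, 15.8, 21.7, 42.8, 64.4])

median = statistics.median(data)
q1, _, q3 = statistics.quantiles(data, n=4)   # 25th, 50th, 75th percentiles
iqr = q3 - q1

lower_limit = q1 - 1.5 * iqr
upper_limit = q3 + 1.5 * iqr
outliers = [x for x in data if x < lower_limit or x > upper_limit]

print(median, q1, q3, iqr, outliers)
```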
Exercise $1$ The observation 64,401, a suspected outlier, was found to be an accurate observation. What would such an observation suggest about the nature of character counts in emails?36 36That occasionally there may be very long emails. Exercise $1$ Using Figure 1.25, estimate the following values for num_char in the email50 data set: (a) Q1, (b) Q3, and (c) IQR.37 37These visual estimates will vary a little from one person to the next: $Q_1$ = 3,000, $Q_3$ = 15,000, IQR = $Q_3 - Q_1$ = 12,000. (The true values: $Q_1$ = 2,536, $Q_3$ = 15,411, IQR = 12,875.) Robust Statistics How are the sample statistics of the num_char data set affected by the observation 64,401? What would have happened if this email wasn't observed? What would happen to these summary statistics if the observation at 64,401 had been even larger, say 150,000? These scenarios are plotted alongside the original data in Figure 1.26, and sample statistics are computed under each scenario in Table 1.27. Table 1.27: A comparison of how the median, IQR, mean ($\bar {x}$), and standard deviation (s) change when extreme observations are present. The median and IQR are robust; the mean and standard deviation are not.
(columns: median, IQR, $\bar {x}$, s)
original num_char data: 6,890, 12,875, 11,600, 13,130
drop the 64,401 observation: 6,768, 11,702, 10,521, 10,798
move 64,401 to 150,000: 6,890, 12,875, 13,310, 22,434
Exercise $1$ (a) Which is more affected by extreme observations, the mean or median? Table 1.27 may be helpful. (b) Is the standard deviation or IQR more affected by extreme observations?38 38(a) Mean is affected more. (b) Standard deviation is affected more. Complete explanations are provided in the material following Exercise 1.33. The median and IQR are called robust estimates because extreme observations have little effect on their values. The mean and standard deviation are much more affected by changes in extreme observations. Example 1.34 The median and IQR do not change much under the three scenarios in Table 1.27. Why might this be the case? The median and IQR are only sensitive to numbers near Q1, the median, and Q3. Since values in these regions are relatively stable - there aren't large jumps between observations - the median and IQR estimates are also quite stable. Exercise $1$ The distribution of vehicle prices tends to be right skewed, with a few luxury and sports cars lingering out into the right tail. If you were searching for a new car and cared about price, should you be more interested in the mean or median price of vehicles sold, assuming you are in the market for a regular car?39 39Buyers of a "regular car" should be concerned about the median price. High-end car sales can drastically inflate the mean price while the median will be more robust to the influence of those sales. Transforming data (special topic) When data are very strongly skewed, we sometimes transform them so they are easier to model. Consider the histogram of Major League Baseball players' salaries from 2010, which is shown in Figure 1.28(a). Example 1.36 The histogram of MLB player salaries is useful in that we can see the data are extremely skewed and centered (as gauged by the median) at about $1 million. What isn't useful about this plot? Most of the data are collected into one bin in the histogram and the data are so strongly skewed that many details in the data are obscured. There are some standard transformations that are often applied when much of the data cluster near zero (relative to the larger values in the data set) and all observations are positive; a sketch of the most common one, the logarithm, appears below.
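As a quick preview of what such a transformation does, the sketch below applies the natural log to a handful of made-up salary values; these are not the actual MLB data, which are discussed next.

```python
# Minimal sketch of a log transformation for a strongly right-skewed,
# positive variable. The salary values below are made up (in dollars) and
# are not the actual MLB salary data.
import math

salaries = [414_000, 500_000, 750_000, 1_200_000, 3_500_000, 18_000_000]

log_salaries = [math.log(s) for s in salaries]   # natural log, often written ln

print(log_salaries)
# The transformed values are far less spread out at the high end, so a
# histogram of log_salaries is much less skewed than one of salaries.
```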
A transformation is a rescaling of the data using a function. For instance, a plot of the natural logarithm40 of player salaries results in a new histogram in Figure 1.28(b). Transformed data are sometimes easier to work with when applying statistical models because the transformed data are much less skewed and outliers are usually less extreme. Transformations can also be applied to one or both variables in a scatterplot. A scatterplot of the line_breaks and num_char variables is shown in Figure 1.29(a), which was earlier shown in Figure 1.16. We can see a positive association between the variables and that many observations are clustered near zero. In Chapter 7, we might want to use a straight line to model the data. However, we'll find that the data in their current state cannot be modeled very well. Figure 1.29(b) shows a scatterplot where both the line_breaks and num_char variables have been transformed using a log (base e) transformation. While there is a positive association in each plot, the transformed data show a steadier trend, which is easier to model than the untransformed data. Transformations other than the logarithm can be useful, too. For instance, the square root ($\sqrt{\text{original observation}}$) and inverse ($\frac{1}{\text{original observation}}$) are used by statisticians. Common goals in transforming data are to see the data structure differently, reduce skew, assist in modeling, or straighten a nonlinear relationship in a scatterplot. 40Statisticians often write the natural logarithm as log. You might be more familiar with it being written as ln. Mapping data (special topic) The county data set offers many numerical variables that we could plot using dot plots, scatterplots, or box plots, but these miss the true nature of the data. Rather, when we encounter geographic data, we should map it using an intensity map, where colors are used to show higher and lower values of a variable. Figures 1.30 and 1.31 show intensity maps for federal spending per capita (fed_spend), poverty rate in percent (poverty), homeownership rate in percent (homeownership), and median household income (med_income). The color key indicates which colors correspond to which values. Note that the intensity maps are not generally very helpful for getting precise values in any given county, but they are very helpful for seeing geographic trends and generating interesting research questions. Example 1.37 What interesting features are evident in the fed_spend and poverty intensity maps? The federal spending intensity map shows substantial spending in the Dakotas and along the central-to-western part of the Canadian border, which may be related to the oil boom in this region. There are several other patches of federal spending, such as a vertical strip in eastern Utah and Arizona and the area where Colorado, Nebraska, and Kansas meet. There are also seemingly random counties with very high federal spending relative to their neighbors. If we did not cap the federal spending range at $18 per capita, we would actually find that some counties have extremely high federal spending while there is almost no federal spending in the neighboring counties. These high-spending counties might contain military bases, companies with large government contracts, or other government facilities with many employees. Poverty rates are evidently higher in a few locations. Notably, the deep south shows higher poverty rates, as does the southwest border of Texas.
The vertical strip of eastern Utah and Arizona, noted above for its higher federal spending, also appears to have higher rates of poverty (though generally little correspondence is seen between the two variables). High poverty rates are evident in the Mississippi flood plains a little north of New Orleans and also in a large section of Kentucky and West Virginia. Exercise $1$ What interesting features are evident in the med_income intensity map?41
Like numerical data, categorical data can also be organized and analyzed. In this section, we will introduce tables and other basic tools for categorical data that are used throughout this book. The email50 data set represents a sample from a larger email data set called email. This larger data set contains information on 3,921 emails. In this section we will examine whether the presence of numbers, small or large, in an email provides any useful value in classifying email as spam or not spam. Contingency Tables and Bar Plots Table 1.32 summarizes two variables: spam and number. Recall that number is a categorical variable that describes whether an email contains no numbers, only small numbers (values under 1 million), or at least one big number (a value of 1 million or more). A table that summarizes data for two categorical variables in this way is called a contingency table. Each value in the table represents the number of times a particular combination of variable outcomes occurred. For example, the value 149 corresponds to the number of emails in the data set that are spam and had no number listed in the email. Row and column totals are also included. The row totals provide the total counts across each row (e.g. 149 + 168 + 50 = 367), and column totals are total counts down each column. Table 1.32: A contingency table for spam and number.
(number columns: none, small, big, Total)
spam: 149, 168, 50, 367
not spam: 400, 2659, 495, 3554
Total: 549, 2827, 545, 3921
A table for a single variable is called a frequency table. Table 1.33 is a frequency table for the number variable. If we replaced the counts with percentages or proportions, the table would be called a relative frequency table. Table 1.33: A frequency table for the number variable.
(columns: none, small, big, Total)
counts: 549, 2827, 545, 3921
A bar plot is a common way to display a single categorical variable. The left panel of Figure 1.34 shows a bar plot for the number variable. In the right panel, the counts are converted into proportions (e.g. 549/3921 = 0.140 for none), showing the proportion of observations that are in each level (i.e. in each category). 41Note: answers will vary. There is a very strong correspondence between high earning and metropolitan areas. You might look for large cities you are familiar with and try to spot them on the map as dark spots. Row and Column Proportions Table 1.35 shows the row proportions for Table 1.32. The row proportions are computed as the counts divided by their row totals. The value 149 at the intersection of spam and none is replaced by 149/367 = 0.406, i.e. 149 divided by its row total, 367. So what does 0.406 represent? It corresponds to the proportion of spam emails in the sample that do not have any numbers. Table 1.35: A contingency table with row proportions for the spam and number variables.
(columns: none, small, big, Total)
spam: 149/367 = 0.406, 168/367 = 0.458, 50/367 = 0.136, 1.000
not spam: 400/3554 = 0.113, 2659/3554 = 0.748, 495/3554 = 0.139, 1.000
Total: 549/3921 = 0.140, 2827/3921 = 0.721, 545/3921 = 0.139, 1.000
A contingency table of the column proportions is computed in a similar way, where each column proportion is computed as the count divided by the corresponding column total. Table 1.36 shows such a table, and here the value 0.271 indicates that 27.1% of emails with no numbers were spam. This rate of spam is much higher compared to emails with only small numbers (5.9%) or big numbers (9.2%).
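For readers who want to verify the proportion calculations, the following minimal sketch reproduces a couple of values from Tables 1.35 and 1.36 using the counts in Table 1.32 (plain Python, no external libraries assumed).

```python
# Minimal sketch of row and column proportions from the spam-by-number
# counts in Table 1.32 (counts copied from the table above).
counts = {
    "spam":     {"none": 149, "small": 168,  "big": 50},
    "not spam": {"none": 400, "small": 2659, "big": 495},
}

# Row proportions: divide each count by its row total.
row_props = {
    row: {col: c / sum(cols.values()) for col, c in cols.items()}
    for row, cols in counts.items()
}

# Column proportions: divide each count by its column total.
col_totals = {col: sum(counts[row][col] for row in counts) for col in counts["spam"]}
col_props = {
    row: {col: counts[row][col] / col_totals[col] for col in col_totals}
    for row in counts
}

print(round(row_props["spam"]["none"], 3))   # 0.406, as in Table 1.35
print(round(col_props["spam"]["none"], 3))   # 0.271, as in Table 1.36
```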
Because these spam rates vary between the three levels of number (none, small, big), this provides evidence that the spam and number variables are associated. Table 1.36: A contingency table with column proportions for the spam and number variables.
(columns: none, small, big, Total)
spam: 149/549 = 0.271, 168/2827 = 0.059, 50/545 = 0.092, 367/3921 = 0.094
not spam: 400/549 = 0.729, 2659/2827 = 0.941, 495/545 = 0.908, 3554/3921 = 0.906
Total: 1.000, 1.000, 1.000, 1.000
We could also have checked for an association between spam and number in Table 1.35 using row proportions. When comparing these row proportions, we would look down columns to see if the fraction of emails with no numbers, small numbers, and big numbers varied from spam to not spam. Exercise \(1\) What does 0.458 represent in Table 1.35? What does 0.059 represent in Table 1.36? Solution 0.458 represents the proportion of spam emails that had a small number. 0.059 represents the fraction of emails with small numbers that are spam. Exercise \(2\) What does 0.139 at the intersection of not spam and big represent in Table 1.35? What does 0.908 represent in Table 1.36? Solution 0.139 represents the fraction of non-spam email that had a big number. 0.908 represents the fraction of emails with big numbers that are non-spam emails. Example \(1\) Data scientists use statistics to filter spam from incoming email messages. By noting specific characteristics of an email, a data scientist may be able to classify some emails as spam or not spam with high accuracy. One of those characteristics is whether the email contains no numbers, small numbers, or big numbers. Another characteristic is whether or not an email has any HTML content. A contingency table for the spam and format variables from the email data set is shown in Table 1.37. Recall that an HTML email is an email with the capacity for special formatting, e.g. bold text. In Table 1.37, which would be more helpful to someone hoping to classify email as spam or regular email: row or column proportions? Such a person would be interested in how the proportion of spam changes within each email format. This corresponds to column proportions: the proportion of spam in plain text emails and the proportion of spam in HTML emails. If we generate the column proportions, we can see that a higher fraction of plain text emails are spam (209/1195 = 17.5%) than HTML emails (158/2726 = 5.8%). This information on its own is insufficient to classify an email as spam or not spam, as over 80% of plain text emails are not spam. Yet, when we carefully combine this information with many other characteristics, such as number and other variables, we stand a reasonable chance of being able to classify some email as spam or not spam. This is a topic we will return to in Chapter 8. Table 1.37: A contingency table for spam and format.
(columns: text, HTML, Total)
spam: 209, 158, 367
not spam: 986, 2568, 3554
Total: 1195, 2726, 3921
Example \(1\) points out that row and column proportions are not equivalent. Before settling on one form for a table, it is important to consider each to ensure that the most useful table is constructed. Exercise \(3\) Look back to Tables 1.35 and 1.36. Which would be more useful to someone hoping to identify spam emails using the number variable? Solution The column proportions in Table 1.36 will probably be most useful, which makes it easier to see that emails with small numbers are spam about 5.9% of the time (relatively rare).
We would also see that about 27.1% of emails with no numbers are spam, and 9.2% of emails with big numbers are spam. Segmented Bar and Mosaic Plots Contingency tables using row or column proportions are especially useful for examining how two categorical variables are related. Segmented bar and mosaic plots provide a way to visualize the information in these tables. A segmented bar plot is a graphical display of contingency table information. For example, a segmented bar plot representing Table 1.36 is shown in Figure 1.38(a), where we have first created a bar plot using the number variable and then divided each group by the levels of spam. The column proportions of Table 1.36 have been translated into a standardized segmented bar plot in Figure 1.38(b), which is a helpful visualization of the fraction of spam emails in each level of number. Example \(2\) Examine both of the segmented bar plots. Which is more useful? Solution Figure 1.38(a) contains more information, but Figure 1.38(b) presents the information more clearly. This second plot makes it clear that emails with no number have a relatively high rate of spam email - about 27%! On the other hand, less than 10% of email with small or big numbers are spam. Since the proportion of spam changes across the groups in Figure 1.38(b), we can conclude the variables are dependent, which is something we were also able to discern using table proportions. Because both the none and big groups have relatively few observations compared to the small group, the association is more difficult to see in Figure 1.38(a). In some other cases, a segmented bar plot that is not standardized will be more useful in communicating important information. Before settling on a particular segmented bar plot, create standardized and non-standardized forms and decide which is more effective at communicating features of the data. A mosaic plot is a graphical display of contingency table information that is similar to a bar plot for one variable or a segmented bar plot when using two variables. Figure 1.39(a) shows a mosaic plot for the number variable. Each column represents a level of number, and the column widths correspond to the proportion of emails of each number type. For instance, there are fewer emails with no numbers than emails with only small numbers, so the no number email column is slimmer. In general, mosaic plots use box areas to represent the number of observations that box represents. This one-variable mosaic plot is further divided into pieces in Figure 1.39(b) using the spam variable. Each column is split proportionally according to the fraction of emails that were spam in each number category. For example, the second column, representing emails with only small numbers, was divided into emails that were spam (lower) and not spam (upper). As another example, the bottom of the third column represents spam emails that had big numbers, and the upper part of the third column represents regular emails that had big numbers. We can again use this plot to see that the spam and number variables are associated since some columns are divided in different vertical locations than others, which was the same technique used for checking an association in the standardized version of the segmented bar plot. In a similar way, a mosaic plot representing row proportions of Table 1.32 could be constructed, as shown in Figure 1.40. 
However, because it is more insightful for this application to consider the fraction of spam in each category of the number variable, we prefer Figure 1.39(b). The only pie chart you will see in this book While pie charts are well known, they are not typically as useful as other charts in a data analysis. A pie chart is shown in Figure 1.41 alongside a bar plot. It is generally more difficult to compare group sizes in a pie chart than in a bar plot, especially when categories have nearly identical counts or proportions. In the case of the none and big categories, the difference is so slight you may be unable to distinguish any difference in group sizes for either plot! Comparing numerical data across groups Some of the more interesting investigations can be considered by examining numerical data across groups. The methods required here aren't really new. All that is required is to make a numerical plot for each group. Here two convenient methods are introduced: side-by-side box plots and hollow histograms. We will take a look again at the county data set and compare the median household income for counties that gained population from 2000 to 2010 versus counties that had no gain. While we might like to make a causal connection here, remember that these are observational data and so such an interpretation would be unjustified. There were 2,041 counties where the population increased from 2000 to 2010, and there were 1,099 counties with no gain (all but one were a loss). A random sample of 100 counties from the first group and 50 from the second group are shown in Table 1.42 to give a better sense of some of the raw data. Table 1.42: In this table, median household income (in \$1000s) from a random sample of 100 counties that gained population over 2000-2010 are shown on the left. Median incomes from a random sample of 50 counties that had no population gain are shown on the right. population gain no gain 41.2 33.1 30.4 37.3 79.1 34.5 22.9 39.9 31.4 45.1 50.6 59.4 47.9 36.4 42.2 43.2 31.8 36.9 50.1 27.3 37.5 53.5 26.1 57.2 57.4 42.6 40.6 48.8 28.1 29.4 43.8 26 33.8 35.7 38.5 42.3 41.3 40.5 68.3 31 46.7 30.5 68.3 48.3 38.7 62 37.6 32.2 42.6 53.6 50.7 35.1 30.6 56.8 66.4 41.4 34.3 38.9 37.3 41.7 51.9 83.3 46.3 48.4 40.8 42.6 44.5 34 48.7 45.2 34.7 32.2 39.4 38.6 40 57.3 45.2 33.1 43.8 71.7 45.1 32.2 63.3 54.7 71.3 36.3 36.4 41 37 66.7 50.2 45.8 45.7 60.2 53.1 35.8 40.4 51.5 66.4 36.1 40.3 33.5 34.8 29.5 31.8 41.3 28 39.1 42.8 38.1 39.5 22.3 43.3 37.5 47.1 43.7 36.7 36 35.8 38.7 39.8 46 42.3 48.2 38.6 31.9 31.1 37.6 29.3 30.1 57.5 32.6 31.1 46.2 26.5 40.1 38.4 46.7 25.9 36.4 41.5 45.7 39.7 37 37.7 21.4 29.3 50.1 43.6 39.8 The side-by-side box plot is a traditional tool for comparing across groups. An example is shown in the left panel of Figure 1.43, where there are two box plots, one for each group, placed into one plotting window and drawn on the same scale. Another useful plotting method uses hollow histograms to compare numerical data across groups. These are just the outlines of histograms of each group put on the same plot, as shown in the right panel of Figure 1.43. Exercise \(1\) Use the plots in Figure 1.43 to compare the incomes for counties across the two groups. What do you notice about the approximate center of each group? What do you notice about the variability between groups? Is the shape relatively consistent between groups? How many prominent modes are there for each group? Solution Answers may vary a little. 
The counties with population gains tend to have higher income (median of about \$45,000) versus counties without a gain (median of about \$40,000). The variability is also slightly larger for the population gain group. This is evident in the IQR, which is about 50% bigger in the gain group. Both distributions show slight to moderate right skew and are unimodal. There is a secondary small bump at about \$60,000 for the no gain group, visible in the hollow histogram plot, that seems out of place. (Looking into the data set, we would find that 8 of these 15 counties are in Alaska and Texas.) The box plots indicate there are many observations far above the median in each group, though we should anticipate that many observations will fall beyond the whiskers when using such a large data set. Exercise $1$ What components of each plot in Figure 1.43 do you find most useful? Solution Answers will vary. The side-by-side box plots are especially useful for comparing centers and spreads, while the hollow histograms are more useful for seeing distribution shape, skew, and groups of anomalies.
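A minimal matplotlib sketch of both plot types is shown below. The two income samples are short made-up lists (loosely echoing values in Table 1.42), so the resulting plots will only roughly resemble Figure 1.43.

```python
# Minimal sketch of the two group-comparison plots described above, using
# matplotlib with small made-up income samples (in $1000s), not the full
# county data.
import matplotlib.pyplot as plt

gain = [41.2, 33.1, 30.4, 37.3, 79.1, 34.5, 50.6, 59.4, 47.9, 36.4]
no_gain = [40.3, 33.5, 34.8, 29.5, 31.8, 41.3, 28.0, 39.1, 42.8, 38.1]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))

# Side-by-side box plots on a common scale.
ax1.boxplot([gain, no_gain], labels=["gain", "no gain"])
ax1.set_ylabel("Median household income ($1000s)")

# Hollow histograms: outlines only, drawn on the same axes.
ax2.hist(gain, histtype="step", label="gain")
ax2.hist(no_gain, histtype="step", label="no gain")
ax2.set_xlabel("Median household income ($1000s)")
ax2.legend()

plt.show()
```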
Exercise $1$ Suppose your professor splits the students in class into two groups: students on the left and students on the right. If $\hat {p}_L$ and $\hat {p}_R$ represent the proportion of students who own an Apple product on the left and right, respectively, would you be surprised if $\hat {p}_L$ did not exactly equal $\hat {p}_R$? Solution While the proportions would probably be close to each other, it would be unusual for them to be exactly the same. We would probably observe a small difference due to chance. Exercise $2$ If we don't think the side of the room a person sits on in class is related to whether the person owns an Apple product, what assumption are we making about the relationship between these two variables? Solution We would be assuming that these two variables are independent. Variability Within Data We consider a study investigating gender discrimination in the 1970s, which is set in the context of personnel decisions within a bank (Rosen B and Jerdee T. 1974. Influence of sex role stereotypes on personnel decisions. Journal of Applied Psychology 59(1):9-14). The research question we hope to answer is, "Are females unfairly discriminated against in promotion decisions made by male managers?" The participants in this study are 48 male bank supervisors attending a management institute at the University of North Carolina in 1972. They were asked to assume the role of the personnel director of a bank and were given a personnel file to judge whether the person should be promoted to a branch manager position. The files given to the participants were identical, except that half of them indicated the candidate was male and the other half indicated the candidate was female. These files were randomly assigned to the subjects. Exercise $3$ Is this an observational study or an experiment? What implications does the study type have on what can be inferred from the results? Solution The study is an experiment, as subjects were randomly assigned a male file or a female file. Since this is an experiment, the results can be used to evaluate a causal relationship between the gender of a candidate and the promotion decision. For each supervisor we record the gender associated with the assigned file and the promotion decision. Using the results of the study summarized in Table 1.44, we would like to evaluate if females are unfairly discriminated against in promotion decisions. In this study, a smaller proportion of females are promoted than males (0.583 versus 0.875), but it is unclear whether the difference provides convincing evidence that females are unfairly discriminated against. Table 1.44: Summary results for the gender discrimination study.
(decision columns: promoted, not promoted, Total)
male: 21, 3, 24
female: 14, 10, 24
Total: 35, 13, 48
Example $1$ Statisticians are sometimes called upon to evaluate the strength of evidence. When looking at the rates of promotion for males and females in this study, what comes to mind as we try to determine whether the data show convincing evidence of a real difference? Solution The observed promotion rates (58.3% for females versus 87.5% for males) suggest there might be discrimination against women in promotion decisions. However, we cannot be sure if the observed difference represents discrimination or is just from random chance.
Generally there is a little bit of fluctuation in sample data, and we wouldn't expect the sample proportions to be exactly equal, even if the truth was that the promotion decisions were independent of gender. Example 1.49 is a reminder that the observed outcomes in the sample may not perfectly reflect the true relationships between variables in the underlying population. Table 1.44 shows there were 7 fewer promotions in the female group than in the male group, a difference in promotion rates of 29.2% $( \frac {21}{24} - \frac {14}{24} = 0.292 )$. This difference is large, but the sample size for the study is small, making it unclear if this observed difference represents discrimination or whether it is simply due to chance. We label these two competing claims, H0 and HA: • H0: Independence model. The variables gender and decision are independent. They have no relationship, and the observed difference between the proportion of males and females who were promoted, 29.2%, was due to chance. • HA: Alternative model. The variables gender and decision are not independent. The difference in promotion rates of 29.2% was not due to chance, and equally qualified females are less likely to be promoted than males. What would it mean if the independence model, which says the variables gender and decision are unrelated, is true? It would mean each banker was going to decide whether to promote the candidate without regard to the gender indicated on the file. That is, the difference in the promotion percentages was due to the way the files were randomly divided among the bankers, and the randomization just happened to give rise to a relatively large difference of 29.2%. Consider the alternative model: bankers were influenced by which gender was listed on the personnel file. If this was true, and especially if this influence was substantial, we would expect to see some difference in the promotion rates of male and female candidates. If this gender bias was against females, we would expect a smaller fraction of promotion decisions for female personnel files relative to the male files. We choose between these two competing claims by assessing if the data conflict so much with H0 that the independence model cannot be deemed reasonable. If this is the case, and the data support HA, then we will reject the notion of independence and conclude there was discrimination. Simulating the Study Table 1.44 shows that 35 bank supervisors recommended promotion and 13 did not. Now, suppose the bankers' decisions were independent of gender. Then, if we conducted the experiment again with a different random arrangement of files, differences in promotion rates would be based only on random fluctuation. We can actually perform this randomization, which simulates what would have happened if the bankers' decisions had been independent of gender but we had distributed the files differently. In this simulation, we thoroughly shuffle 48 personnel files, 24 labeled male_sim and 24 labeled female_sim, and deal these files into two stacks. We will deal 35 files into the first stack, which will represent the 35 supervisors who recommended promotion. The second stack will have 13 files, and it will represent the 13 supervisors who recommended against promotion. Then, as we did with the original data, we tabulate the results and determine the fraction of male_sim and female_sim who were promoted. The randomization of files in this simulation is independent of the promotion decisions, which means any difference in the two fractions is entirely due to chance.
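The physical shuffling just described is straightforward to mimic on a computer. The sketch below is one minimal way to do it in Python; the 10,000 repetitions, the function name, and the variable names are choices made for this illustration rather than details from the study. It also counts how often a chance difference is at least as large as the observed 29.2%.

```python
# Minimal sketch of the file-shuffling simulation described above, repeated
# many times so we can see how large a difference in promotion rates arises
# by chance alone under the independence model.
import random

def simulate_once():
    files = ["male_sim"] * 24 + ["female_sim"] * 24
    random.shuffle(files)
    promoted = files[:35]                      # first 35 shuffled files "promoted"
    p_male = promoted.count("male_sim") / 24
    p_female = promoted.count("female_sim") / 24
    return p_male - p_female

diffs = [simulate_once() for _ in range(10_000)]

observed = 21 / 24 - 14 / 24                   # 0.292 from Table 1.44
extreme = sum(d >= observed for d in diffs) / len(diffs)
print(f"Fraction of simulations with a difference >= 29.2%: {extreme:.3f}")
```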
Table 1.45 shows the results of such a simulation. Table 1.45: Simulation results, where any difference in promotion rates between male_sim and female_sim is purely due to chance.
(decision columns: promoted, not promoted, Total)
male_sim: 18, 6, 24
female_sim: 17, 7, 24
Total: 35, 13, 48
Exercise 1.50 What is the difference in promotion rates between the two simulated groups in Table 1.45? How does this compare to the observed 29.2% in the actual groups? Solution 18/24 - 17/24 = 0.042, or about 4.2%. This difference due to chance is much smaller than the observed difference of 29.2%. Checking for Independence We computed one possible difference under the independence model in Exercise 1.50, which represents one difference due to chance. While in this first simulation, we physically dealt out files, it is more efficient to perform this simulation using a computer. Repeating the simulation on a computer, we get another difference due to chance: -0.042. And another: 0.208. And so on until we repeat the simulation enough times that we have a good idea of what represents the distribution of differences from chance alone. Figure 1.46 shows a plot of the differences found from 100 simulations, where each dot represents a simulated difference between the proportions of male and female files that were recommended for promotion. Note that the distribution of these simulated differences is centered around 0. We simulated these differences assuming that the independence model was true, and under this condition, we expect the difference to be zero with some random fluctuation. We would generally be surprised to see a difference of exactly 0: sometimes, just by chance, the difference is higher than 0, and other times it is lower than zero. Example $1$ How often would you observe a difference of at least 29.2% (0.292) according to Figure 1.46? Often, sometimes, rarely, or never? Solution It appears that a difference of at least 29.2% due to chance alone would only happen about 2% of the time according to Figure 1.46. Such a low probability indicates a rare event. The difference of 29.2% being a rare event suggests two possible interpretations of the results of the study: • H0 Independence model. Gender has no effect on promotion decision, and we observed a difference that would only happen rarely. • HA Alternative model. Gender has an effect on promotion decision, and what we observed was actually due to equally qualified women being discriminated against in promotion decisions, which explains the large difference of 29.2%. Based on the simulations, we have two options. (1) We conclude that the study results do not provide strong evidence against the independence model. That is, we do not have sufficiently strong evidence to conclude there was gender discrimination. (2) We conclude the evidence is sufficiently strong to reject H0 and assert that there was gender discrimination. When we conduct formal studies, usually we reject the notion that we just happened to observe a rare event.51 So in this case, we reject the independence model in favor of the alternative. That is, we are concluding the data provide strong evidence of gender discrimination against women by the supervisors. One field of statistics, statistical inference, is built on evaluating whether such differences are due to chance. In statistical inference, statisticians evaluate which model is most reasonable given the data. Errors do occur, just like rare events, and we might choose the wrong model. While we do not always choose correctly, statistical inference gives us tools to control and evaluate how often these errors occur.
In Chapter 4, we give a formal introduction to the problem of model selection. We spend the next two chapters building a foundation of probability and theory necessary to make that discussion rigorous. 51This reasoning does not generally extend to anecdotal observations. Each of us observes incredibly rare events every day, events we could not possibly hope to predict. However, in the non-rigorous setting of anecdotal evidence, almost anything may appear to be a rare event, so the idea of looking for rare events in day-to-day activities is treacherous. For example, we might look at the lottery: there was only a 1 in 176 million chance that the Mega Millions numbers for the largest jackpot in history (March 30, 2012) would be (2, 4, 23, 38, 46) with a Mega ball of (23), but nonetheless those numbers came up! However, no matter what numbers had turned up, they would have had the same incredibly rare odds. That is, any set of numbers we could have observed would ultimately be incredibly rare. This type of situation is typical of our daily lives: each possible event in itself seems incredibly rare, but if we consider every alternative, those outcomes are also incredibly rare. We should be cautious not to misinterpret such anecdotal evidence.
Case study 1.1 Migraine and acupuncture. A migraine is a particularly painful type of headache, which patients sometimes wish to treat with acupuncture. To determine whether acupuncture relieves migraine pain, researchers conducted a randomized controlled study where 89 females diagnosed with migraine headaches were randomly assigned to one of two groups: treatment or control. 43 patients in the treatment group received acupuncture that is specifically designed to treat migraines. 46 patients in the control group received placebo acupuncture (needle insertion at nonacupoint locations). 24 hours after patients received acupuncture, they were asked if they were pain free. Results are summarized in the contingency table below.52
(Pain free columns: Yes, No, Total)
Treatment: 10, 33, 43
Control: 2, 44, 46
Total: 12, 77, 89
1. What percent of patients in the treatment group were pain free 24 hours after receiving acupuncture? What percent in the control group? 2. At first glance, does acupuncture appear to be an effective treatment for migraines? Explain your reasoning. 3. Do the data provide convincing evidence that there is a real pain reduction for those patients in the treatment group? Or do you think that the observed difference might just be due to chance? 1.2 Sinusitis and antibiotics, Part I. Researchers studying the effect of antibiotic treatment for acute sinusitis compared to symptomatic treatments randomly assigned 166 adults diagnosed with acute sinusitis to one of two groups: treatment or control. Study participants received either a 10-day course of amoxicillin (an antibiotic) or a placebo similar in appearance and taste. The placebo consisted of symptomatic treatments such as acetaminophen, nasal decongestants, etc. At the end of the 10-day period patients were asked if they experienced significant improvement in symptoms. The distribution of responses is summarized below.53
(Self-reported significant improvement in symptoms columns: Yes, No, Total)
Treatment: 66, 19, 85
Control: 65, 16, 81
Total: 131, 35, 166
1. What percent of patients in the treatment group experienced a significant improvement in symptoms? What percent in the control group? 2. At first glance, which treatment appears to be more effective for sinusitis? 3. Do the data provide convincing evidence that there is a difference in the improvement rates of sinusitis symptoms? Or do you think that the observed difference might just be due to chance? 52G. Allais et al. "Ear acupuncture in the treatment of migraine attacks: a randomized trial on the efficacy of appropriate versus inappropriate acupoints". In: Neurological Sciences 32.1 (2011), pp. 173-175. 53J.M. Garbutt et al. "Amoxicillin for Acute Rhinosinusitis: A Randomized Controlled Trial". In: JAMA: The Journal of the American Medical Association 307.7 (2012), pp. 685-692. Data basics 1.3 Identify study components, Part I. Identify (i) the cases, (ii) the variables and their types, and (iii) the main research question in the studies described below. 1. Researchers collected data to examine the relationship between pollutants and preterm births in Southern California. During the study air pollution levels were measured by air quality monitoring stations. Specifically, levels of carbon monoxide were recorded in parts per million, nitrogen dioxide and ozone in parts per hundred million, and coarse particulate matter ($PM_{10}$) in $\mu g/m^3$. Length of gestation data were collected on 143,196 births between the years 1989 and 1993, and air pollution exposure during gestation was calculated for each birth.
The analysis suggested that increased ambient PM10 and, to a lesser degree, CO concentrations may be associated with the occurrence of preterm births.54 2. The Buteyko method is a shallow breathing technique developed by Konstantin Buteyko, a Russian doctor, in 1952. Anecdotal evidence suggests that the Buteyko method can reduce asthma symptoms and improve quality of life. In a scientific study to determine the effectiveness of this method, researchers recruited 600 asthma patients aged 18-69 who relied on medication for asthma treatment. These patients were split into two research groups: one practiced the Buteyko method and the other did not. Patients were scored on quality of life, activity, asthma symptoms, and medication reduction on a scale from 0 to 10. On average, the participants in the Buteyko group experienced a signi cant reduction in asthma symptoms and an improvement in quality of life.55 1.4 Identify study components, Part II. Identify (i) the cases, (ii) the variables and their types, and (iii) the main research question of the studies described below. 1. While obesity is measured based on body fat percentage (more than 35% body fat for women and more than 25% for men), precisely measuring body fat percentage is difficult. Body mass index (BMI), calculated as the ratio $\frac { weight}{height}^2$, is often used as an alternative indicator for obesity. A common criticism of BMI is that it assumes the same relative body fat percentage regardless of age, sex, or ethnicity. In order to determine how useful BMI is for predicting body fat percentage across age, sex and ethnic groups, researchers studied 202 black and 504 white adults who resided in or near New York City, were ages 20-94 years old, had BMIs of 18-35 kg/m2, and who volunteered to be a part of the study. Participants reported their age, sex, and ethnicity and were measured for weight and height. Body fat percentage was measured by submerging the participants in water.56 2. In a study of the relationship between socio-economic class and unethical behavior, 129 University of California undergraduates at Berkeley were asked to identify themselves as having low or high social-class by comparing themselves to others with the most (least) money, most (least) education, and most (least) respected jobs. They were also presented with a jar of individually wrapped candies and informed that they were for children in a nearby laboratory, but that they could take some if they wanted. Participants completed unrelated tasks and then reported the number of candies they had taken. It was found that those in the upper-class rank condition took more candy than did those in the lower-rank condition.57 54B. Ritz et al. "Effect of air pollution on preterm birth among children born in Southern California between 1989 and 1993". In: Epidemiology 11.5 (2000), pp. 502-511. 55J. McGowan. "Health Education: Does the Buteyko Institute Method make a difference?" In: Thorax 58 (2003). 56Gallagher et al. "How useful is body mass index for comparison of body fatness across age, sex, and ethnic groups?" In: American Journal of Epidemiology 143.3 (1996), pp. 228-239. 57P.K. Pi et al. "Higher social class predicts increased unethical behavior". In: Proceedings of the National Academy of Sciences (2012). 1.5 Fisher's irises. 
Sir Ronald Aylmer Fisher was an English statistician, evolutionary biologist, and geneticist who worked on a data set that contained sepal length and width, and petal length and width from three species of iris owers (setosa, versicolor and virginica). There were 50 owers from each species in the data set.58 Irises Photo by Ryan Claussen (flic.kr/p/6QTcuX) CC BY-SA 2.0 license 1. How many cases were included in the data? 2. How many numerical variables are included in the data? Indicate what they are, and if they are continuous or discrete. 3. How many categorical variables are included in the data, and what are they? List the corresponding levels (categories). 1.6 Smoking habits of UK residents. A survey was conducted to study the smoking habits of UK residents. Below is a data matrix displaying a portion of the data collected in this survey. Note that "$" stands for British Pounds Sterling, "cig" stands for cigarettes, and "N/A" refers to a missing component of the data.59 gender age marital grossIncome smoke amtWeekends amtWeekdays 1 2 3 $\vdots$ 1691 Female Male Male $\vdots$ Male 42 44 53 $\vdots$ 40 Single Single Married $\vdots$ Single Under$2,600 $10,400 to$15,600 Above $36,400 $\vdots$$2,600 to $5,200 Yes No Yes $\vdots$ Yes 12 cig/day N/A 6 cig/day $\vdots$ 8 cig/day 12 cig/day N/A 6 cig/day $\vdots$ 8 cig/day (a) What does each row of the data matrix represent? (b) How many participants were included in the survey? (c) Indicate whether each variable in the study is numerical or categorical. If numerical, identify as continuous or discrete. If categorical, indicate if the variable is ordinal. Overview of data collection principles 1.7 Generalizability and causality, Part I. Identify the population of interest and the sample in the the studies described in Exercise 1.3. Also comment on whether or not the results of the study can be generalized to the population and if the ndings of the study can be used to establish causal relationships. 1.8 Generalizability and causality, Part II. Identify the population of interest and the sample in the the studies described in Exercise 1.4. Also comment on whether or not the results of the study can be generalized to the population and if the ndings of the study can be used to establish causal relationships. 58Photo by rtclauss on Flickr, Iris.; R.A Fisher. "The Use of Multiple Measurements in Taxonomic Problems". In: Annals of Eugenics 7 (1936), pp. 179-188. 59Stats4Schools, Smoking. 1.9 GPA and study time. A survey was conducted on 218 undergraduates from Duke University who took an introductory statistics course in Spring 2012. Among many other questions, this survey asked them about their GPA and the number of hours they spent studying per week. The scatterplot below displays the relationship between these two variables. 1. (a) What is the explanatory variable and what is the response variable? 2. (b) Describe the relationship between the two variables. Make sure to discuss unusual observations, if any. 3. (c) Is this an experiment or an observational study? 4. (d) Can we conclude that studying longer hours leads to higher GPAs? 1.10 Income and education. The scatterplot below shows the relationship between per capita income (in thousands of dollars) and percent of population with a bachelor's degree in 3,143 counties in the US in 2010. 1. (a) What are the explanatory and response variables? 2. (b) Describe the relationship between the two variables. Make sure to discuss unusual observations, if any. 3. 
(c) Can we conclude that having a bachelor's degree increases one's income? Observational studies and sampling strategies 1.11 Propose a sampling strategy. A large college class has 160 students. All 160 students attend the lectures together, but the students are divided into 4 groups, each of 40 students, for lab sections administered by different teaching assistants. The professor wants to conduct a survey about how satis ed the students are with the course, and he believes that the lab section a student is in might affect the student's overall satisfaction with the course. 1. (a) What type of study is this? 2. (b) Suggest a sampling strategy for carrying out this study. 1.12 Internet use and life expectancy. The scatterplot below shows the relationship between estimated life expectancy at birth as of 201260 and percentage of internet users in 201061 in 208 countries. 1. (a) Describe the relationship between life expectancy and percentage of internet users. 2. (b) What type of study is this? 3. (c) State a possible confounding variable that might explain this relationship and describe its potential effect. 1.13 Random digit dialing. The Gallup Poll uses a procedure called random digit dialing, which creates phone numbers based on a list of all area codes in America in conjunction with the associated number of residential households in each area code. Give a possible reason the Gallup Poll chooses to use random digit dialing instead of picking phone numbers from the phone book. 1.14 Sampling strategies. A statistics student who is curious about the relationship between the amount of time students spend on social networking sites and their performance at school decides to conduct a survey. Three research strategies for collecting data are described below. In each, name the sampling method proposed and any bias you might expect. 1. (a) He randomly samples 40 students from the study's population, gives them the survey, asks them to ll it out and bring it back the next day. 2. (b) He gives out the survey only to his friends, and makes sure each one of them lls out the survey. 3. (c) He posts a link to an online survey on his Facebook wall and asks his friends to ll out the survey. 1.15 Family size. Suppose we want to estimate family size, where family is de ned as one or more parents living with children. If we select students at random at an elementary school and ask them what their family size is, will our average be biased? If so, will it overestimate or underestimate the true value? 60CIA Factbook, Country Comparison: Life Expectancy at Birth, 2012. 61ITU World Telecommunication/ICT Indicators database, World Telecommunication/ICT Indicators Database, 2012. 1.16 Flawed reasoning. Identify the aw in reasoning in the following scenarios. Explain what the individuals in the study should have done differently if they wanted to make such strong conclusions. 1. (a) Students at an elementary school are given a questionnaire that they are required to return after their parents have completed it. One of the questions asked is, "Do you nd that your work schedule makes it difficult for you to spend time with your kids after school?" Of the parents who replied, 85% said "no". Based on these results, the school officials conclude that a great majority of the parents have no difficulty spending time with their kids after school. 2. (b) A survey is conducted on a simple random sample of 1,000 women who recently gave birth, asking them about whether or not they smoked during pregnancy. 
A follow-up survey asking if the children have respiratory problems is conducted 3 years later, however, only 567 of these women are reached at the same address. The researcher reports that these 567 women are representative of all mothers. 3. (c) A orthopedist administers a questionnaire to 30 of his patients who do not have any joint problems and nds that 20 of them regularly go running. He concludes that running decreases the risk of joint problems. 1.17 Reading the paper. Below are excerpts from two articles published in the NY Times: (a) An article called Risks: Smokers Found More Prone to Dementia states the following:62 "Researchers analyzed the data of 23,123 health plan members who participated in a voluntary exam and health behavior survey from 1978 to 1985, when they were 50 to 60 years old. Twenty-three years later, about one-quarter of the group, or 5,367, had dementia, including 1,136 with Alzheimers disease and 416 with vascular dementia. After adjusting for other factors, the researchers concluded that pack-a-day smokers were 37 percent more likely than nonsmokers to develop dementia, and the risks went up sharply with increased smoking; 44 percent for one to two packs a day; and twice the risk for more than two packs." Based on this study, can we conclude that smoking causes dementia later in life? Explain your reasoning. (b) Another article called The School Bully Is Sleepy states the following:63 "The University of Michigan study, collected survey data from parents on each child's sleep habits and asked both parents and teachers to assess behavioral concerns. About a third of the students studied were identi ed by parents or teachers as having problems with disruptive behavior or bullying. The researchers found that children who had behavioral issues and those who were identi ed as bullies were twice as likely to have shown symptoms of sleep disorders." A friend of yours who read the article says, "The study shows that sleep disorders lead to bullying in school children." Is this statement justi ed? If not, how best can you describe the conclusion that can be drawn from this study? 1.18 Shyness on Facebook. Given the anonymity a afforded to individuals in online interactions, researchers hypothesized that shy individuals would have more favorable attitudes toward Facebook and that shyness would be positively correlated with time spent on Facebook. They also hypothesized that shy individuals would have fewer Facebook "Friends" just like they have fewer friends than non-shy individuals have in the offine world. Data were collected on 103 undergraduate students at a university in southwestern Ontario via online questionnaires. The study states "Participants were recruited through the university's psychology participation pool. After indicating an interest in the study, participants were sent an e-mail containing the study's URL as well as the necessary login credentials." Are the results of this study generalizable to the population of all Facebook users?64 62R.C. Rabin. "Risks: Smokers Found More Prone to Dementia". In: New York Times (2010). 63T. Parker-Pope. "The School Bully Is Sleepy". In: New York Times (2011). 64E.S. Orr et al. "The inuence of shyness on the use of Facebook in an undergraduate sample". In: CyberPsychology & Behavior 12.3 (2009), pp. 337-340. Experiments 1.19 Vitamin supplements. 
In order to assess the effectiveness of taking large doses of vitamin C in reducing the duration of the common cold, researchers recruited 400 healthy volunteers from staff and students at a university. A quarter of the patients were assigned a placebo, and the rest were evenly divided between 1g Vitamin C, 3g Vitamin C, or 3g Vitamin C plus additives to be taken at onset of a cold for the following two days. All tablets had identical appearance and packaging. The nurses who handed the prescribed pills to the patients knew which patient received which treatment, but the researchers assessing the patients when they were sick did not. No significant differences were observed in any measure of cold duration or severity between the four medication groups, and the placebo group had the shortest duration of symptoms.65 1. (a) Was this an experiment or an observational study? Why? 2. (b) What are the explanatory and response variables in this study? 3. (c) Were the patients blinded to their treatment? 4. (d) Was this study double-blind? 5. (e) Participants are ultimately able to choose whether or not to use the pills prescribed to them. We might expect that not all of them will adhere and take their pills. Does this introduce a confounding variable to the study? Explain your reasoning. 1.20 Soda preference. You would like to conduct an experiment in class to see if your classmates prefer the taste of regular Coke or Diet Coke. Briey outline a design for this study. 1.21 Exercise and mental health. A researcher is interested in the effects of exercise on mental health and he proposes the following study: Use stratified random sampling to ensure representative proportions of 18-30, 31-40 and 41-55 year olds from the population. Next, randomly assign half the subjects from each age group to exercise twice a week, and instruct the rest not to exercise. Conduct a mental health exam at the beginning and at the end of the study, and compare the results. 1. (a) What type of study is this? 2. (b) What are the treatment and control groups in this study? 3. (c) Does this study make use of blocking? If so, what is the blocking variable? 4. (d) Does this study make use of blinding? 5. (e) Comment on whether or not the results of the study can be used to establish a causal relationship between exercise and mental health, and indicate whether or not the conclusions can be generalized to the population at large. 6. (f) Suppose you are given the task of determining if this proposed study should get funding. Would you have any reservations about the study proposal? 65C. Audera et al. "Mega-dose vitamin C in treatment of the common cold: a randomised controlled trial". In: Medical Journal of Australia 175.7 (2001), pp. 359-362. 1.22 Chia seeds and weight loss. Chia Pets - those terra-cotta gurines that sprout fuzzy green hair - made the chia plant a household name. But chia has gained an entirely new reputation as a diet supplement. In one 2009 study, a team of researchers recruited 38 men and divided them evenly into two groups: treatment or control. They also recruited 38 women, and they randomly placed half of these participants into the treatment group and the other half into the control group. One group was given 25 grams of chia seeds twice a day, and the other was given a placebo. The subjects volunteered to be a part of the study. After 12 weeks, the scientists found no significant difference between the groups in appetite or weight loss.66 1. (a) What type of study is this? 2. 
(b) What are the experimental and control treatments in this study? 3. (c) Has blocking been used in this study? If so, what is the blocking variable? 4. (d) Has blinding been used in this study? 5. (e) Comment on whether or not we can make a causal statement, and indicate whether or not we can generalize the conclusion to the population at large. Examining numerical data 1.23 Mammal life spans. Data were collected on life spans (in years) and gestation lengths (in days) for 62 mammals. A scatterplot of life span versus length of gestation is shown below.67 1. (a) What type of an association is apparent between life span and length of gestation? 2. (b) What type of an association would you expect to see if the axes of the plot were reversed, i.e. if we plotted length of gestation versus life span? 3. (c) Are life span and length of gestation independent? Explain your reasoning. 1.24 Office productivity. Office productivity is relatively low when the employees feel no stress about their work or job security. However, high levels of stress can also lead to reduced employee productivity. Sketch a plot to represent the relationship between stress and productivity. 66D.C. Nieman et al. "Chia seed does not promote weight loss or alter disease risk factors in overweight adults". In: Nutrition Research 29.6 (2009), pp. 414-418. 67T. Allison and D.V. Cicchetti. "Sleep in mammals: ecological and constitutional correlates". In: Arch. Hydrobiol 75 (1975), p. 442. 1.25 Associations. Indicate which of the plots show a 1. (a) positive association 2. (b) negative association 3. (c) no association Also determine if the positive and negative associations are linear or nonlinear. Each part may refer to more than one plot. 1.26 Parameters and statistics. Identify which value represents the sample mean and which value represents the claimed population mean. 1. (a) A recent article in a college newspaper stated that college students get an average of 5.5 hrs of sleep each night. A student who was skeptical about this value decided to conduct a survey by randomly sampling 25 students. On average, the sampled students slept 6.25 hours per night. 2. (b) American households spent an average of about$52 in 2007 on Halloween merchandise such as costumes, decorations and candy. To see if this number had changed, researchers conducted a new survey in 2008 before industry numbers were reported. The survey included 1,500 households and found that average Halloween spending was $58 per household. 3. (c) The average GPA of students in 2001 at a private university was 3.37. A survey on a sample of 203 students from this university yielded an average GPA of 3.59 in Spring semester of 2012. 1.27 Make-up exam. In a class of 25 students, 24 of them took an exam in class and 1 student took a make-up exam the following day. The professor graded the rst batch of 24 exams and found an average score of 74 points with a standard deviation of 8.9 points. The student who took the make-up the following day scored 64 points on the exam. 1. (a) Does the new student's score increase or decrease the average score? 2. (b) What is the new average? 3. (c) Does the new student's score increase or decrease the standard deviation of the scores? 1.28 Days off at a mining plant. Workers at a particular mining site receive an average of 35 days paid vacation, which is lower than the national average. The manager of this plant is under pressure from a local union to increase the amount of paid time off. 
However, he does not want to give more days off to the workers because that would be costly. Instead he decides he should fire 10 employees in such a way as to raise the average number of days off that are reported by his employees. In order to achieve this goal, should he fire employees who have the most number of days off , least number of days off, or those who have about the average number of days off? 1.29 Smoking habits of UK residents, Part I. Exercise 1.6 introduces a data set on the smoking habits of UK residents. Below are histograms displaying the distributions of the number of cigarettes smoked on weekdays and weekends, excluding non-smokers. Describe the two distributions and compare them. 1.30 Stats scores. Below are the nal scores of 20 introductory statistics students. $79, 83, 57, 82, 94, 83, 72, 74, 73, 71,$ $66, 89, 78, 81, 78, 81, 88, 69, 77, 79$ Draw a histogram of these data and describe the distribution. 1.31 Smoking habits of UK residents, Part II. A random sample of 5 smokers from the data set discussed in Exercises 1.6 and 1.29 is provided below. gender age maritalStatus grossIncome smoke amtWeekends amtWeekdays Female Male Female Female Female 51 24 33 17 76 Married Single Married Single Married$2,600 to $5,200$10,400 to $15,600$10,400 to $15,600$5,200 to $10,400$5,200 to $10,400 Yes Yes Yes Yes Yes 20 cig/day 20 cig/day 20 cig/day 20 cig/day 20 cig/day 20 cig/day 15 cig/day 10 cig/day 15 cig/day 20 cig/day 1. (a) Find the mean amount of cigarettes smoked on weekdays and weekends by these 5 respondents. 2. (b) Find the standard deviation of the amount of cigarettes smoked on weekdays and on weekends by these 5 respondents. Is the variability higher on weekends or on weekdays? 1.32 Factory defective rate. A factory quality control manager decides to investigate the percentage of defective items produced each day. Within a given work week (Monday through Friday) the percentage of defective items produced was 2%, 1.4%, 4%, 3%, 2.2%. 1. (a) Calculate the mean for these data. 2. (b) Calculate the standard deviation for these data, showing each step in detail. 1.33 Medians and IQRs. For each part, compare distributions (1) and (2) based on their medians and IQRs. You do not need to calculate these statistics; simply state how the medians and IQRs compare. Make sure to explain your reasoning. (a) (1) 3, 5, 6, 7, 9 (2) 3, 5, 6, 7, 20 (b) (1) 3, 5, 6, 7, 9 (2) 3, 5, 8, 7, 9 (c) (1) 1, 2, 3, 4, 5 (2) 6, 7, 8, 9, 10 (d) (1) 0, 10, 50, 60, 100 (2) 0, 100, 500, 600, 1000 1.34 Means and SDs. For each part, compare distributions (1) and (2) based on their means and standard deviations. You do not need to calculate these statistics; simply state how the means and the standard deviations compare. Make sure to explain your reasoning. Hint: It may be useful to sketch dot plots of the distributions. (a) (1) 3, 5, 5, 5, 8, 11, 11, 11, 13 (2) 3, 5, 5, 5, 8, 11, 11, 11, 20 (b) (1) -20, 0, 0, 0, 15, 25, 30, 30 (2) -40, 0, 0, 0, 15, 25, 30, 30 (c) (1) 0, 2, 4, 6, 8, 10 (2) 20, 22, 24, 26, 28, 30 (d) (1) 100, 200, 300, 400, 500 (2) 0, 50, 300, 550, 600 1.35 Box plot. Create a box plot for the data given in Exercise 1.30. The ve number summary provided below may be useful. Min Q1 Q2 (Median) Q3 Max 57 72.5 78.5 82.5 94 1.36 Infant mortality. The infant mortality rate is defined as the number of infant deaths per 1,000 live births. This rate is often used as an indicator of the level of health in a country. 
The relative frequency histogram below shows the distribution of estimated infant death rates in 2012 for 222 countries.68 1. (a) Estimate Q1, the median, and Q3 from the histogram. 2. (b) Would you expect the mean of this data set to be smaller or larger than the median? Explain your reasoning. 1.37 Matching histograms and box plots. Describe the distribution in the histograms below and match them to the box plots. 68CIA Factbook, Country Comparison: Infant Mortality Rate, 2012. 1.38 Air quality. Daily air quality is measured by the air quality index (AQI) reported by the Environmental Protection Agency. This index reports the pollution level and what associated health effects might be a concern. The index is calculated for ve major air pollutants regulated by the Clean Air Act. and takes values from 0 to 300, where a higher value indicates lower air quality. AQI was reported for a sample of 91 days in 2011 in Durham, NC. The relative frequency histogram below shows the distribution of the AQI values on these days.69 1. (a) Estimate the median AQI value of this sample. 2. (b) Would you expect the mean AQI value of this sample to be higher or lower than the median? Explain your reasoning. 3. (c) Estimate Q1, Q3, and IQR for the distribution. 1.39 Histograms and box plots. Compare the two plots below. What characteristics of the distribution are apparent in the histogram and not in the box plot? What characteristics are apparent in the box plot but not in the histogram? 69US Environmental Protection Agency, AirData, 2011. 1.40 Marathon winners. The histogram and box plots below show the distribution of finishing times for male and female winners of the New York Marathon between 1980 and 1999. 1. (a) What features of the distribution are apparent in the histogram and not the box plot? What features are apparent in the box plot but not in the histogram? 2. (b) What may be the reason for the bimodal distribution? Explain. 3. (c) Compare the distribution of marathon times for men and women based on the box plot shown below. 1. (d) The time series plot shown below is another way to look at these data. Describe what is visible in this plot but not in the others. 1.41 Robust statistics. The first histogram below shows the distribution of the yearly incomes of 40 patrons at a college coffee shop. Suppose two new people walk into the coffee shop: one making$225,000 and the other $250,000. The second histogram shows the new income distribution. Summary statistics are also provided. (1) (2) n Min. 1st Qu. Median Mean 3rd Qu. Max. SD 40 60,680 63,620 65,240 65,090 66,160 69,890 2,122 42 60,680 63,710 65,350 73,300 66,540 250,000 3,7321 1. (a) Would the mean or the median best represent what we might think of as a typical income for the 42 patrons at this coffee shop? What does this say about the robustness of the two measures? 2. (b) Would the standard deviation or the IQR best represent the amount of variability in the incomes of the 42 patrons at this coffee shop? What does this say about the robustness of the two measures? 1.42 Distributions and appropriate statistics. For each of the following, describe whether you expect the distribution to be symmetric, right skewed, or left skewed. Also specify whether the mean or median would best represent a typical observation in the data, and whether the variability of observations would be best represented using the standard deviation or IQR. 1. 
(a) Housing prices in a country where 25% of the houses cost below$350,000, 50% of the houses cost below $450,000, 75% of the houses cost below$1,000,000 and there are a meaningful number of houses that cost more than $6,000,000. 2. (b) Housing prices in a country where 25% of the houses cost below$300,000, 50% of the houses cost below $600,000, 75% of the houses cost below$900,000 and very few houses that cost more than \$1,200,000. 3. (c) Number of alcoholic drinks consumed by college students in a given week. 4. (d) Annual salaries of the employees at a Fortune 500 company. 1.43 Commuting times, Part I. The histogram to the right shows the distribution of mean commuting times in 3,143 US counties in 2010. Describe the distribution and comment on whether or not a log transformation may be advisable for these data. 1.44 Hispanic population, Part I. The histogram below shows the distribution of the percentage of the population that is Hispanic in 3,143 counties in the US in 2010. Also shown is a histogram of logs of these values. Describe the distribution and comment on why we might want to use log-transformed values in analyzing or modeling these data. 1.45 Commuting times, Part II. Exercise 1.43 displays histograms of mean commuting times in 3,143 US counties in 2010. Describe the spatial distribution of commuting times using the map below. 1.46 Hispanic population, Part II. Exercise 1.44 displays histograms of the distribution of the percentage of the population that is Hispanic in 3,143 counties in the US in 2010. 1. (a) What features of this distribution are apparent in the map but not in the histogram? 2. (b) What features are apparent in the histogram but not the map? 3. (c) Is one visualization more appropriate or helpful than the other? Explain your reasoning. Considering categorical data 1.47 Antibiotic use in children. The bar plot and the pie chart below show the distribution of pre-existing medical conditions of children involved in a study on the optimal duration of antibiotic use in treatment of tracheitis, which is an upper respiratory infection. 1. (a) What features are apparent in the bar plot but not in the pie chart? 2. (b) What features are apparent in the pie chart but not in the bar plot? 3. (c) Which graph would you prefer to use for displaying these categorical data? 1.48 Views on immigration. 910 randomly sampled registered voters from Tampa, FL were asked if they thought workers who have illegally entered the US should be (i) allowed to keep their jobs and apply for US citizenship, (ii) allowed to keep their jobs as temporary guest workers but not allowed to apply for US citizenship, or (iii) lose their jobs and have to leave the country. The results of the survey by political ideology are shown below.70 Political ideology Conservative Moderate Liberal Total (i) Apply for citizenship (ii) Guest worker (iii) Leave the country (iv) Not sure 57 121 179 15 120 113 126 4 101 28 45 1 278 262 350 20 Total 372 363 175 910 1. (a) What percent of these Tampa, FL voters identify themselves as conservatives? 2. (b) What percent of these Tampa, FL voters are in favor of the citizenship option? 3. (c) What percent of these Tampa, FL voters identify themselves as conservatives and are in favor of the citizenship option? 4. (d) What percent of these Tampa, FL voters who identify themselves as conservatives are also in favor of the citizenship option? What percent of moderates and liberal share this view? 5. 
(e) Do political ideology and views on immigration appear to be independent? Explain your reasoning. 1.49 Views on the DREAM Act. The same survey from Exercise 1.48 also asked respondents if they support the DREAM Act, a proposed law which would provide a path to citizenship for people brought illegally to the US as children. Based on the mosaic plot shown on the right, are views on the DREAM Act and political ideology independent? 1.50 Heart transplants, Part I. The Stanford University Heart Transplant Study was conducted to determine whether an experimental heart transplant program increased lifespan. Each patient entering the program was designated an official heart transplant candidate, meaning that he was gravely ill and would most likely benefit from a new heart. Some patients got a transplant and some did not. The variable transplant indicates which group the patients were in; patients in the treatment group got a transplant and those in the control group did not. Another variable called survived was used to indicate whether or not the patient was alive at the end of the study. Figures may be found on the next page.71 1. (a) Based on the mosaic plot, is survival independent of whether or not the patient got a transplant? Explain your reasoning. 2. (b) What do the box plots suggest about the efficacy (effectiveness) of transplants? 70SurveyUSA, News Poll #18927, data collected Jan 27-29, 2012. 71B. Turnbull et al. "Survivorship of Heart Transplant Data". In: Journal of the American Statistical Association 69 (1974), pp. 74-80. Case study: gender discrimination 1.51 Side effects of Avandia, Part I. Rosiglitazone is the active ingredient in the controversial type 2 diabetes medicine Avandia and has been linked to an increased risk of serious cardiovascular problems such as stroke, heart failure, and death. A common alternative treatment is pioglitazone, the active ingredient in a diabetes medicine called Actos. In a nationwide retrospective observational study of 227,571 Medicare bene ciaries aged 65 years or older, it was found that 2,593 of the 67,593 patients using rosiglitazone and 5,386 of the 159,978 using pioglitazone had serious cardiovascular problems. These data are summarized in the contingency table below.72 Cardiovascular problems Yes No Total Rosiglitazone Pioglitazone 2,593 5,386 65,000 154,592 67,593 159,978 Total 7,979 219,592 227,571 Determine if each of the following statements is true or false. If false, explain why. Be careful: The reasoning may be wrong even if the statement's conclusion is correct. In such cases, the statement should be considered false. 1. (a) Since more patients on pioglitazone had cardiovascular problems (5,386 vs. 2,593), we can conclude that the rate of cardiovascular problems for those on a pioglitazone treatment is higher. 2. (b) The data suggest that diabetic patients who are taking rosiglitazone are more likely to have cardiovascular problems since the rate of incidence was (2,593 / 67,593 = 0.038) 3.8% for patients on this treatment, while it was only (5,386 / 159,978 = 0.034) 3.4% for patients on pioglitazone. 3. (c) The fact that the rate of incidence is higher for the rosiglitazone group proves that rosiglitazone causes serious cardiovascular problems. 4. (d) Based on the information provided so far, we cannot tell if the difference between the rates of incidences is due to a relationship between the two variables or due to chance. 72D.J. Graham et al. 
"Risk of acute myocardial infarction, stroke, heart failure, and death in elderly Medicare patients treated with rosiglitazone or pioglitazone". In: JAMA 304.4 (2010), p. 411. issn: 0098-7484. 1.52 Heart transplants, Part II. Exercise 1.50 introduces the Stanford Heart Transplant Study. Of the 34 patients in the control group, 4 were alive at the end of the study. Of the 69 patients in the treatment group, 24 were alive. The contingency table below summarizes these results. Group Control Treatment Total Alive Dead 4 30 24 45 28 75 Total 34 69 103 (a) What proportion of patients in the treatment group and what proportion of patients in the control group died? (b) One approach for investigating whether or not the treatment is effective is to use a randomization technique. i. What are the claims being tested? ii. The paragraph below describes the set up for such approach, if we were to do it without using statistical software. Fill in the blanks with a number or phrase, whichever is appropriate. We write alive on ------------- cards representing patients who were alive at the end of the study, and dead on --------- cards representing patients who were not. Then, we shuffle these cards and split them into two groups: one group of size ------------representing treatment, and another group of size --------------- representing control. We calculate the difference between the proportion of dead cards in the treatment and control groups (treatment - control) and record this value. We repeat this many times to build a distribution centered at ------------------ . Lastly, we calculate the fraction of simulations where the simulated differences in proportions are ----------------- . If this fraction is low, we conclude that it is unlikely to have observed such an outcome by chance and that the null hypothesis (independence model) should be rejected in favor of the alternative. iii. What do the simulation results shown below suggest about the effectiveness of the transplant program? 1.53 Side effects of Avandia, Part II. Exercise 1.51 introduces a study that compares the rates of serious cardiovascular problems for diabetic patients on rosiglitazone and pioglitazone treatments. The table below summarizes the results of the study. Cardiovascular problems Yes No Total Rosiglitazone Pioglitazone 2,593 5,386 65,000 154,592 67,593 159,978 Total 7,979 219,592 227,571 1. (a) What proportion of all patients had cardiovascular problems? 2. (b) If the type of treatment and having cardiovascular problems were independent, about how many patients in the rosiglitazone group would we expect to have had cardiovascular problems? 3. (c) We can investigate the relationship between outcome and treatment in this study using a randomization technique. While in reality we would carry out the simulations required for randomization using statistical software, suppose we actually simulate using index cards. In order to simulate from the independence model, which states that the outcomes were independent of the treatment, we write whether or not each patient had a cardiovascular problem on cards, shuffled all the cards together, then deal them into two groups of size 67,593 and 159,978. We repeat this simulation 1,000 times and each time record the number of people in the rosiglitazone group who had cardiovascular problems. Below is a relative frequency histogram of these counts. 1. i. What are the claims being tested? 2. ii. 
Compared to the number calculated in part (b), which would provide more support for the alternative hypothesis, more or fewer patients with cardiovascular problems in the rosiglitazone group? 3. iii. What do the simulation results suggest about the relationship between taking rosiglitazone and having cardiovascular problems in diabetic patients? 1.54 Sinusitis and antibiotics, Part II. Researchers studying the effect of antibiotic treatment compared to symptomatic treatment for acute sinusitis randomly assigned 166 adults diagnosed with sinusitis into two groups (as discussed in Exercise 1.2). Participants in the antibiotic group received a 10-day course of an antibiotic, and the rest received symptomatic treatments as a placebo. These pills had the same taste and packaging as the antibiotic. At the end of the 10-day period patients were asked if they experienced improvement in symptoms since the beginning of the study. The distribution of responses is summarized below.73 Self-reported improvement in symptoms (Yes / No / Total): Antibiotic 66 / 19 / 85; Placebo 65 / 16 / 81; Total 131 / 35 / 166. 1. (a) What type of study is this? 2. (b) Does this study make use of blinding? 3. (c) At first glance, does antibiotic or placebo appear to be more effective for the treatment of sinusitis? Explain your reasoning using appropriate statistics. 4. (d) There are two competing claims that this study is used to compare: the independence model and the alternative model. Write out these competing claims in easy-to-understand language and in the context of the application. Hint: The researchers are studying the effectiveness of antibiotic treatment. 5. (e) Based on your finding in (c), does the evidence favor the alternative model? If not, then explain why. If so, what would you do to check whether this is strong evidence? 73J.M. Garbutt et al. "Amoxicillin for Acute Rhinosinusitis: A Randomized Controlled Trial". In: JAMA: The Journal of the American Medical Association 307.7 (2012), pp. 685-692.
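Exercises 1.52 through 1.54 describe testing an independence model with a card-shuffling randomization. A minimal Python sketch of that procedure is shown below, using the heart transplant counts from Exercise 1.52 (75 "dead" cards, 28 "alive" cards, groups of sizes 69 and 34); the random seed, the 1,000 repetitions, and the one-sided "at least as extreme" rule are our own illustrative choices, not specifications from the text.

```python
import random

# Randomization technique sketched in Exercises 1.52-1.54, applied to the
# Stanford heart transplant counts: 75 "dead" and 28 "alive" cards are
# shuffled and dealt into a treatment group of 69 and a control group of 34.
random.seed(1)

cards = ["dead"] * 75 + ["alive"] * 28
observed_diff = 45 / 69 - 30 / 34   # P(dead | treatment) - P(dead | control) in the data

simulated_diffs = []
for _ in range(1000):
    random.shuffle(cards)
    treatment, control = cards[:69], cards[69:]
    diff = treatment.count("dead") / 69 - control.count("dead") / 34
    simulated_diffs.append(diff)

# Fraction of shuffles giving a difference at least as favorable to treatment
# as the one observed; a small fraction casts doubt on the independence model.
p_value = sum(1 for d in simulated_diffs if d <= observed_diff) / len(simulated_diffs)
print(round(observed_diff, 3), p_value)
```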
Probability forms a foundation for statistics. You might already be familiar with many aspects of probability; however, formalization of the concepts is new for most. This chapter aims to introduce probability on familiar terms using processes most people have seen before. 02: Probability Example 2.1 A "die", the singular of dice, is a cube with six faces numbered 1, 2, 3, 4, 5, and 6. What is the chance of getting 1 when rolling a die? If the die is fair, then the chance of a 1 is as good as the chance of any other number. Since there are six outcomes, the chance must be 1-in-6 or, equivalently, 1/6. Example 2.2 What is the chance of getting a 1 or 2 in the next roll? 1 and 2 constitute two of the six equally likely possible outcomes, so the chance of getting one of these two outcomes must be 2/6 = 1/3. Example 2.3 What is the chance of getting either 1, 2, 3, 4, 5, or 6 on the next roll? 100%. The outcome must be one of these numbers. Example 2.4 What is the chance of not rolling a 2? Since the chance of rolling a 2 is $\frac {1}{6}$ or $16.\bar {6}$%, the chance of not rolling a 2 must be 100% - $16.\bar {6}$% = $83.\bar {3}$% or $\frac {5}{6}$. Alternatively, we could have noticed that not rolling a 2 is the same as getting a 1, 3, 4, 5, or 6, which makes up five of the six equally likely outcomes and has probability $\frac {5}{6}$. Example 2.5 Consider rolling two dice. If $\frac {1}{6}^{th}$ of the time the first die is a 1 and $\frac {1}{6}^{th}$ of those times the second die is a 1, what is the chance of getting two 1s? If $16.\bar {6}$% of the time the first die is a 1 and $\frac {1}{6}^{th}$ of those times the second die is also a 1, then the chance that both dice are 1 is $\frac {1}{6} \times \frac {1}{6}$ or $\frac {1}{36}$. Probability We use probability to build tools to describe and understand apparent randomness. We often frame probability in terms of a random process giving rise to an outcome. $\text {Roll a die} \rightarrow \text {1, 2, 3, 4, 5, or 6}$ $\text {Flip a coin} \rightarrow \text {H or T}$ Rolling a die or flipping a coin is a seemingly random process and each gives rise to an outcome. Probability The probability of an outcome is the proportion of times the outcome would occur if we observed the random process an infinite number of times. Probability is defined as a proportion, and it always takes values between 0 and 1 (inclusive). It may also be displayed as a percentage between 0% and 100%. Probability can be illustrated by rolling a die many times. Let $\hat {p}_n$ be the proportion of outcomes that are 1 after the first n rolls. As the number of rolls increases, $\hat {p}_n$ will converge to the probability of rolling a 1, p = $\frac {1}{6}$. Figure 2.1 shows this convergence for 100,000 die rolls. The tendency of $\hat {p}_n$ to stabilize around p is described by the Law of Large Numbers. Law of Large Numbers As more observations are collected, the proportion $\hat {p}_n$ of occurrences with a particular outcome converges to the probability p of that outcome. Occasionally the proportion will veer off from the probability and appear to defy the Law of Large Numbers, as $\hat {p}_n$ does many times in Figure 2.1. However, these deviations become smaller as the number of rolls increases. Above we write p as the probability of rolling a 1. We can also write this probability as $\text {P (rolling a 1)}$ As we become more comfortable with this notation, we will abbreviate it further. For instance, if it is clear that the process is "rolling a die", we could abbreviate P(rolling a 1) as P(1).
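The Law of Large Numbers can also be seen numerically with a quick computer simulation. The short Python sketch below is only an illustration of the idea (the seed, the checkpoints, and the variable names are our own choices; Figure 2.1 in the text was produced separately): it rolls a virtual die and prints the running proportion of 1s, which settles near p = 1/6 as n grows.

```python
import random

random.seed(2)                      # fixed seed so the run is reproducible
n_rolls = 100_000                   # same order of magnitude as Figure 2.1
checkpoints = {10, 100, 1_000, 10_000, 100_000}

count_ones = 0
for n in range(1, n_rolls + 1):
    if random.randint(1, 6) == 1:   # one fair die roll; "success" means rolling a 1
        count_ones += 1
    if n in checkpoints:
        p_hat = count_ones / n      # running proportion of 1s after n rolls
        print(f"after {n:>6} rolls: p_hat = {p_hat:.4f}   (p = {1/6:.4f})")
```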
Exercise 2.6 Random processes include rolling a die and flipping a coin. (a) Think of another random process. (b) Describe all the possible outcomes of that process. For instance, rolling a die is a random process with potential outcomes 1, 2, ..., 6.1 What we think of as random processes are not necessarily random, but they may just be too difficult to understand exactly. The fourth example in the footnote solution to Exercise 2.6 suggests a roommate's behavior is a random process. However, even if a roommate's behavior is not truly random, modeling her behavior as a random process can be useful. TIP: Modeling a process as random It can be helpful to model a process as random even if it is not truly random. Disjoint or Mutually Exclusive Outcomes Two outcomes are called disjoint or mutually exclusive if they cannot both happen. For instance, if we roll a die, the outcomes 1 and 2 are disjoint since they cannot both occur. On the other hand, the outcomes 1 and "rolling an odd number" are not disjoint since both occur if the outcome of the roll is a 1. The terms disjoint and mutually exclusive are equivalent and interchangeable. Calculating the probability of disjoint outcomes is easy. When rolling a die, the outcomes 1 and 2 are disjoint, and we compute the probability that one of these outcomes will occur by adding their separate probabilities: $P ( 1 or 2) = P(1) + P(2) = \frac {1}{6} + \frac {1}{6} = \frac {1}{3}$ What about the probability of rolling a 1, 2, 3, 4, 5, or 6? Here again, all of the outcomes are disjoint so we add the probabilities: \begin{align} \text {P (1 or 2 or 3 or 4 or 5 or 6)} &= P(1) + P(2) + P(3) + P(4) + P(5) + P(6) \[5pt] &= \frac {1}{6} + \frac {1}{6} + \frac {1}{6} + \frac {1}{6} + \frac {1}{6} + \frac {1}{6} \[5pt] &= 1 \end{align} The Addition Rule guarantees the accuracy of this approach when the outcomes are disjoint. 1Here are four examples. (i) Whether someone gets sick in the next month or not is an apparently random process with outcomes sick and not. (ii) We can generate a random process by randomly picking a person and measuring that person's height. The outcome of this process will be a positive number. (iii) Whether the stock market goes up or down next week is a seemingly random process with possible outcomes up, down, and no_change. Alternatively, we could have used the percent change in the stock market as a numerical outcome. (iv) Whether your roommate cleans her dishes tonight probably seems like a random process with possible outcomes cleans dishes and leaves dishes. Addition Rule of disjoint outcomes If A1 and A2 represent two disjoint outcomes, then the probability that one of them occurs is given by $P (A_1 or A_2 ) = P (A_1) + P (A_2)$ If there are many disjoint outcomes A1, ..., Ak, then the probability that one of these outcomes will occur is $P (A_1) + P( A_2) + \dots + P (A_k) \label {(2.7)}$ Exercise 2.8 We are interested in the probability of rolling a 1, 4, or 5. (a) Explain why the outcomes 1, 4, and 5 are disjoint. (b) Apply the Addition Rule for disjoint outcomes to determine P(1 or 4 or 5).2 Exercise 2.9 In the email data set in Chapter 1, the number variable described whether no number (labeled none), only one or more small numbers (small), or whether at least one big number appeared in an email (big). Of the 3,921 emails, 549 had no numbers, 2,827 had only one or more small numbers, and 545 had at least one big number. (a) Are the outcomes none, small, and big disjoint? 
(b) Determine the proportion of emails with value small and big separately. (c) Use the Addition Rule for disjoint outcomes to compute the probability a randomly selected email from the data set has a number in it, small or big.3 Statisticians rarely work with individual outcomes and instead consider sets or collections of outcomes. Let A represent the event where a die roll results in 1 or 2 and B represent the event that the die roll is a 4 or a 6. We write A as the set of outcomes {1, 2} and B = {4, 6}. These sets are commonly called events. Because A and B have no elements in common, they are disjoint events. A and B are represented in Figure 2.2. The Addition Rule applies to both disjoint outcomes and disjoint events. The probability that one of the disjoint events A or B occurs is the sum of the separate probabilities: $P(A \text{ or } B) = P(A) + P(B) = \frac {1}{3} + \frac {1}{3} = \frac {2}{3}$ Exercise 2.10 (a) Verify the probability of event A, P(A), is $\frac {1}{3}$ using the Addition Rule. (b) Do the same for event B.4 Table 2.3: Representations of the 52 unique cards in a deck. 2$\clubsuit$ 3$\clubsuit$ 4$\clubsuit$ 5$\clubsuit$ 6$\clubsuit$ 7$\clubsuit$ 8$\clubsuit$ 9$\clubsuit$ 10$\clubsuit$ J$\clubsuit$ Q$\clubsuit$ K$\clubsuit$ A$\clubsuit$ 2$\diamondsuit$ 3$\diamondsuit$ 4$\diamondsuit$ 5$\diamondsuit$ 6$\diamondsuit$ 7$\diamondsuit$ 8$\diamondsuit$ 9$\diamondsuit$ 10$\diamondsuit$ J$\diamondsuit$ Q$\diamondsuit$ K$\diamondsuit$ A$\diamondsuit$ 2$\heartsuit$ 3$\heartsuit$ 4$\heartsuit$ 5$\heartsuit$ 6$\heartsuit$ 7$\heartsuit$ 8$\heartsuit$ 9$\heartsuit$ 10$\heartsuit$ J$\heartsuit$ Q$\heartsuit$ K$\heartsuit$ A$\heartsuit$ 2$\spadesuit$ 3$\spadesuit$ 4$\spadesuit$ 5$\spadesuit$ 6$\spadesuit$ 7$\spadesuit$ 8$\spadesuit$ 9$\spadesuit$ 10$\spadesuit$ J$\spadesuit$ Q$\spadesuit$ K$\spadesuit$ A$\spadesuit$ Exercise 2.11 (a) Using Figure 2.2 as a reference, what outcomes are represented by event D? (b) Are events B and D disjoint? (c) Are events A and D disjoint?5 Exercise 2.12 In Exercise 2.11, you confirmed B and D from Figure 2.2 are disjoint. Compute the probability that either event B or event D occurs.6 Probabilities when events are not disjoint Let's consider calculations for two events that are not disjoint in the context of a regular deck of 52 cards, represented in Table 2.3. If you are unfamiliar with the cards in a regular deck, please see the footnote.7 Exercise 2.13 (a) What is the probability that a randomly selected card is a diamond? (b) What is the probability that a randomly selected card is a face card?8 Venn diagrams are useful when outcomes can be categorized as "in" or "out" for two or three variables, attributes, or random processes. The Venn diagram in Figure 2.4 uses a circle to represent diamonds and another to represent face cards. If a card is both a diamond and a face card, it falls into the intersection of the circles. If it is a diamond but not a face card, it will be in the part of the left circle that is not in the right circle (and so on). The total number of cards that are diamonds is given by the total number of cards in the diamonds circle: 10 + 3 = 13. The probabilities are also shown (e.g. $\frac {10}{52} = 0.1923$). Exercise 2.14 Using the Venn diagram, verify P(face card) = $\frac {12}{52} = \frac {3}{13}$.9 Let A represent the event that a randomly selected card is a diamond and B represent the event that it is a face card. How do we compute P(A or B)?
Events A and B are not disjoint (the cards J$\diamondsuit$, Q$\diamondsuit$, and K$\diamondsuit$ fall into both categories), so we cannot use the Addition Rule for disjoint events. Instead we use the Venn diagram. We start by adding the probabilities of the two events: $P(A) + P(B) = P(\diamondsuit) + P(\text{face card}) = \frac {13}{52} + \frac {12}{52}$ However, the three cards that are in both events were counted twice, once in each probability. We must correct this double counting: $P(A \text{ or } B) = P(\text{face card or } \diamondsuit)$ $= P(\text{face card}) + P(\diamondsuit) - P(\text{face card and } \diamondsuit) \label{2.15}$ $= \frac {12}{52} + \frac {13}{52} - \frac {3}{52}$ $= \frac {22}{52} = \frac {11}{26}$ Equation \ref{2.15} is an example of the General Addition Rule. General Addition Rule If A and B are any two events, disjoint or not, then the probability that at least one of them will occur is $P(A \text{ or } B) = P(A) + P(B) - P(A \text{ and } B) \label {(2.16)}$ where P(A and B) is the probability that both events occur. TIP: "or" is inclusive When we write "or" in statistics, we mean "and/or" unless we explicitly state otherwise. Thus, A or B occurs means A, B, or both A and B occur. Exercise 2.17 (a) If A and B are disjoint, describe why this implies P(A and B) = 0. (b) Using part (a), verify that the General Addition Rule simplifies to the simpler Addition Rule for disjoint events if A and B are disjoint.10 10(a) If A and B are disjoint, A and B can never occur simultaneously. (b) If A and B are disjoint, then the last term of Equation (2.16) is 0 (see part (a)) and we are left with the Addition Rule for disjoint events. Exercise 2.18 In the email data set with 3,921 emails, 367 were spam, 2,827 contained some small numbers but no big numbers, and 168 had both characteristics. Create a Venn diagram for this setup.11 Exercise 2.19 (a) Use your Venn diagram from Exercise 2.18 to determine the probability a randomly drawn email from the email data set is spam and had small numbers (but not big numbers). (b) What is the probability that the email had either of these attributes?12 Probability Distributions A probability distribution is a table of all disjoint outcomes and their associated probabilities. Table 2.5 shows the probability distribution for the sum of two dice. Table 2.5: Probability distribution for the sum of two dice. Dice sum: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12. Probability: 1/36, 2/36, 3/36, 4/36, 5/36, 6/36, 5/36, 4/36, 3/36, 2/36, 1/36. Rules for probability distributions A probability distribution is a list of the possible outcomes with corresponding probabilities that satisfies three rules: 1. The outcomes listed must be disjoint. 2. Each probability must be between 0 and 1. 3. The probabilities must total 1. Exercise 2.20 Table 2.6 suggests three distributions for household income in the United States. Only one is correct. Which one must it be? What is wrong with the other two?13 Chapter 1 emphasized the importance of plotting data to provide quick summaries. Probability distributions can also be summarized in a bar plot. For instance, the distribution of US household incomes is shown in Figure 2.7 as a bar plot.14 The probability distribution for the sum of two dice is shown in Table 2.5 and plotted in Figure 2.8. In these bar plots, the bar heights represent the probabilities of outcomes. If the outcomes are numerical and discrete, it is usually (visually) convenient to make a bar plot that resembles a histogram, as in the case of the sum of two dice.
Another example of plotting the bars at their respective locations is shown in Figure 2.20 on page 96. 11Both the counts and corresponding probabilities (e.g. 2659/3921 = 0.678) are shown. Notice that the number of emails represented in the left circle corresponds to 2659 + 168 = 2827, and the number represented in the right circle is 168 + 199 = 367. 12(a) The solution is represented by the intersection of the two circles: 0.043. (b) This is the sum of the three disjoint probabilities shown in the circles: 0.678 + 0.043 + 0.051 = 0.772. 13The probabilities of (a) do not sum to 1. The second probability in (b) is negative. This leaves (c), which sure enough satisfies the requirements of a distribution. One of the three was said to be the actual distribution of US household incomes, so it must be (c). 14It is also possible to construct a distribution plot when income is not artificially binned into four groups. Continuous distributions are considered in Section 2.5. Table 2.6: Proposed distributions of US household incomes (Exercise 2.20). Income range (\$1000s): 0-25, 25-50, 50-100, 100+. (a): 0.18, 0.39, 0.33, 0.16. (b): 0.38, -0.27, 0.52, 0.37. (c): 0.28, 0.27, 0.29, 0.16. Complement of an Event Rolling a die produces a value in the set {1, 2, 3, 4, 5, 6}. This set of all possible outcomes is called the sample space (S) for rolling a die. We often use the sample space to examine the scenario where an event does not occur. Let D = {2, 3} represent the event that the outcome of a die roll is 2 or 3. Then the complement of D represents all outcomes in our sample space that are not in D, which is denoted by $D^c$ = {1, 4, 5, 6}. That is, $D^c$ is the set of all possible outcomes not already included in D. Figure 2.9 shows the relationship between D, $D^c$, and the sample space S. Exercise 2.21 (a) Compute $P(D^c)$ = P(rolling a 1, 4, 5, or 6). (b) What is P(D) + $P(D^c)$?15 Exercise 2.22 Events A = {1, 2} and B = {4, 6} are shown in Figure 2.2 on page 71. (a) Write out what $A^c$ and $B^c$ represent. (b) Compute $P(A^c)$ and $P(B^c)$. (c) Compute P(A) + $P(A^c)$ and P(B) + $P(B^c)$.16 A complement of an event A is constructed to have two very important properties: (i) every possible outcome not in A is in $A^c$, and (ii) A and $A^c$ are disjoint. Property (i) implies $P(A \text{ or } A^c) = 1 \label {2.23}$ That is, if the outcome is not in A, it must be represented in $A^c$. We use the Addition Rule for disjoint events to apply Property (ii): $P(A \text{ or } A^c) = P(A) + P(A^c) \label {2.24}$ Combining Equations (2.23) and (2.24) yields a very useful relationship between the probability of an event and its complement. Definition: Complement The complement of event A is denoted $A^c$, and $A^c$ represents all outcomes not in A. A and $A^c$ are mathematically related: $P(A) + P(A^c) = 1, \text{ i.e. } P(A) = 1 - P(A^c) \label {2.25}$ In simple examples, computing $A$ or $A^c$ is feasible in a few steps. However, using the complement can save a lot of time as problems grow in complexity. 15(a) The outcomes are disjoint and each has probability 1/6, so the total probability is 4/6 = 2/3. (b) We can also see that P(D) = 1/6 + 1/6 = 1/3. Since D and $D^c$ are disjoint, P(D) + $P(D^c)$ = 1. 16Brief solutions: (a) $A^c$ = {3, 4, 5, 6} and $B^c$ = {1, 2, 3, 5}. (b) Noting that each outcome is disjoint, add the individual outcome probabilities to get $P(A^c)$ = 2/3 and $P(B^c)$ = 2/3. (c) A and $A^c$ are disjoint, and the same is true of B and $B^c$. Therefore, P(A) + $P(A^c)$ = 1 and P(B) + $P(B^c)$ = 1.
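To make the complement relationship concrete, the brief Python sketch below (our own illustration; the event and variable names are arbitrary) enumerates the sample space of a fair die and checks that P(D) + $P(D^c)$ = 1 for the event D = {2, 3}.

```python
# Check the complement rule P(A) + P(A^c) = 1 by enumerating a die's sample space.
sample_space = {1, 2, 3, 4, 5, 6}
D = {2, 3}                          # the event "roll a 2 or a 3"
D_complement = sample_space - D     # set difference gives {1, 4, 5, 6}

def p(event):
    # With equally likely outcomes, probability = (# outcomes in event) / (# outcomes total).
    return len(event) / len(sample_space)

print(p(D))                       # 0.333..., i.e. 1/3
print(p(D_complement))            # 0.666..., i.e. 2/3
print(p(D) + p(D_complement))     # 1.0, as Equation (2.25) requires
```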
Exercise 2.26 Let A represent the event where we roll two dice and their total is less than 12. (a) What does the event $A^c$ represent? (b) Determine $P(A^c)$ from Table 2.5 on page 74. (c) Determine P(A).17 Exercise 2.27 Consider again the probabilities from Table 2.5 and rolling two dice. Find the following probabilities: (a) The sum of the dice is not 6. (b) The sum is at least 4. That is, determine the probability of the event B = {4, 5, ..., 12}. (c) The sum is no more than 10. That is, determine the probability of the event D = {2, 3, ..., 10}.18 Independence Just as variables and observations can be independent, random processes can be independent, too. Two processes are independent if knowing the outcome of one provides no useful information about the outcome of the other. For instance, flipping a coin and rolling a die are two independent processes; knowing the coin was heads does not help determine the outcome of a die roll. On the other hand, stock prices usually move up or down together, so they are not independent. Example 2.5 provides a basic example of two independent processes: rolling two dice. We want to determine the probability that both will be 1. Suppose one of the dice is red and the other white. If the outcome of the red die is a 1, it provides no information about the outcome of the white die. We first encountered this same question in Example 2.5 (page 68), where we calculated the probability using the following reasoning: $\frac {1}{6}^{th}$ of the time the red die is a 1, and $\frac {1}{6}^{th}$ of those times the white die will also be 1. This is illustrated in Figure 2.10. Because the rolls are independent, the probabilities of the corresponding outcomes can be multiplied to get the final answer: $\frac {1}{6} \times \frac {1}{6} = \frac {1}{36}$. This can be generalized to many independent processes. 17(a) The complement of A: when the total is equal to 12. (b) $P(A^c)$ = 1/36. (c) Use the probability of the complement from part (b), $P(A^c)$ = 1/36, and Equation (2.25): P(less than 12) = 1 - P(12) = 1 - 1/36 = 35/36. 18(a) First find P(6) = 5/36, then use the complement: P(not 6) = 1 - P(6) = 31/36. (b) First find the complement, which requires much less effort: P(2 or 3) = 1/36 + 2/36 = 1/12. Then calculate P(B) = 1 - $P(B^c)$ = 1 - 1/12 = 11/12. (c) As before, finding the complement is the clever way to determine P(D). First find $P(D^c)$ = P(11 or 12) = 2/36 + 1/36 = 1/12. Then calculate P(D) = 1 - $P(D^c)$ = 11/12.
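The two-dice calculations above and the distribution in Table 2.5 can be verified by brute-force enumeration of the 36 equally likely ordered outcomes. The Python sketch below is our own illustration of that check (the use of exact fractions is a convenience, not something prescribed by the text).

```python
from collections import Counter
from fractions import Fraction

# Enumerate all 36 equally likely (red, white) outcomes for two fair dice.
outcomes = [(red, white) for red in range(1, 7) for white in range(1, 7)]

# Probability that both dice show a 1, as in Example 2.5.
both_ones = sum(1 for red, white in outcomes if red == 1 and white == 1)
print(Fraction(both_ones, len(outcomes)))          # 1/36

# Probability of each possible sum, which reproduces Table 2.5.
sum_counts = Counter(red + white for red, white in outcomes)
for total in range(2, 13):
    print(total, Fraction(sum_counts[total], len(outcomes)))
```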
Multiplication Rule for independent processes If A and B represent events from two different and independent processes, then the probability that both A and B occur can be calculated as the product of their separate probabilities: $P(A \text{ and } B) = P(A) \times P(B) \label {(2.29)}$ Similarly, if there are k events $A_1, ..., A_k$ from k independent processes, then the probability they all occur is $P (A_1) \times P (A_2) \times \dots \times P (A_k)$ Exercise: About 9% of people are left-handed. Suppose 2 people are selected at random from the U.S. population. Because the sample size of 2 is very small relative to the population, it is reasonable to assume these two people are independent. 1. (a) What is the probability that both are left-handed? 2. (b) What is the probability that both are right-handed?19 Solution (a) The probability the first person is left-handed is 0.09, which is the same for the second person. We apply the Multiplication Rule for independent processes to determine the probability that both will be left-handed: $0.09 \times 0.09 = 0.0081$. (b) It is reasonable to assume the proportion of people who are ambidextrous (both right and left handed) is nearly 0, which results in P(right-handed) = 1 - 0.09 = 0.91. Using the same reasoning as in part (a), the probability that both will be right-handed is $0.91 \times 0.91 = 0.8281$. Exercise: Suppose 5 people are selected at random.20 1. What is the probability that all are right-handed? 2. What is the probability that all are left-handed? 3. What is the probability that not all of the people are right-handed? Solution 20(a) The abbreviations RH and LH are used for right-handed and left-handed, respectively. Since the people are independent, we apply the Multiplication Rule for independent processes: $P(\text{all five are RH}) = P(\text{first} = RH, \text{second} = RH, \dots, \text{fifth} = RH)$ $= P(\text{first} = RH) \times P(\text{second} = RH) \times \dots \times P(\text{fifth} = RH)$ $= 0.91 \times 0.91 \times 0.91 \times 0.91 \times 0.91 = 0.624$ (b) Using the same reasoning as in (a), $0.09 \times 0.09 \times 0.09 \times 0.09 \times 0.09 = 0.0000059$ (c) Use the complement, P(all five are RH), to answer this question: $P(\text{not all RH}) = 1 - P(\text{all RH}) = 1 - 0.624 = 0.376$ Suppose the variables handedness and gender are independent, i.e. knowing someone's gender provides no useful information about their handedness and vice-versa. Then we can compute whether a randomly selected person is right-handed and female using the Multiplication Rule: $P(\text{right-handed and female}) = P(\text{right-handed}) \times P(\text{female})$ $= 0.91 \times 0.50 = 0.455$ The actual proportion of the U.S. population that is female is about 50%, and so we use 0.5 for the probability of sampling a woman. However, this probability does differ in other countries. Exercise: Three people are selected at random. 1. What is the probability that the first person is male and right-handed? 2. What is the probability that the first two people are male and right-handed? 3. What is the probability that the third person is female and left-handed? 4. What is the probability that the first two people are male and right-handed and the third person is female and left-handed? Solution Brief answers are provided. (a) This is the same as P(a randomly selected person is male and right-handed) = 0.455. (b) 0.207. (c) 0.045. (d) 0.0093. Sometimes we wonder if one outcome provides useful information about another outcome. The question we are asking is, are the occurrences of the two events independent?
We say that two events A and B are independent if they satisfy Equation \ref{2.29}. Example: If we shuffle up a deck of cards and draw one, is the event that the card is a heart independent of the event that the card is an ace? The probability the card is a heart is $\frac {1}{4}$ and the probability that it is an ace is $\frac {1}{13}$. The probability the card is the ace of hearts is $\frac {1}{52}$. We check whether Equation 2.29 is satisfied: $P(\heartsuit) \times P(\text{ace}) = \frac {1}{4} \times \frac {1}{13} = \frac {1}{52} = P(\heartsuit \text{ and ace})$ Because the equation holds, the event that the card is a heart and the event that the card is an ace are independent events.
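The same independence check can be carried out by exhaustively listing the 52 cards. The Python sketch below is our own illustration (the rank and suit labels are arbitrary strings), confirming that P(heart and ace) = P(heart) $\times$ P(ace) exactly.

```python
from fractions import Fraction
from itertools import product

# Build the 52-card deck as (rank, suit) pairs, all equally likely.
ranks = ["2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K", "A"]
suits = ["clubs", "diamonds", "hearts", "spades"]
deck = list(product(ranks, suits))

def prob(event):
    """Probability of an event, given as a true/false predicate on (rank, suit)."""
    return Fraction(sum(1 for card in deck if event(card)), len(deck))

p_heart = prob(lambda card: card[1] == "hearts")                 # 1/4
p_ace = prob(lambda card: card[0] == "A")                        # 1/13
p_ace_of_hearts = prob(lambda card: card == ("A", "hearts"))     # 1/52

print(p_ace_of_hearts == p_heart * p_ace)   # True: the events are independent
```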
Are students more likely to use marijuana when their parents used drugs? The drug use data set contains a sample of 445 cases with two variables, student and parents, and is summarized in Table $1$ The student variable is either uses or not, where a student is labeled as uses if she has recently used marijuana. The parents variable takes the value used if at least one of the parents used drugs, including alcohol. Table $1$: Contingency table summarizing the drug use data set. parents used not Total uses 125 94 219 not 85 141 226 Total 210 235 445 23Ellis GJ and Stone LH. 1979. Marijuana Use in College: An Evaluation of a Modeling Explanation. Youth and Society 10:323-334. Example $1$ If at least one parent used drugs, what is the chance their child (student) uses? Solution We will estimate this probability using the data. Of the 210 cases in this data set where parents = used, 125 represent cases where student = uses: $P (student = uses given parents = used) = \dfrac {125}{210} = 0.60$ Exercise $1$ A student is randomly selected from the study and she does not use drugs. What is the probability that at least one of her parents used? Solution If the student does not use drugs, then she is one of the 226 students in the second row. Of these 226 students, 85 had at least one parent who used drugs: $P (parents = used given student = not) = \dfrac {85}{226} = 0.376$ Marginal and Joint Probabilities Table $2$ includes row and column totals for each variable separately in the drug use data set. These totals represent marginal probabilities for the sample, which are the probabilities based on a single variable without conditioning on any other variables. For instance, a probability based solely on the student variable is a marginal probability: $P (student = uses) = \dfrac {219}{445} = 0.492$ A probability of outcomes for two or more variables or processes is called a joint probability: $P(student = uses and parents = not) = \dfrac {94}{445} = 0.21$ It is common to substitute a comma for "and" in a joint probability, although either is acceptable. Table $2$: Probability table summarizing parental and student drug use. parents: used parents: not Total student: uses 0.28 0.21 0.49 student: not 0.19 0.32 0.51 Total 0.47 0.53 1.00 Table $3$: A joint probability distribution for the drug use data set. Joint outcome Probability parents = used, student = uses 0.28 parents = used, student = not 0.19 parents = not, student = uses 0.21 parents = not, student = not 0.32 Total 1.00 Definition: Marginal and joint probabilities If a probability is based on a single variable, it is a marginal probability. The probability of outcomes for two or more variables or processes is called a joint probability. We use table proportions to summarize joint probabilities for the drug use sample. These proportions are computed by dividing each count in Table $1$ by 445 to obtain the proportions in Table $1$. The joint probability distribution of the parents and student variables is shown in Table 2.14. Exercise $1$ Verify Table $3$ represents a probability distribution: events are disjoint, all probabilities are non-negative, and the probabilities sum to 1.24 We can compute marginal probabilities using joint probabilities in simple cases. 
For example, the probability a random student from the study uses drugs is found by summing the outcomes from Table $3$ where student = uses: $P ( \underline {student = uses} )$ $= P (parents = used, \underline {student = uses} + P (parent = not, \underline {student = uses}$ $= 0.28 + 0.21 = 0.49$ Defining Conditional Probability There is some connection between drug use of parents and of the student: drug use of one is associated with drug use of the other (This is an observational study and no causal conclusions may be reached). In this section, we discuss how to use information about associations between two variables to improve probability estimation. The probability that a random student from the study uses drugs is 0.49. Could we update this probability if we knew that this student's parents used drugs? Absolutely. To do so, we limit our view to only those 210 cases where parents used drugs and look at the fraction where the student uses drugs: $P (student = uses given parents = used) = \dfrac {125}{210} = 0.60$ 24Each of the four outcome combination are disjoint, all probabilities are indeed non-negative, and the sum of the probabilities is 0.28 + 0.19 + 0.21 + 0.32 = 1.00. We call this a conditional probability because we computed the probability under a condition: parents = used. There are two parts to a conditional probability, the outcome of interest and the condition. It is useful to think of the condition as information we know to be true, and this information usually can be described as a known outcome or event. We separate the text inside our probability notation into the outcome of interest and the condition: $P (student = uses given parents = used) = P ( student = uses | parents = used ) \label {2.37}$ $= \dfrac {125}{210} = 0.60$ The vertical bar "|" is read as given. In Equation \ref{2.37}, we computed the probability a student uses based on the condition that at least one parent used as a fraction: \begin{align} P (student = uses | parents = used) &= \dfrac { \text {# times student = uses given parents = used}}{ \text {# times parents = used}} \label {2.38} \[5pt] &= \dfrac {125}{210} \[5pt] &= 0.60 \end{align} We considered only those cases that met the condition, parents = used, and then we computed the ratio of those cases that satis ed our outcome of interest, the student uses. Counts are not always available for data, and instead only marginal and joint probabilities may be provided. For example, disease rates are commonly listed in percentages rather than in a count format. We would like to be able to compute conditional probabilities even when no counts are available, and we use Equation (2.38) as an example demonstrating this technique. We considered only those cases that satis ed the condition, parents = used. Of these cases, the conditional probability was the fraction who represented the outcome of interest, student = uses. Suppose we were provided only the information in Table 2.13 on the preceding page ,i. e. only probability data. Then if we took a sample of 1000 people, we would anticipate about 47% or $0.47 X 1000 =470$ would meet our information criterion. Similarly, we would expect about 28% or $0.28 X 1000 = 280$ to meet both the information criterion and represent our outcome of interest. 
Thus, the conditional probability could be computed: $P (student = uses | parents = used) = \dfrac { \text {# times student = uses given parents = used}}{ \text {# times parents = used}}$ $\dfrac {280}{470} = \dfrac {0.28}{0.47} = 0.60 \label {2.39}$ In Equation (2.39), we examine exactly the fraction of two probabilities, 0.28 and 0.47, which we can write as $P (student = uses and parents = used) and P(parents = used)$. The fraction of these probabilities represents our general formula for conditional probability. Definition: Conditional Probability The conditional probability of the outcome of interest A given condition B is computed as the following: $P (A|B) = \dfrac { P (A and B)}{P (B) } \label {2.40}$ Exercise 2.41 (a)Write out the following statement in conditional probability notation: "The probability a random case has parents = not if it is known that student = not ". Notice that the condition is now based on the student, not the parent. (b) Determine the probability from part (a). Table 2.13 on page 81 may be helpful.26 Exercise 2.42 (a) Determine the probability that one of the parents had used drugs if it is known the student does not use drugs. (b) Using the answers from part (a) and Exercise 2.41(b), compute $P (parents = used | student = not ) + P (parents = not | student = not )$ (c) Provide an intuitive argument to explain why the sum in (b) is 1.27 Exercise 2.43 The data indicate that drug use of parents and children are associated. Does this mean the drug use of parents causes the drug use of the students?28 Smallpox in Boston, 1721 The smallpox data set provides a sample of 6,224 individuals from the year 1721 who were exposed to smallpox in Boston.29 Doctors at the time believed that inoculation, which involves exposing a person to the disease in a controlled form, could reduce the likelihood of death. Each case represents one person with two variables: inoculated and result. The variable inoculated takes two levels: yes or no, indicating whether the person was inoculated or not. The variable result has outcomes lived or died. These data are summarized in Tables 2.15 and 2.16. Exercise 2.44 Write out, in formal notation, the probability a randomly selected person who was not inoculated died from smallpox, and nd this probability.30 26(a) P(parent = notjstudent = not). (b) Equation (2.40) for conditional probability indicates we should rst nd P(parents = not and student = not) = 0:32 and P(student = not) = 0:51. Then the ratio represents the conditional probability: 0.32/0.51 = 0.63. 27(a) This probability is $\dfrac {P(parents = used and student = not)}{P(student = not)} = \dfrac {0.19}{0.51} = 0.37$. (b) The total equals 1. (c) Under the condition the student does not use drugs, the parents must either use drugs or not. The complement still appears to work when conditioning on the same information. 28No. This was an observational study. Two potential confounding variables include income and region. Can you think of others? 29Fenner F. 1988. Smallpox and Its Eradication (History of International Public Health, No. 6). Geneva: World Health rganization. ISBN 92-4-156110-6. 30P(result = died | inoculated = no) = $\dfrac {P(result = died and inoculated = no)}{P(inoculated = no)} = \dfrac {0.1356}{0.9608} = 0.1411$. Table 2.15: Contingency table for the smallpox data set. inoculated yes no Total lived 238 5136 5374 died 6 844 850 Total 244 5980 6224 Table 2.16: Table proportions for the smallpox data, computed by dividing each count by the table total, 6224. 
inoculated yes no Total lived 0.0382 0.8252 0.8634 died 0.0010 0.1356 0.1366 Total 0.0392 0.9608 1.0000 Exercise $1$ Determine the probability that an inoculated person died from smallpox. How does this result compare with the result of Exercise 2.44? Solution P(result = died | inoculated = yes) = $\dfrac {P(result = died and inoculated = yes)}{P(inoculated = yes)} = \dfrac {0.0010}{0.0392} = 0.0255$. The death rate for individuals who were inoculated is only about 1 in 40 while the death rate is about 1 in 7 for those who were not inoculated. Exercise $1$ The people of Boston self-selected whether or not to be inoculated. (a) Is this study observational or was this an experiment? (b) Can we infer any causal connection using these data? (c) What are some potential confounding variables that might influence whether someone lived or died and also a ect whether that person was inoculated? Solution Brief answers: (a) Observational. (b) No, we cannot infer causation from this observational study. (c) Accessibility to the latest and best medical care. There are other valid answers for part (c). General Multiplication Rule Section 2.1.6 introduced the Multiplication Rule for independent processes. Here we provide the General Multiplication Rule for events that might not be independent. General Multiplication Rule If A and B represent two outcomes or events, then $P (A and B) = P (A | B) \times P (B)$ It is useful to think of A as the outcome of interest and B as the condition. This General Multiplication Rule is simply a rearrangement of the definition for conditional probability in Equation (2.40) on page 83. Example $1$ Consider the smallpox data set. Suppose we are given only two pieces of information: 96.08% of residents were not inoculated, and 85.88% of the residents who were not inoculated ended up surviving. How could we compute the probability that a resident was not inoculated and lived? Solution We will compute our answer using the General Multiplication Rule and then verify it using Table 2.16. We want to determine $P (result = lived and inoculated = no )$ and we are given that $P (\text{result = lived | inoculated = no}) = 0.8588$ $P (\text{inoculated = no}) = 0.9608$ Among the 96.08% of people who were not inoculated, 85.88% survived: $P (\text{result = lived and inoculated = no}) = 0.8588 X 0.9608 = 0.8251$ This is equivalent to the General Multiplication Rule. We can con rm this probability in Table 2.16 at the intersection of no and lived (with a small rounding error). Exercise $1$ Use P(inoculated = yes) = 0:0392 and P(result = lived | inoculated = yes) = 0:9754 to determine the probability that a person was both inoculated and lived.33 33The answer is 0.0382, which can be veri ed using Table 2.16. Exercise $1$ If 97.45% of the people who were inoculated lived, what proportion of inoculated people must have died?34 34There were only two possible outcomes: lived or died. This means that 100% - 97.45% = 2.55% of the people who were inoculated died. Sum of conditional probabilities Let A1, ..., Ak represent all the disjoint outcomes for a variable or process. 
Then if B is an event, possibly for another variable or process, we have: $P ( A_1 | B ) + \dots + P ( A_k | B ) = 1$ The rule for complements also holds when an event and its complement are conditioned on the same information: $P ( A | B ) = 1 - P ( A^c | B )$ Exercise $1$ Based on the probabilities computed above, does it appear that inoculation is effective at reducing the risk of death from smallpox?35 35The samples are large relative to the difference in death rates for the "inoculated" and "not inoculated" groups, so it seems there is an association between inoculated and outcome. However, as noted in the solution to Exercise 2.46, this is an observational study and we cannot be sure if there is a causal connection. (Further research has shown that inoculation is effective at reducing death rates.) Independence Considerations in Conditional Probability If two processes are independent, then knowing the outcome of one should provide no information about the other. We can show this is mathematically true using conditional probabilities. Exercise 2.51 Let X and Y represent the outcomes of rolling two dice. (a) What is the probability that the rst die, X, is 1? (b) What is the probability that both X and Y are 1? (c) Use the formula for conditional probability to compute P(Y = 1 | X = 1). (d) What is P(Y = 1)? Is this different from the answer from part (c)? Explain.36 We can show in Exercise 2.51(c) that the conditioning information has no influence by using the Multiplication Rule for independence processes: $P ( Y = 1 | X = 1) = \dfrac {P (Y = 1 and X = 1 )}{P ( X = 1)}$ $= \dfrac { P (Y = 1) X P ( X = 1)}{P (X = 1)}$ $= P (Y = 1)$ Exercise $1$ Ron is watching a roulette table in a casino and notices that the last ve outcomes were black. He figures that the chances of getting black six times in a row is very small (about 1=64) and puts his paycheck on red. What is wrong with his reasoning?37 Tree diagrams Tree diagrams are a tool to organize outcomes and probabilities around the structure of the data. They are most useful when two or more processes occur in a sequence and each process is conditioned on its predecessors. The smallpox data t this description. We see the population as split by inoculation: yes and no. Following this split, survival rates were observed for each group. This structure is reflected in the tree diagram shown in Figure 2.17. The first branch for inoculation is said to be the primary branch while the other branches are secondary. Tree diagrams are annotated with marginal and conditional probabilities, as shown in Figure 2.17. This tree diagram splits the smallpox data by inoculation into the yes and no groups with respective marginal probabilities 0.0392 and 0.9608. The secondary branches are conditioned on the rst, so we assign conditional probabilities to these branches. For example, the top branch in Figure 2.17 is the probability that result = lived conditioned on the information that inoculated = yes. We may (and usually do) construct joint probabilities at the end of each branch in our tree by multiplying the numbers we come 36Brief solutions: (a) 1/6. (b) 1/36. (c) $\dfrac {P(Y = 1 and X= 1)}{P(X= 1)} = \dfrac {1/36}{1/6} = 1/6$. (d) The probability is the same as in part (c): P(Y = 1) = 1/6. The probability that Y = 1 was unchanged by knowledge about X, which makes sense as X and Y are independent. 37He has forgotten that the next roulette spin is independent of the previous spins. 
Casinos do employ this practice; they post the last several outcomes of many betting games to trick unsuspecting gamblers into believing the odds are in their favor. This is called the gambler's fallacy. across as we move from left to right. These joint probabilities are computed using the General Multiplication Rule: $P ( inoculated = yes and result = lives )$ $= P (inoculated = yes ) X P(rest = | inoculated = yes )$ $= 0.0392 X 0.9754 = 0.0382$ Example 2.53 Consider the midterm and nal for a statistics class. Suppose 13% of students earned an A on the midterm. Of those students who earned an A on the midterm, 47% received an A on the nal, and 11% of the students who earned lower than an A on the midterm received an A on the nal. You randomly pick up a final exam and notice the student received an A. What is the probability that this student earned an A on the midterm? The end-goal is to nd P(midterm = A | final = A). To calculate this conditional probability, we need the following robabilities: $P ( midterm = A and final = A ) and P (final = A)$ However, this information is not provided, and it is not obvious how to calculate these probabilities. Since we aren't sure how to proceed, it is useful to organize the information into a tree diagram, as shown in Figure 2.18. When constructing a tree diagram, variables provided with marginal probabilities are often used to create the tree's primary branches; in this case, the marginal probabilities are provided for midterm grades. The nal grades, which correspond to the conditional probabilities provided, will be shown on the secondary branches. With the tree diagram constructed, we may compute the required probabilities: $P (midterm = A and final = A ) = 0.0611$ $P \underline {(final = A)}$ $= P (midterm = other and \underline {final = A}) + P (midterm = A and \underline {final = A})$ $= 0.0611 + 0.0957 = 0.1568$ The marginal probability, P(final = A), was calculated by adding up all the joint probabilities on the right side of the tree that correspond to final = A. We may now nally take the ratio of the two probabilities: $\text {P (midterm = A | final = A)} = \dfrac {P(midterm = A and final)}{P (final = A)}$ $= \dfrac {0.0611}{0.1568} = 0.3897$ The probability the student also earned an A on the midterm is about 0.39. Exercise 2.54 After an introductory statistics course, 78% of students can successfully construct tree diagrams. Of those who can construct tree diagrams, 97% passed, while only 57% of those students who could not construct tree diagrams passed. (a) Organize this information into a tree diagram. (b) What is the probability that a randomly selected student passed? (c) Compute the probability a student is able to construct a tree diagram if it is known that she passed.38 Bayes' Theorem In many instances, we are given a conditional probability of the form $P(statement about variable 1 | statement about variable 2)$ but we would really like to know the inverted conditional probability: $P(statement about variable 2 | statement about variable 1)$ Tree diagrams can be used to find the second conditional probability when given the first. However, sometimes it is not possible to draw the scenario in a tree diagram. In these cases, we can apply a very useful and general formula: Bayes' Theorem. 38(a) The tree diagram is shown below. (b) Identify which two joint probabilities represent students who passed, and add them: P(passed) = 0.7566 + 0.1254 = 0.8820. (c) P(construct tree diagram | passed) = $\dfrac {0.7566}{0.8820} = 0.8578$. 
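The tree-diagram arithmetic in Example 2.53 can be verified with a short computation. This Python sketch (the variable names are ours) applies the General Multiplication Rule along each branch and then inverts the conditioning:

```python
# Primary branches: midterm grade, with the given marginal probabilities.
p_mid_A = 0.13
p_mid_other = 1 - p_mid_A                 # 0.87

# Secondary branches: final grade conditioned on the midterm grade.
p_final_A_given_mid_A = 0.47
p_final_A_given_mid_other = 0.11

# General Multiplication Rule gives the joint probability at the end of each branch.
p_A_and_A = p_mid_A * p_final_A_given_mid_A              # 0.0611
p_other_and_A = p_mid_other * p_final_A_given_mid_other  # 0.0957

# Marginal probability of an A on the final: add the joint probabilities where final = A.
p_final_A = p_A_and_A + p_other_and_A                    # 0.1568

# Invert the conditioning: P(midterm = A | final = A)
print(round(p_A_and_A / p_final_A, 4))                   # 0.3897
```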
We first take a critical look at an example of inverting conditional probabilities where we still apply a tree diagram. Example $1$ In Canada, about 0.35% of women over 40 will be diagnosed with breast cancer in any given year. A common screening test for cancer is the mammogram, but this test is not perfect. In about 11% of patients with breast cancer, the test gives a false negative: it indicates a woman does not have breast cancer when she does have breast cancer. Similarly, the test gives a false positive in 7% of patients who do not have breast cancer: it indicates these patients have breast cancer when they actually do not.39 If we tested a random woman over 40 for breast cancer using a mammogram and the test came back positive, that is, the test suggested the patient has cancer, what is the probability that the patient actually has breast cancer? 39The probabilities reported here were obtained using studies reported at www.breastcancer.org and www.ncbi.nlm.nih.gov/pmc/articles/PMC1173421. Solution Notice that we are given sufficient information to quickly compute the probability of testing positive if a woman has breast cancer (1.00 - 0.11 = 0.89). However, we seek the inverted probability of cancer given a positive test result. Watch out for the non-intuitive medical language: a positive test result suggests the possible presence of cancer in a mammogram screening. This inverted probability may be broken into two pieces: $P (has BC | mammogram^+) = \frac {P (has BC and mammogram^+)}{P (mammogram^+)}$ where "has BC" is an abbreviation for the patient actually having breast cancer and "$mammogram^+$" means the mammogram screening was positive. A tree diagram is useful for identifying each probability and is shown in Figure 2.19. The probability the patient has breast cancer and the mammogram is positive is $P(has BC and mammogram^+) = P(mammogram^+ | has BC) P(has BC) = 0.89 \times 0.0035 = 0.00312$ The probability of a positive test result is the sum of the two corresponding scenarios: $P (mammogram^+ ) = P (mammogram^+ and has BC) + P(mammogram^+ and no BC)$ $= P(has BC)P(mammogram^+ | has BC) + P(no BC)P(mammogram^+ | no BC)$ $= 0.0035 \times 0.89 + 0.9965 \times 0.07 = 0.07288$ Then if the mammogram screening is positive for a patient, the probability the patient has breast cancer is $P (has BC | mammogram^+) = \frac {P(has BC and mammogram^+)}{P(mammogram^+)}$ $= \frac {0.00312}{0.07288} \approx 0.0428$ That is, even if a patient has a positive mammogram screening, there is still only about a 4% chance that she has breast cancer.
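The same mammogram calculation can be checked numerically. The sketch below is a Python rendering of the tree diagram in Figure 2.19 (the variable names are ours) and reproduces the roughly 4% figure:

```python
# Invert P(mammogram+ | has BC) into P(has BC | mammogram+).
p_bc = 0.0035               # prevalence of breast cancer among women over 40
p_pos_given_bc = 1 - 0.11   # 0.89, chance of a positive test given cancer
p_pos_given_no_bc = 0.07    # false positive rate

# Numerator: probability of having cancer AND testing positive
num = p_bc * p_pos_given_bc

# Denominator: total probability of a positive test (both branches of the tree)
p_pos = num + (1 - p_bc) * p_pos_given_no_bc

print(round(num / p_pos, 3))  # about 0.043, the roughly 4% chance quoted in the text
```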
Example 2.55 highlights why doctors often run more tests regardless of a first positive test result. When a medical condition is rare, a single positive test isn't generally definitive. Consider again the last equation of Example 2.55. Using the tree diagram, we can see that the numerator (the top of the fraction) is equal to the following product: $P(has BC and mammogram^+) = P(mammogram^+ | has BC)P(has BC)$ The denominator, the probability the screening was positive, is equal to the sum of probabilities for each positive screening scenario: $P(mammogram^+) = P(mammogram^+ and no BC) + P(mammogram^+ and has BC)$ In the example, each of the probabilities on the right side was broken down into a product of a conditional probability and marginal probability using the tree diagram. $P(\underline {mammogram^+}) = P(\underline {mammogram^+} and no BC) + P(\underline {mammogram^+} and has BC)$ $= P(mammogram^+ | no BC)P(no BC) + P(mammogram^+| has BC)P(has BC)$ We can see an application of Bayes' Theorem by substituting the resulting probability expressions into the numerator and denominator of the original conditional probability: $P (has BC | mammogram^+) = \frac {P (mammogram^+ | has BC) P (has BC)}{P(mammogram^+ | no BC)P(no BC) + P(mammogram^+ | has BC)P(has BC)}$ Bayes' Theorem: inverting probabilities Consider the following conditional probability for variable 1 and variable 2: $P (outcome\ A_1\ of\ variable\ 1 | outcome\ B\ of\ variable\ 2)$ Bayes' Theorem states that this conditional probability can be identified as the following fraction: $\frac {P(B|A_1)P(A_1)}{P(B|A_1)P(A_1) + P(B|A_2)P(A_2) + \dots + P(B|A_k)P(A_k)} \label {2.56}$ where $A_2$, $A_3$, ..., and $A_k$ represent all other possible outcomes of the first variable. Bayes' Theorem is just a generalization of what we have done using tree diagrams. The numerator identifies the probability of getting both $A_1$ and B. The denominator is the marginal probability of getting B. This bottom component of the fraction appears long and complicated since we have to add up probabilities from all of the different ways to get B. We always completed this step when using tree diagrams. However, we usually did it in a separate step so it didn't seem as complex. To apply Bayes' Theorem correctly, there are two preparatory steps: (1) First identify the marginal probabilities of each possible outcome of the first variable: $P(A_1), P(A_2), \dots, P(A_k)$. (2) Then identify the probability of the outcome B, conditioned on each possible scenario for the first variable: $P(B|A_1), P(B|A_2), \dots, P(B|A_k)$. Once each of these probabilities is identified, they can be applied directly within the formula. TIP: Only use Bayes' Theorem when tree diagrams are difficult Drawing a tree diagram makes it easier to understand how two variables are connected. Use Bayes' Theorem only when there are so many scenarios that drawing a tree diagram would be complex. Exercise 2.57 Jose visits campus every Thursday evening. However, some days the parking garage is full, often due to college events. There are academic events on 35% of evenings, sporting events on 20% of evenings, and no events on 45% of evenings. When there is an academic event, the garage fills up about 25% of the time, and it fills up 70% of evenings with sporting events. On evenings when there are no events, it only fills up about 5% of the time. If Jose comes to campus and finds the garage full, what is the probability that there is a sporting event?
Use a tree diagram to solve this problem.40 Example 2.58 Here we solve the same problem presented in Exercise 2.57, except this time we use Bayes' Theorem. The outcome of interest is whether there is a sporting event (call this A1), and the condition is that the lot is full (B). Let A2 represent an academic event and A3 represent there being no event on campus. Then the given probabilities can be written as $P (A_1) = 0.2 P (A_2) = 0.35 P (A_3) = 0.45$ $P (B|A_1) = 0.7 P (B|A_2) = 0.25 P (B|A_3) = 0.05$ Bayes' Theorem can be used to compute the probability of a sporting event (A1) under the condition that the parking lot is full (B): $P(A_1|B) = \frac {P(B|A_1)P(A_1)}{P(B|A_1)P(A_1) + P(B|A_2)P(A_2) + P(B|A_3)P(A_3)}$ $= \frac {(0.7)(0.2)}{(0.7)(0.2) + (0.25)(0.35) + (0.05)(0.45)}$ $= 0.56$ Based on the information that the garage is full, there is a 56% probability that a sporting event is being held on campus that evening. Exercise 2.59 Use the information in the previous exercise and example to verify the probability that there is an academic event conditioned on the parking lot being full is 0.35.41 40The tree diagram, with three primary branches, is shown below. Next, we identify two probabilities from the tree diagram. (1) The probability that there is a sporting event and the garage is full: 0.14. (2) The probability the garage is full: $0.0875 + 0.14 + 0.0225 = 0.25$. Then the solution is the ratio of these probabilities: $\frac {0.14}{0.25} = 0.56$. If the garage is full, there is a 56% probability that there is a sporting event. 41Short answer: $P(A2|B) = \frac {P(B|A2)P(A1)}{P(B|A1)P(A1) + P(B|A2)P(A2) + P(B|A3)P(A3)}$ $= \frac {(0.25)(0.35)}{(0.7)(0.2) + (0.25)(0.35) + (0.05)(0.45)}$ $= 0.35$ Exercise $1$ Exercise 2.60 In Exercise 2.57 and 2.59, you found that if the parking lot is full, the probability a sporting event is 0.56 and the probability there is an academic event is 0.35. Using this information, compute P(no event j the lot is full).42 The last several exercises offered a way to update our belief about whether there is a sporting event, academic event, or no event going on at the school based on the information that the parking lot was full. This strategy of updating beliefs using Bayes' Theorem is actually the foundation of an entire section of statistics called Bayesian statistics. While Bayesian statistics is very important and useful, we will not have time to cover much more of it in this book.
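Because Bayes' Theorem is just a formula, it is easy to wrap in a small helper and reuse. The function below is an illustrative Python sketch (not part of the text) that applies Equation (2.56) to the parking garage example:

```python
def bayes(prior, likelihood, k):
    """P(A_k | B) via Bayes' Theorem (Equation 2.56).

    prior[i] = P(A_i) and likelihood[i] = P(B | A_i); the outcomes A_i must be
    disjoint and cover all possibilities.
    """
    numerator = likelihood[k] * prior[k]
    denominator = sum(l * p for l, p in zip(likelihood, prior))
    return numerator / denominator

# Parking garage example: A_1 = sporting event, A_2 = academic event, A_3 = no event;
# B = the garage is full.
prior = [0.20, 0.35, 0.45]
likelihood = [0.70, 0.25, 0.05]

print(round(bayes(prior, likelihood, 0), 2))  # 0.56, sporting event given a full garage
print(round(bayes(prior, likelihood, 1), 2))  # 0.35, academic event given a full garage
print(round(bayes(prior, likelihood, 2), 2))  # 0.09, no event given a full garage
```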
Example $1$ Professors sometimes select a student at random to answer a question. If each student has an equal chance of being selected and there are 15 people in your class, what is the chance that she will pick you for the next question? Solution If there are 15 people to ask and none are skipping class, then the probability is 1=15, or about 0:067. Example $2$ If the professor asks 3 questions, what is the probability that you will not be selected? Assume that she will not pick the same person twice in a given lecture. Solution For the first question, she will pick someone else with probability 14=15. When she asks the second question, she only has 14 people who have not yet been asked. Thus, if you were not picked on the first question, the probability you are again not picked is 13=14. Similarly, the probability you are again not picked on the third question is 12=13, and the probability of not being picked for any of the three questions is $P \text{ (not picked in 3 questions)} = P (Q1 = not-picked, Q2 = not-picked, Q3 = not-picked)$ $= \dfrac {14}{15} \times \dfrac {13}{14} \times \dfrac {12}{13} = \dfrac {12}{15} = 0.80$ Exercise $1$ What rule permitted us to multiply the probabilities in Example $2$? Answer The three probabilities we computed were actually one marginal probability, P(Q1=not picked), and two conditional probabilities: P(Q2 = not picked | Q1 = not picked) P(Q3 = not picked | Q1 = not picked, Q2 = not picked) Using the General Multiplication Rule, the product of these three probabilities is the probability of not being picked in 3 questions. 42Each probability is conditioned on the same information that the garage is full, so the complement may be used: $1.00 - 0.56 - 0.35 = 0.09$. Example $3$ Suppose the professor randomly picks without regard to who she already selected, i.e. students can be picked more than once. What is the probability that you will not be picked for any of the three questions? Solution Each pick is independent, and the probability of not being picked for any individual question is 14=15. Thus, we can use the Multiplication Rule for independent processes. $P \text{ (not picked in 3 questions)} = P (Q1 = not-picked, Q2 = not-picked, Q3 = not-picked)$ $= \dfrac {14}{15} \times \dfrac {14}{15} \times \dfrac {14}{15} = 0.813$ You have a slightly higher chance of not being picked compared to when she picked a new person for each question. However, you now may be picked more than once. Exercise $2$ Under the setup of Example $3$, what is the probability of being picked to answer all three questions? Solution P(not being picked on any of the three questions) = $\dfrac { 1}{15}^3 = 0.00030$. If we sample from a small population without replacement, we no longer have independence between our observations. In Example $2$, the probability of not being picked for the second question was conditioned on the event that you were not picked for the first question. In Example $4$, the professor sampled her students with replacement: she repeatedly sampled the entire class without regard to who she already picked. Exercise $2$ Your department is holding a raffle. They sell 30 tickets and offer seven prizes. 1. They place the tickets in a hat and draw one for each prize. The tickets are sampled without replacement, i.e. the selected tickets are not placed back in the hat. What is the probability of winning a prize if you buy one ticket? 2. What if the tickets are sampled with replacement? Answer (a) First determine the probability of not winning. 
The tickets are sampled without replacement, which means the probability you do not win on the first draw is 29/30, 28/29 for the second, ..., and 23/24 for the seventh. The probability you win no prize is the product of these separate probabilities: 23/30. That is, the probability of winning a prize is 1 - 23/30 = 7/30 = 0.233. (b) When the tickets are sampled with replacement, there are seven independent draws. Again we first nd the probability of not winning a prize: $\dfrac {29}{30}^7 = 0.789$. Thus, the probability of winning (at least) one prize when drawing with replacement is 0.211. Exercise $3$ Compare your answers in Exercise $2$. How much influence does the sampling method have on your chances of winning a prize? Answer There is about a 10% larger chance of winning a prize when using sampling without replacement. However, at most one prize may be won under this sampling procedure. Had we repeated Exercise $2$ with 300 tickets instead of 30, we would have found something interesting: the results would be nearly identical. The probability would be 0.0233 without replacement and 0.0231 with replacement. When the sample size is only a small fraction of the population (under 10%), observations are nearly independent even when sampling without replacement.
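The raffle probabilities can also be confirmed by direct computation. The Python sketch below (our own illustration, not part of the exercise) compares the two sampling schemes:

```python
from math import prod

# Raffle with 30 tickets and 7 prize draws; you hold one ticket.
n_tickets, n_draws = 30, 7

# Without replacement: the pool shrinks by one ticket after each draw,
# so the chance of not winning is 29/30 * 28/29 * ... * 23/24 = 23/30.
p_lose_without = prod((n_tickets - 1 - i) / (n_tickets - i) for i in range(n_draws))
print(round(1 - p_lose_without, 3))  # 0.233, probability of winning at least one prize

# With replacement: the seven draws are independent.
p_lose_with = ((n_tickets - 1) / n_tickets) ** n_draws
print(round(1 - p_lose_with, 3))     # 0.211
```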
Example 2.68 Two books are assigned for a statistics class: a textbook and its corresponding study guide. The iversity bookstore determined 20% of enrolled students do not buy either book, 55% buy the textbook, and 25% buy both books, and these percentages are relatively constant from one term to another. If there are 100 students enrolled, how many books should the bookstore expect to sell to this class? Around 20 students will not buy either book (0 books total), about 55 will buy one book (55 books total), and approximately 25 will buy two books (totaling 50 books for these 25 students). The bookstore should expect to sell about 105 books for this class. Exercise 2.69 Would you be surprised if the bookstore sold slightly more or less than 105 books?47 Example 2.70 The textbook costs $137 and the study guide$33. How much revenue should the bookstore expect from this class of 100 students? Solution About 55 students will just buy a textbook, providing revenue of $137 X 55 = 7, 535$ The roughly 25 students who buy both the textbook and the study guide would pay a total of $(137 + 33) X 25 = 170 X 25 = 4,250$ Thus, the bookstore should expect to generate about $7; 535 +$4; 250 = $11; 785 from these 100 students for this one class. However, there might be some sampling variability so the actual amount may differ by a little bit. Example 2.71 What is the average revenue per student for this course? The expected total revenue is$11,785, and there are 100 students. Therefore the expected revenue per student is $11; 785=100 =$117:85. Expectation We call a variable or process with a numerical outcome a random variable, and we usually represent this random variable with a capital letter such as X, Y , or Z. The amount of money a single student will spend on her statistics books is a random variable, and we represent it by X. Random variable A random process or variable with a numerical outcome. 47If they sell a little more or a little less, this should not be a surprise. Hopefully Chapter 1 helped make clear that there is natural variability in observed data. For example, if we would ip a coin 100 times, it will not usually come up heads exactly half the time, but it will probably be close. Table 2.21: The probability distribution for the random variable X, representing the bookstore's revenue from a single student. i 1 2 3 Total $x_i$ $0$137 $170 - $P(X = x_i)$ 0.20 0.55 0.25 1.00 The possible outcomes of X are labeled with a corresponding lower case letter x and subscripts. For example, we write x1 =$0, x2 = $137, and x3 =$170, which occur with probabilities 0:20, 0:55, and 0:25. The distribution of X is summarized in Figure 2.20 and Table 2.21. We computed the average outcome of X as $117.85 in Example 2.71. We call this average the expected E(X) value of X, denoted by E(X). The expected value of a random variable is computed by adding each outcome weighted by its probability: $E (X) = 0 X P (X = 0) + 137 X P (X = 137) + 170 X P (X = 170)$ $= 0 X 0.20 + 137 X 0.55 + 170 X 0.25 = 117.85$ Expected value of a Discrete Random Variable If X takes outcomes x1, ..., xk with probabilities P(X = x1), ..., P(X = xk), the expected value of X is the sum of each outcome multiplied by its corresponding probability: $E(X) = x_1 X P (X = x_1) + \dots + x_k X P ( X = x_k)$ $= \sum \limits_{i=1}^k x_i P (X = x_i)$ The Greek letter $\mu$may be used in place of the notation E(X). The expected value for a random variable represents the average outcome. 
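A short computation can confirm the expected value for the bookstore example. The Python sketch below (the variable names are ours) applies the weighted-sum definition of E(X) to the distribution in Table 2.21:

```python
# Revenue X from a single statistics student (Table 2.21)
outcomes = [0, 137, 170]
probs = [0.20, 0.55, 0.25]

# E(X): each outcome weighted by its probability
expected = sum(x * p for x, p in zip(outcomes, probs))
print(round(expected, 2))  # 117.85
```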
For example, E(X) = 117.85 represents the average amount the bookstore expects to make from a single student, which we could also write as $\mu = 117.85$. It is also possible to compute the expected value of a continuous random variable (see Section 2.5). However, it requires a little calculus and we save it for a later class.48 In physics, the expectation holds the same meaning as the center of gravity. The distribution can be represented by a series of weights at each outcome, and the mean represents the balancing point. This is represented in Figures 2.20 and 2.22. The idea of a center of gravity also expands to continuous probability distributions. Figure 2.23 shows a continuous probability distribution balanced atop a wedge placed at the mean. 48 $\mu = \int$ x f(x)dx where f(x) represents a function for the density curve. Variability in random variables Suppose you ran the university bookstore. Besides how much revenue you expect to generate, you might also want to know the volatility (variability) in your revenue. The variance and standard deviation can be used to describe the variability of a random variable. Section 1.6.4 introduced a method for nding the variance and standard deviation for a data set. We first computed deviations from the mean $(x_i - \mu )$, squared those deviations, and took an average to get the variance. In the case of a random variable, we again compute squared deviations. However, we take their sum weighted by their corresponding probabilities, just like we did for the expectation. This weighted sum of squared deviations equals the variance, and we calculate the standard deviation by taking the square root of the variance, just as we did in Section 1.6.4. General variance formula If X takes outcomes $x_1, \dots, x_k$with probabilities P(X = x1), ..., P$(X = x_k)$ and expected value $\mu$= E(X), then the variance of X, denoted by V ar (X) or the symbol $\sigma^2$, is $\sigma^2 = {(x_1 - \mu)}^2 X P(X = x_1) + \dots + {(x_k - \mu)^2} X P(X = x_k)$ $= \sum \limits_{j=1}^k {(x_j - \mu )}^2 P (X = x_j) \tag {2.73}$ The standard deviation of X, labeled $\sigma$, is the square root of the variance. Example 2.74 Compute the expected value, variance, and standard deviation of X , the revenue of a single statistics student for the bookstore. It is useful to construct a table that holds computations for each outcome separately, then add up the results. i 1 2 3 Total $x_i$$0 $137$170 $P(X = x_i)$ 0.20 0.55 0.25 $x_i\times P(X = x_i)$ 0 75.35 42.50 117.85 Thus, the expected value is $\mu$ = 117:85, which we computed earlier. The variance can be constructed by extending this table: i 1 2 3 Total $x_i$ $0$137 $170 $P(X = x_i)$ 0.20 0.55 0.25 $x_i\times P(X = x_i)$ 0 75.35 42.50 117.85 $x_i -\mu$ -117.85 19.15 52.15 $(x_i -\mu)^2$ 13888.62 366.72 2719.62 $(x_i -\mu)^2 \times P(X - x_i)$ 2777.7 201.7 679.9 3659.3 The variance of X is $\sigma^2= 3659.3$, which means the standard deviation is $\sigma = \sqrt {3659.3} = 60.49$ Exercise 2.75 The bookstore also offers a chemistry textbook for$159 and a book supplement for $41. From past experience, they know about 25% of chemistry students just buy the textbook while 60% buy both the textbook and supplement.49 1. What proportion of students don't buy either book? Assume no students buy the supplement without the textbook. 2. Let Y represent the revenue from a single student. Write out the probability distribution of Y , i.e. a table for each outcome and its associated probability. 3. 
Compute the expected revenue from a single chemistry student. 4. Find the standard deviation to describe the variability associated with the revenue from a single student. Linear combinations of random variables So far, we have thought of each variable as being a complete story in and of itself. Sometimes it is more appropriate to use a combination of variables. For instance, the amount of time a person spends commuting to work each week can be broken down into several daily commutes. Similarly, the total gain or loss in a stock portfolio is the sum of the gains and losses in its components. Example 2.76 John travels to work ve days a week. We will use X1 to represent his travel time on Monday, X2 to represent his travel time on Tuesday, and so on. Write an equation using X1, ..., X5 that represents his travel time for the week, denoted by W. His total weekly travel time is the sum of the ve daily values: $W = X1 + X2 + X3 + X4 + X5$ Breaking the weekly travel timeW into pieces provides a framework for undefirstanding each source of randomness and is useful for modeling W. Example 2.77 It takes John an average of 18 minutes each day to commute to work. What would you expect his average commute time to be for the week? We were told that the average (i.e. expected value) of the commute time is 18 minutes per day: E(Xi) = 18. To get the expected time for the sum of the ve days, we can add up the expected time for each individual day: $E(W) = E(X1 + X2 + X3 + X4 + X5)$ $= E(X1) + E(X2) + E(X3) + E(X4) + E(X5)$ $= 18 + 18 + 18 + 18 + 18 = 90 minutes$ 49(a) 100% - 25% - 60% = 15% of students do not buy any books for the class. Part (b) is represented by the first two lines in the table below. The expectation for part (c) is given as the total on the line $y_i P(Y = y_i)$. The result of part (d) is the square-root of the variance listed on in the total on the last line: $\sigma = \sqrt {V ar (Y)} = 69.28$. The expectation of the total time is equal to the sum of the expected individual times. More generally, the expectation of a sum of random variables is always the sum of the expectation for each random variable. Exercise 2.78 Elena is selling a TV at a cash auction and also intends to buy a toaster oven in the auction. If X represents the profit for selling the TV and Y represents the cost of the toaster oven, write an equation that represents the net change in Elena's cash.50 Exercise 2.79 Based on past auctions, Elena gures she should expect to make about$175 on the TV and pay about $23 for the toaster oven. In total, how much should she expect to make or spend?51 Exercise 2.80 Would you be surprised if John's weekly commute wasn't exactly 90 minutes or if Elena didn't make exactly$152? Explain.52 Two important concepts concerning combinations of random variables have so far been introduced. Fifirst, a nal value can sometimes be described as the sum of its parts in an equation. Second, intuition suggests that putting the individual average values into this equation gives the average value we would expect in total. This second point needs clari cation - it is guaranteed to be true in what are called linear combinations of random variables. A linear combination of two random variables X and Y is a fancy phrase to describe a combination $aX + bY$ where a and b are some xed and known numbers. 
For John's commute time, there were ve random variables - one for each work day - and each random variable could be written as having a xed coefficient of 1: $1X1 + 1X2 + 1X3 + 1X4 + 1X5$ For Elena's net gain or loss, the X random variable had a coefficient of +1 and the Y random variable had a coefficient of -1. When considering the average of a linear combination of random variables, it is safe to plug in the mean of each random variable and then compute the nal result. For a few examples of nonlinear combinations of random variables - cases where we cannot simply plug in the means - see the footnote.53 50She will make X dollars on the TV but spend Y dollars on the toaster oven: X - Y . 51E(X - Y ) = E(X) - E(Y ) = 175 - 23 = $152. She should expect to make about$152. 52No, since there is probably some variability. For example, the tra�c will vary from one day to next, and auction prices will vary depending on the quality of the merchandise and the interest of the attendees. 53If X and Y are random variables, consider the following combinations: $X^{1+Y} , X x Y ,\frac{ X}{Y}$ . In such cases, plugging in the average value for each random variable and computing the result will not generally lead to an accurate average value for the end result. Linear combinations of random variables and the average result If X and Y are random variables, then a linear combination of the random variables is given by $aX + bY \tag {2.81}$ where a and b are some xed numbers. To compute the average value of a linear combination of random variables, plug in the average of each individual random variable and compute the result: $a x E(X) + b x E(Y )$ Recall that the expected value is the same as the mean, e.g. E(X) = $\mu _X$. Example 2.82 Leonard has invested $6000 in Google Inc. (stock ticker: GOOG) and$2000 in Exxon Mobil Corp. (XOM). If X represents the change in Google's stock next month and Y represents the change in Exxon Mobil stock next month, write an equation that describes how much money will be made or lost in Leonard's stocks for the month. For simplicity, we will suppose X and Y are not in percents but are in decimal form (e.g. if Google's stock increases 1%, then X = 0:01; or if it loses 1%, then X = -0:01). Then we can write an equation for Leonard's gain as $6000 x X + 2000 x Y$ If we plug in the change in the stock value for X and Y , this equation gives the change in value of Leonard's stock portfolio for the month. A positive value represents a gain, and a negative value represents a loss. Exercise 2.83 Suppose Google and Exxon Mobil stocks have recently been rising 2.1% and 0.4% per month, respectively. Compute the expected change in Leonard's stock portfolio for next month.54 Exercise 2.84 You should have found that Leonard expects a positive gain in Exercise 2.83. However, would you be surprised if he actually had a loss this month?55 Variability in linear combinations of random variables Quantifying the average outcome from a linear combination of random variables is helpful, but it is also important to have some sense of the uncertainty associated with the total outcome of that combination of random variables. The expected net gain or loss of Leonard's stock portfolio was considered in Exercise 2.83. However, there was no quantitative discussion of the volatility of this portfolio. For instance, while the average monthly gain might be about $134 according to the data, that gain is not guaranteed. 
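The portfolio expectation in Exercise 2.83 can be checked by plugging the means into the linear combination, as in the illustrative Python sketch below (the variable names are ours):

```python
# Leonard's portfolio: $6000 in GOOG and $2000 in XOM.
a, b = 6000, 2000
mean_goog, mean_xom = 0.021, 0.004  # expected monthly returns, in decimal form

# E(aX + bY) = a * E(X) + b * E(Y)
expected_gain = a * mean_goog + b * mean_xom
print(round(expected_gain, 2))  # 134.0 dollars per month, on average
```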
Figure 2.24 shows the monthly changes in a portfolio like Leonard's during the 36 months from 2009 to 2011. The gains and losses vary widely, and quantifying these uctuations is important when investing in stocks. 54E($6000 X + $2000 Y ) =$6000 0:021 + $2000 0:004 =$134. 55No. While stocks tend to rise over time, they are often volatile in the short term. Table 2.25: The mean, standard deviation, and variance of the GOOG and XOM stocks. These statistics were estimated from historical stock data, so notation used for sample statistics has been used. Mean ($\bar {x}$) Standard deviation (s) Variance (s2) GOOG 0.0210 0.0846 0.0072 XOM 0.0038 0.0519 0.0027 Just as we have done in many previous cases, we use the variance and standard deviation to describe the uncertainty associated with Leonard's monthly returns. To do so, the variances of each stock's monthly return will be useful, and these are shown in Table 2.25. The stocks' returns are nearly independent. Here we use an equation from probability theory to describe the uncertainty of Leonard's monthly returns; we leave the proof of this method to a dedicated probability course. The variance of a linear combination of random variables can be computed by plugging in the variances of the individual random variables and squaring the coefficients of the random variables: $V ar(aX + bY ) = a^2 V ar(X) + b^2 V ar(Y )$ It is important to note that this equality assumes the random variables are independent; if independence doesn't hold, then more advanced methods are necessary. This equation can be used to compute the variance of Leonard's monthly return: $V ar(6000 X + 2000 Y ) = 6000^2 V ar(X) + 2000^2 V ar(Y ) = 36, 000, 000 X 0.0072 + 4, 000, 000 X 0.0027$ $= 270, 000$ The standard deviation is computed as the square root of the variance: $\sqrt {270, 000} = 520$. While an average monthly return of $134 on an$8000 investment is nothing to scoff at, the monthly returns are so volatile that Leonard should not expect this income to be very stable. Variability of linear combinations of random variables The variance of a linear combination of random variables may be computed by squaring the constants, substituting in the variances for the random variables, and computing the result: $V ar(aX + bY ) = a^2 X V ar(X) + b^2 X V ar(Y )$ This equation is valid as long as the random variables are independent of each other. The standard deviation of the linear combination may be found by taking the square root of the variance. Example 2.85 Suppose John's daily commute has a standard deviation of 4 minutes. What is the uncertainty in his total commute time for the week? Solution The expression for John's commute time was $X_1 + X_2 + X_3 + X_4 + X_5$ Each coefficient is 1, and the variance of each day's time is $4^2 = 16$. Thus, the variance of the total weekly commute time is $variance = 1^2 X 16 + 1^2 X 16 + 1^2 X 16 + 1^2 X 16 + 1^2 X 16 = 5 X 16 = 80$ $standard deviation = \sqrt {variance} = \sqrt {80} = 8.94$ The standard deviation for John's weekly work commute time is about 9 minutes. Exercise 2.86 The computation in Example 2.85 relied on an important assumption: the commute time for each day is independent of the time on other days of that week. Do you think this is valid? Explain.56 Exercise 2.87 Consider Elena's two auctions from Exercise 2.78 on page 100. Suppose these auctions are approximately independent and the variability in auction prices associated with the TV and toaster oven can be described using standard deviations of $25 and$8. 
Compute the standard deviation of Elena's net gain.57 Consider again Exercise 2.87. The negative coefficient for Y in the linear combination was eliminated when we squared the coefficients. This generally holds true: negatives in a linear combination will have no impact on the variability computed for a linear combination, but they do impact the expected value computations. 56One concern is whether traffic patterns tend to have a weekly cycle (e.g. Fridays may be worse than other days). If that is the case, and John drives, then the assumption is probably not reasonable. However, if John walks to work, then his commute is probably not affected by any weekly traffic cycle. 57The equation for Elena can be written as $(1) \times X + (-1) \times Y$ The variances of X and Y are 625 and 64. We square the coefficients and plug in the variances: $(1)^2 \times Var(X) + (-1)^2 \times Var(Y) = 1 \times 625 + 1 \times 64 = 689$ The variance of the linear combination is 689, and the standard deviation is the square root of 689: about \$26.25.
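Both variance calculations above follow the same pattern, which the short Python sketch below (our own illustration) makes explicit:

```python
# Variance of a linear combination of independent random variables:
# Var(aX + bY) = a^2 * Var(X) + b^2 * Var(Y)

# Leonard's portfolio, using the variances from Table 2.25
var_portfolio = 6000**2 * 0.0072 + 2000**2 * 0.0027
print(round(var_portfolio), round(var_portfolio**0.5))  # 270000 and a standard deviation of about 520

# Elena's net gain X - Y, with standard deviations of $25 and $8; the -1 coefficient
# disappears when squared, so the negative sign does not reduce the variability.
var_elena = (1)**2 * 25**2 + (-1)**2 * 8**2
print(var_elena, round(var_elena**0.5, 2))  # 689 and about 26.25
```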
Example 2.88 Figure 2.26 shows a few different hollow histograms of the variable height for 3 million US adults from the mid-90's.58 How does changing the number of bins allow you to make different interpretations of the data? Adding more bins provides greater detail. This sample is extremely large, which is why much smaller bins still work well. Usually we do not use so many bins with smaller sample sizes since small counts per bin mean the bin heights are very volatile. Example 2.89 What proportion of the sample is between 180 cm and 185 cm tall (about 5'11" to 6'1")? We can add up the heights of the bins in the range 180 cm and 185 and divide by the sample size. For instance, this can be done with the two shaded bins shown in Figure 2.27. The two bins in this region have counts of 195,307 and 156,239 people, resulting in the following estimate of the probability: $\frac {195307 + 156239}{3,000,000} = 0.1172$ This fraction is the same as the proportion of the histogram's area that falls in the range 180 to 185 cm. 58This sample can be considered a simple random sample from the US population. It relies on the USDA Food Commodity Intake Database. From histograms to continuous distributions Examine the transition from a boxy hollow histogram in the top-left of Figure 2.26 to the much smoother plot in the lower-right. In this last plot, the bins are so slim that the hollow histogram is starting to resemble a smooth curve. This suggests the population height as a continuous numerical variable might best be explained by a curve that represents the outline of extremely slim bins. This smooth curve represents a probability density function (also called a density or distribution), and such a curve is shown in Figure 2.28 overlaid on a histogram of the sample. A density has a special property: the total area under the density's curve is 1. Probabilities from continuous distributions We computed the proportion of individuals with heights 180 to 185 cm in Example 2.89 as a fraction: $\frac {number of people between 180 and 185}{total sample size}$ We found the number of people with heights between 180 and 185 cm by determining the fraction of the histogram's area in this region. Similarly, we can use the area in the shaded region under the curve to nd a probability (with the help of a computer): $P(height between 180 and 185) = area between 180 and 185 = 0.1157$ The probability that a randomly selected person is between 180 and 185 cm is 0.1157. This is very close to the estimate from Example 2.89: 0.1172. Exercise 2.90 Three US adults are randomly selected. The probability a single adult is between 180 and 185 cm is 0.1157.59 (a) What is the probability that all three are between 180 and 185 cm tall? (b) What is the probability that none are between 180 and 185 cm? Example 2.91 What is the probability that a randomly selected person is exactly 180 cm? Assume you can measure perfectly. This probability is zero. A person might be close to 180 cm, but not exactly 180 cm tall. This also makes sense with the de nition of probability as area; there is no area captured between 180 cm and 180 cm. Exercise 2.92 Suppose a person's height is rounded to the nearest centimeter. Is there a chance that a random person's measured height will be 180 cm?60 59Brief answers: (a) $0.1157 X 0.1157 X 0.1157 = 0.0015$. (b) $(1 - 0.1157)^3 = 0.692$ 60This has positive probability. Anyone between 179.5 cm and 180.5 cm will have a measured height of 180 cm. 
This is probably a more realistic scenario to encounter in practice than the exact measurement in Example 2.91.
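Software computes areas under a density curve for us. As an illustration only, the sketch below uses SciPy and a hypothetical normal density with mean 177 cm and standard deviation 7 cm (these parameters are not from the height data in the text) to show how such probabilities are obtained:

```python
from scipy.stats import norm

# Hypothetical density: heights ~ Normal(mean = 177 cm, sd = 7 cm). These are
# illustrative values, not the empirical density fit to the 3 million heights.
height = norm(loc=177, scale=7)

# Probability = area under the density curve between 180 and 185 cm
print(round(height.cdf(185) - height.cdf(180), 4))

# The area between 180 and 180 is zero: a continuous variable takes any single
# exact value with probability 0.
print(height.cdf(180) - height.cdf(180))  # 0.0
```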
2.1: Defining Probability 2.1 True or false. Determine if the statements below are true or false, and explain your reasoning. 1. (a) If a fair coin is tossed many times and the last eight tosses are all heads, then the chance that the next toss will be heads is somewhat less than 50%. 2. (b) Drawing a face card (jack, queen, or king) and drawing a red card from a full deck of playing cards are mutually exclusive events. 3. (c) Drawing a face card and drawing an ace from a full deck of playing cards are mutually exclusive events. 2.2 Roulette wheel. The game of roulette involves spinning a wheel with 38 slots: 18 red, 18 black, and 2 green. A ball is spun onto the wheel and will eventually land in a slot, where each slot has an equal chance of capturing the ball.61 1. (a) You watch a roulette wheel spin 3 consecutive times and the ball lands on a red slot each time. What is the probability that the ball will land on a red slot on the next spin? 2. (b) You watch a roulette wheel spin 300 consecutive times and the ball lands on a red slot each time. What is the probability that the ball will land on a red slot on the next spin? 3. (c) Are you equally con dent of your answers to parts (a) and (b)? Why or why not? 2.3 Four games, one winner. Below are four versions of the same game. Your archnemisis gets to pick the version of the game, and then you get to choose how many times to ip a coin: 10 times or 100 times. Identify how many coin ips you should choose for each version of the game. Explain your reasoning. 1. (a) If the proportion of heads is larger than 0.60, you win \$1. 2. (b) If the proportion of heads is larger than 0.40, you win \$1. 3. (c) If the proportion of heads is between 0.40 and 0.60, you win \$1. 4. (d) If the proportion of heads is smaller than 0.30, you win \$1. 2.4 Backgammon. Backgammon is a board game for two players in which the playing pieces are moved according to the roll of two dice. Players win by removing all of their pieces from the board, so it is usually good to roll high numbers. You are playing backgammon with a friend and you roll two 6s in your rst roll and two 6s in your second roll. Your friend rolls two 3s in his rst roll and again in his second row. Your friend claims that you are cheating, because rolling double 6s twice in a row is very unlikely. Using probability, show that your rolls were just as likely as his. 2.5 Coin fips. If you ip a fair coin 10 times, what is the probability of (a) getting all tails? (b) getting all heads? (c) getting at least one tails? 2.6 Dice rolls. If you roll a pair of fair dice, what is the probability of (a) getting a sum of 1? (b) getting a sum of 5? (c) getting a sum of 12? 61Photo by H�akan Dahlstrom on Flickr, Roulette wheel. 2.7 Swing voters. A 2012 Pew Research survey asked 2,373 randomly sampled registered voters their political affiliation (Republican, Democrat, or Independent) and whether or not they identify as swing voters. 35% of respondents identi ed as Independent, 23% identi ed as swing voters, and 11% identi ed as both.62 1. (a) Are being Independent and being a swing voter disjoint, i.e. mutually exclusive? 2. (b) Draw a Venn diagram summarizing the variables and their associated probabilities. 3. (c) What percent of voters are Independent but not swing voters? 4. (d) What percent of voters are Independent or swing voters? 5. (e) What percent of voters are neither Independent nor swing voters? 6. 
(f) Is the event that someone is a swing voter independent of the event that someone is a political Independent? 2.8 Poverty and language. The American Community Survey is an ongoing survey that provides data every year to give communities the current information they need to plan investments and services. The 2010 American Community Survey estimates that 14.6% of Americans live below the poverty line, 20.7% speak a language other that English at home, and 4.2% fall into both categories.63 1. (a) Are living below the poverty line and speaking a language other than English at home disjoint? 2. (b) Draw a Venn diagram summarizing the variables and their associated probabilities. 3. (c) What percent of Americans live below the poverty line and only speak English at home? 4. (d) What percent of Americans live below the poverty line or speak a language other than English at home? 5. (e) What percent of Americans live above the poverty line and only speak English at home? 6. (f) Is the event that someone lives below the poverty line independent of the event that the person speaks a language other than English at home? 2.9 Disjoint vs. independent. In parts (a) and (b), identify whether the events are disjoint, independent, or neither (events cannot be both disjoint and independent). 1. (a) You and a randomly selected student from your class both earn A's in this course. 2. (b) You and your class study partner both earn A's in this course. 3. (c) If two events can occur at the same time, must they be dependent? 2.10 Guessing on an exam. In a multiple choice exam, there are 5 questions and 4 choices for each question (a, b, c, d). Nancy has not studied for the exam at all and decides to randomly guess the answers. What is the probability that: 1. (a) the first question she gets right is the 5th question? 2. (b) she gets all of the questions right? 3. (c) she gets at least one question right? 62Pew Research Center, With Voters Focused on Economy, Obama Lead Narrows, data collected between April 4-15, 2012. 63U.S. Census Bureau, 2010 American Community Survey 1-Year Estimates, Characteristics of People by Language Spoken at Home. 2.11 Educational attainment of couples. The table below shows the distribution of education level attained by US residents by gender based on data collected during the 2010 American Community Survey.64 Gender Male Female Less than 9th grade 9th to 12th grade, no diploma High school graduate, GED, or alternative Some college, no degree Associate's degree Bachelor's degree Graduate or professional degree 0.06 0.10 0.30 0.22 0.06 0.16 0.09 0.06 0.09 0.20 0.24 0.08 0.17 0.09 Total 1.00 1.00 1. (a) What is the probability that a randomly chosen man has at least a Bachelor's degree? 2. (b) What is the probability that a randomly chosen woman has at least a Bachelor's degree? 3. (c) What is the probability that a man and a woman getting married both have at least a Bachelor's degree? Note any assumptions you must make to answer this question. 4. (d) If you made an assumption in part (c), do you think it was reasonable? If you didn't make an assumption, double check your earlier answer and then return to this part. 2.12 School absences. Data collected at elementary schools in DeKalb County, GA suggest that each year roughly 25% of students miss exactly one day of school, 15% miss 2 days, and 28% miss 3 or more days due to sickness.65 1. (a) What is the probability that a student chosen at random doesn't miss any days of school due to sickness this year? 2. 
(b) What is the probability that a student chosen at random misses no more than one day?
(c) What is the probability that a student chosen at random misses at least one day?
(d) If a parent has two kids at a DeKalb County elementary school, what is the probability that neither kid will miss any school? Note any assumption you must make to answer this question.
(e) If a parent has two kids at a DeKalb County elementary school, what is the probability that both kids will miss some school, i.e. at least one day? Note any assumption you make.
(f) If you made an assumption in part (d) or (e), do you think it was reasonable? If you didn't make any assumptions, double check your earlier answers.

2.13 Grade distributions. Each row in the table below is a proposed grade distribution for a class. Identify each as a valid or invalid probability distribution, and explain your reasoning.

Grades:   A      B      C      D      F
(a)       0.3    0.3    0.3    0.2    0.1
(b)       0      0      1      0      0
(c)       0.3    0.3    0.3    0      0
(d)       0.3    0.5    0.2    0.1    -0.1
(e)       0.2    0.4    0.2    0.1    0.1
(f)       0      -0.1   1.1    0      0

64U.S. Census Bureau, 2010 American Community Survey 1-Year Estimates, Educational Attainment.
65S.S. Mizan et al. "Absence, Extended Absence, and Repeat Tardiness Related to Asthma Status among Elementary School Children". In: Journal of Asthma 48.3 (2011), pp. 228-234.

2.14 Weight and health coverage, Part I. The Behavioral Risk Factor Surveillance System (BRFSS) is an annual telephone survey designed to identify risk factors in the adult population and report emerging health trends. The following table summarizes two variables for the respondents: weight status using body mass index (BMI) and health coverage, which describes whether each respondent had health insurance.66

Health Coverage   Neither overweight nor obese (BMI < 25)   Overweight (\(25 \le BMI < 30\))   Obese (BMI \(\ge\) 30)   Total
Yes               134,801                                   141,699                            107,301                  383,801
No                15,098                                    15,327                             14,412                   44,837
Total             149,899                                   157,026                            121,713                  428,638

(a) If we draw one individual at random, what is the probability that the respondent is overweight and doesn't have health coverage?
(b) If we draw one individual at random, what is the probability that the respondent is overweight or doesn't have health coverage?

2.2: Conditional Probability I and 2.3: Conditional Probability II

2.15 Joint and conditional probabilities. P(A) = 0.3, P(B) = 0.7
(a) Can you compute P(A and B) if you only know P(A) and P(B)?
(b) Assuming that events A and B arise from independent random processes,
i. what is P(A and B)?
ii. what is P(A or B)?
iii. what is P(A|B)?
(c) If we are given that P(A and B) = 0.1, are the random variables giving rise to events A and B independent?
(d) If we are given that P(A and B) = 0.1, what is P(A|B)?

2.16 PB & J. Suppose 80% of people like peanut butter, 89% like jelly, and 78% like both. Given that a randomly sampled person likes peanut butter, what's the probability that he also likes jelly?

2.17 Global warming. A 2010 Pew Research poll asked 1,306 Americans "From what you've read and heard, is there solid evidence that the average temperature on earth has been getting warmer over the past few decades, or not?".
The table below shows the distribution of responses by party and ideology, where the counts have been replaced with relative frequencies.67

Party and Ideology        Earth is warming   Not warming   Don't Know / Refuse   Total
Conservative Republican   0.11               0.20          0.02                  0.33
Mod/Lib Republican        0.06               0.06          0.01                  0.13
Mod/Cons Democrat         0.25               0.07          0.02                  0.34
Liberal Democrat          0.18               0.01          0.01                  0.20
Total                     0.60               0.34          0.06                  1.00

66Office of Surveillance, Epidemiology, and Laboratory Services Behavioral Risk Factor Surveillance System, BRFSS 2010 Survey Data.
67Pew Research Center, Majority of Republicans No Longer See Evidence of Global Warming, data collected on October 27, 2010.

(a) What is the probability that a randomly chosen respondent believes the earth is warming or is a liberal Democrat?
(b) What is the probability that a randomly chosen respondent believes the earth is warming given that he is a liberal Democrat?
(c) What is the probability that a randomly chosen respondent believes the earth is warming given that he is a conservative Republican?
(d) Does it appear that whether or not a respondent believes the earth is warming is independent of their party and ideology? Explain your reasoning.
(e) What is the probability that a randomly chosen respondent is a moderate/liberal Republican given that he does not believe that the earth is warming?

2.18 Weight and health coverage, Part II. Exercise 2.14 introduced a contingency table summarizing the relationship between weight status, which is determined based on body mass index (BMI), and health coverage for a sample of 428,638 Americans. In the table below, the counts have been replaced by relative frequencies (probability estimates).

Health Coverage   Neither overweight nor obese (BMI < 25)   Overweight (\(25 \le BMI < 30\))   Obese (BMI \(\ge\) 30)   Total
Yes               0.3145                                    0.3306                             0.2503                   0.8954
No                0.0352                                    0.0358                             0.0336                   0.1046
Total             0.3497                                    0.3664                             0.2839                   1.0000

(a) What is the probability that a randomly chosen individual is obese?
(b) What is the probability that a randomly chosen individual is obese given that he has health coverage?
(c) What is the probability that a randomly chosen individual is obese given that he doesn't have health coverage?
(d) Do being overweight and having health coverage appear to be independent?

2.19 Burger preferences. A 2010 SurveyUSA poll asked 500 Los Angeles residents, "What is the best hamburger place in Southern California? Five Guys Burgers? In-N-Out Burger? Fat Burger? Tommy's Hamburgers? Umami Burger? Or somewhere else?" The distribution of responses by gender is shown below.68

Best hamburger place   Male   Female   Total
Five Guys Burgers      5      6        11
In-N-Out Burger        162    181      343
Fat Burger             10     12       22
Tommy's Hamburgers     27     27       54
Umami Burger           5      1        6
Other                  26     20       46
Not Sure               13     5        18
Total                  248    252      500

(a) What is the probability that a randomly chosen male likes In-N-Out the best?
(b) What is the probability that a randomly chosen female likes In-N-Out the best?
(c) What is the probability that a man and a woman who are dating both like In-N-Out the best? Note any assumption you make and evaluate whether you think that assumption is reasonable.
(d) What is the probability that a randomly chosen person likes Umami best or that person is female?

68SurveyUSA, Results of SurveyUSA News Poll #17718, data collected on December 2, 2010.

2.20 Assortative mating.
Assortative mating is a nonrandom mating pattern where individuals with similar genotypes and/or phenotypes mate with one another more frequently than what would be expected under a random mating pattern. Researchers studying this topic collected data on eye colors of 204 Scandinavian men and their female partners. The table below summarizes the results. For simplicity, we only include heterosexual relationships in this exercise.69

Self (male)   Partner (female): Blue   Brown   Green   Total
Blue          78                       23      13      114
Brown         19                       23      12      54
Green         11                       9       16      36
Total         108                      55      41      204

(a) What is the probability that a randomly chosen male respondent or his partner has blue eyes?
(b) What is the probability that a randomly chosen male respondent with blue eyes has a partner with blue eyes?
(c) What is the probability that a randomly chosen male respondent with brown eyes has a partner with blue eyes? What about the probability of a randomly chosen male respondent with green eyes having a partner with blue eyes?
(d) Does it appear that the eye colors of male respondents and their partners are independent? Explain your reasoning.

2.21 Drawing box plots. After an introductory statistics course, 80% of students can successfully construct box plots. Of those who can construct box plots, 86% passed, while only 65% of those students who could not construct box plots passed.
(a) Construct a tree diagram of this scenario.
(b) Calculate the probability that a student is able to construct a box plot if it is known that he passed.

2.22 Predisposition for thrombosis. A genetic test is used to determine if people have a predisposition for thrombosis, which is the formation of a blood clot inside a blood vessel that obstructs the flow of blood through the circulatory system. It is believed that 3% of people actually have this predisposition. The genetic test is 99% accurate if a person actually has the predisposition, meaning that the probability of a positive test result when a person actually has the predisposition is 0.99. The test is 98% accurate if a person does not have the predisposition. What is the probability that a randomly selected person who tests positive for the predisposition by the test actually has the predisposition?

2.23 HIV in Swaziland. Swaziland has the highest HIV prevalence in the world: 25.9% of this country's population is infected with HIV.70 The ELISA test is one of the first and most accurate tests for HIV. For those who carry HIV, the ELISA test is 99.7% accurate. For those who do not carry HIV, the test is 92.6% accurate. If an individual from Swaziland has tested positive, what is the probability that he carries HIV?

2.24 Exit poll. Edison Research gathered exit poll results from several sources for the Wisconsin recall election of Scott Walker. They found that 53% of the respondents voted in favor of Scott Walker. Additionally, they estimated that of those who did vote in favor for Scott Walker, 37% had a college degree, while 44% of those who voted against Scott Walker had a college degree. Suppose we randomly sampled a person who participated in the exit poll and found that he had a college degree. What is the probability that he voted in favor of Scott Walker?71

69B. Laeng et al. "Why do blue-eyed men prefer women with the same eye color?" In: Behavioral Ecology and Sociobiology 61.3 (2007), pp. 371-384.
70Source: CIA Factbook, Country Comparison: HIV/AIDS - Adult Prevalence Rate.
71New York Times, Wisconsin recall exit polls.

2.25 It's never lupus.
Lupus is a medical phenomenon where antibodies that are supposed to attack foreign cells to prevent infections instead see plasma proteins as foreign bodies, leading to a high risk of blood clotting. It is believed that 2% of the population suffer from this disease. The test is 98% accurate if a person actually has the disease. The test is 74% accurate if a person does not have the disease. There is a line from the Fox television show House that is often used after a patient tests positive for lupus: "It's never lupus." Do you think there is truth to this statement? Use appropriate probabilities to support your answer.

2.26 Twins. About 30% of human twins are identical, and the rest are fraternal. Identical twins are necessarily the same sex: half are males and the other half are females. One-quarter of fraternal twins are both male, one-quarter both female, and one-half are mixes: one male, one female. You have just become a parent of twins and are told they are both girls. Given this information, what is the probability that they are identical?

2.4: Sampling from a Small Population

2.27 Urns and marbles, Part I. Imagine you have an urn containing 5 red, 3 blue, and 2 orange marbles in it.
(a) What is the probability that the first marble you draw is blue?
(b) Suppose you drew a blue marble in the first draw. If drawing with replacement, what is the probability of drawing a blue marble in the second draw?
(c) Suppose you instead drew an orange marble in the first draw. If drawing with replacement, what is the probability of drawing a blue marble in the second draw?
(d) If drawing with replacement, what is the probability of drawing two blue marbles in a row?
(e) When drawing with replacement, are the draws independent? Explain.

2.28 Socks in a drawer. In your sock drawer you have 4 blue, 5 gray, and 3 black socks. Half asleep one morning you grab 2 socks at random and put them on. Find the probability you end up wearing
(a) 2 blue socks
(b) no gray socks
(c) at least 1 black sock
(d) a green sock
(e) matching socks

2.29 Urns and marbles, Part II. Imagine you have an urn containing 5 red, 3 blue, and 2 orange marbles.
(a) Suppose you draw a marble and it is blue. If drawing without replacement, what is the probability the next is also blue?
(b) Suppose you draw a marble and it is orange, and then you draw a second marble without replacement. What is the probability this second marble is blue?
(c) If drawing without replacement, what is the probability of drawing two blue marbles in a row?
(d) When drawing without replacement, are the draws independent? Explain.

2.30 Books on a bookshelf. The table below shows the distribution of books on a bookcase based on whether they are nonfiction or fiction and hardcover or paperback.

             Format
             Hardcover   Paperback   Total
Fiction      13          59          72
Nonfiction   15          8           23
Total        28          67          95

(a) Find the probability of drawing a hardcover book first then a paperback fiction book second when drawing without replacement.
(b) Determine the probability of drawing a fiction book first and then a hardcover book second, when drawing without replacement.
(c) Calculate the probability of the scenario in part (b), except this time complete the calculations under the scenario where the first book is placed back on the bookcase before randomly drawing the second book.
(d) The final answers to parts (b) and (c) are very similar. Explain why this is the case.

2.31 Student outfits.
In a classroom with 24 students, 7 students are wearing jeans, 4 are wearing shorts, 8 are wearing skirts, and the rest are wearing leggings. If we randomly select 3 students without replacement, what is the probability that one of the selected students is wearing leggings and the other two are wearing jeans? Note that these are mutually exclusive clothing options.

2.32 The birthday problem. Suppose we pick three people at random. For each of the following questions, ignore the special case where someone might be born on February 29th, and assume that births are evenly distributed throughout the year.
(a) What is the probability that the first two people share a birthday?
(b) What is the probability that at least two people share a birthday?

2.5: Random Variables

2.33 College smokers. At a university, 13% of students smoke.
(a) Calculate the expected number of smokers in a random sample of 100 students from this university.
(b) The university gym opens at 9am on Saturday mornings. One Saturday morning at 8:55am there are 27 students outside the gym waiting for it to open. Should you use the same approach from part (a) to calculate the expected number of smokers among these 27 students?

2.34 Card game. Consider the following card game with a well-shuffled deck of cards. If you draw a red card, you win nothing. If you get a spade, you win \$5. For any club, you win \$10 plus an extra \$20 for the ace of clubs.
(a) Create a probability model for the amount you win at this game. Also, find the expected winnings for a single game and the standard deviation of the winnings.
(b) What is the maximum amount you would be willing to pay to play this game? Explain.

2.35 Another card game. In a new card game, you start with a well-shuffled full deck and draw 3 cards without replacement. If you draw 3 hearts, you win \$50. If you draw 3 black cards, you win \$25. For any other draws, you win nothing.
(a) Create a probability model for the amount you win at this game, and find the expected winnings. Also compute the standard deviation of this distribution.
(b) If the game costs \$5 to play, what would be the expected value and standard deviation of the net profit (or loss)? (Hint: profit = winnings - cost; X - 5)
(c) If the game costs \$5 to play, should you play this game? Explain.

2.36 Is it worth it? Andy is always looking for ways to make money fast. Lately, he has been trying to make money by gambling. Here is the game he is considering playing: The game costs \$2 to play. He draws a card from a deck. If he gets a number card (2-10), he wins nothing. For any face card (jack, queen or king), he wins \$3. For any ace, he wins \$5, and he wins an extra \$20 if he draws the ace of clubs.
(a) Create a probability model and find Andy's expected profit per game.
(b) Would you recommend this game to Andy as a good way to make money? Explain.

2.37 Portfolio return. A portfolio's value increases by 18% during a financial boom and by 9% during normal times. It decreases by 12% during a recession. What is the expected return on this portfolio if each scenario is equally likely?

2.38 A game of roulette, Part I. The game of roulette involves spinning a wheel with 38 slots: 18 red, 18 black, and 2 green. A ball is spun onto the wheel and will eventually land in a slot, where each slot has an equal chance of capturing the ball. Gamblers can place bets on red or black. If the ball lands on their color, they double their money. If it lands on another color, they lose their money.
Suppose you bet \$1 on red. What's the expected value and standard deviation of your winnings?

2.39 A game of roulette, Part II. Exercise 2.38 describes winnings on a game of roulette.
(a) Suppose you play roulette and bet \$3 on a single round. What is the expected value and standard deviation of your total winnings?
(b) Suppose you bet \$1 in three different rounds. What is the expected value and standard deviation of your total winnings?
(c) How do your answers to parts (a) and (b) compare? What does this say about the riskiness of the two games?

2.40 Baggage fees. An airline charges the following baggage fees: \$25 for the first bag and \$35 for the second. Suppose 54% of passengers have no checked luggage, 34% have one piece of checked luggage and 12% have two pieces. We suppose a negligible portion of people check more than two bags.
(a) Build a probability model, compute the average revenue per passenger, and compute the corresponding standard deviation.
(b) About how much revenue should the airline expect for a flight of 120 passengers? With what standard deviation? Note any assumptions you make and if you think they are justified.

2.41 Dodgers vs. Padres. You and your friend decide to bet on the Major League Baseball game happening one evening between the Los Angeles Dodgers and the San Diego Padres. Suppose current statistics indicate that the Dodgers have a 0.46 probability of winning this game against the Padres. If your friend bets you \$5 that the Dodgers will win, how much would you need to bet on the Padres to make this a fair game?

2.42 Selling on Ebay. Marcie has been tracking the following two items on Ebay:
• A textbook that sells for an average of \$110 with a standard deviation of \$4.
• Mario Kart for the Nintendo Wii, which sells for an average of \$38 with a standard deviation of \$5.
(a) Marcie wants to sell the video game and buy the textbook. How much net money (profits - losses) would she expect to make or spend? Also compute the standard deviation of how much she would make or spend.
(b) Lucy is selling the textbook on Ebay for a friend, and her friend is giving her a 10% commission (Lucy keeps 10% of the revenue). How much money should she expect to make? With what standard deviation?

2.43 Cost of breakfast. Sally gets a cup of coffee and a muffin every day for breakfast from one of the many coffee shops in her neighborhood. She picks a coffee shop each morning at random and independently of previous days. The average price of a cup of coffee is \$1.40 with a standard deviation of 30¢ (\$0.30), the average price of a muffin is \$2.50 with a standard deviation of 15¢, and the two prices are independent of each other.
(a) What is the mean and standard deviation of the amount she spends on breakfast daily?
(b) What is the mean and standard deviation of the amount she spends on breakfast weekly (7 days)?

2.44 Ice cream. Ice cream usually comes in 1.5 quart boxes (48 fluid ounces), and ice cream scoops hold about 2 ounces. However, there is some variability in the amount of ice cream in a box as well as the amount of ice cream scooped out. We represent the amount of ice cream in the box as X and the amount scooped out as Y. Suppose these random variables have the following means, standard deviations, and variances:

     mean   SD     variance
X    48     1      1
Y    2      0.25   0.0625

(a) An entire box of ice cream, plus 3 scoops from a second box is served at a party. How much ice cream do you expect to have been served at this party?
What is the standard deviation of the amount of ice cream served?
(b) How much ice cream would you expect to be left in the box after scooping out one scoop of ice cream? That is, find the expected value of X - Y. What is the standard deviation of the amount left in the box?
(c) Using the context of this exercise, explain why we add variances when we subtract one random variable from another.

2.6: Continuous Distributions

2.45 Cat weights. The histogram shown below represents the weights (in kg) of 47 female and 97 male cats.72
(a) What fraction of these cats weigh less than 2.5 kg?
(b) What fraction of these cats weigh between 2.5 and 2.75 kg?
(c) What fraction of these cats weigh between 2.75 and 3.5 kg?

72W. N. Venables and B. D. Ripley. Modern Applied Statistics with S. Fourth Edition. http://www.stats.ox.ac.uk/pub/MASS4. New York: Springer, 2002.

2.46 Income and gender. The relative frequency table below displays the distribution of annual total personal income (in 2009 inflation-adjusted dollars) for a representative sample of 96,420,486 Americans. These data come from the American Community Survey for 2005-2009. This sample is comprised of 59% males and 41% females.73

Income                   Total
\$1 to \$9,999 or loss    2.2%
\$10,000 to \$14,999      4.7%
\$15,000 to \$24,999      15.8%
\$25,000 to \$34,999      18.3%
\$35,000 to \$49,999      21.2%
\$50,000 to \$64,999      13.9%
\$65,000 to \$74,999      5.8%
\$75,000 to \$99,999      8.4%
\$100,000 or more         9.7%

(a) Describe the distribution of total personal income.
(b) What is the probability that a randomly chosen US resident makes less than \$50,000 per year?
(c) What is the probability that a randomly chosen US resident makes less than \$50,000 per year and is female? Note any assumptions you make.
(d) The same data source indicates that 71.8% of females make less than \$50,000 per year. Use this value to determine whether or not the assumption you made in part (c) is valid.

Contributors and Attributions
David M Diez (Google/YouTube), Christopher D Barr (Harvard School of Public Health), Mine Çetinkaya-Rundel (Duke University)
• 3.1: Normal Distribution
Among all the distributions we see in practice, one is overwhelmingly the most common. The symmetric, unimodal, bell curve is ubiquitous throughout statistics. Indeed, it is so common that people often know it as the normal curve or normal distribution.
• 3.2: Evaluating the Normal Approximation
Many processes can be well approximated by the normal distribution. While using a normal model can be extremely convenient and helpful, it is important to remember normality is always an approximation. Testing the appropriateness of the normal assumption is a key step in many data analyses.
• 3.3: Geometric Distribution (Special Topic)
How long should we expect to flip a coin until it turns up heads? Or how many times should we expect to roll a die until we get a 1? These questions can be answered using the geometric distribution. We first formalize each trial - such as a single coin flip or die toss - using the Bernoulli distribution, and then we combine these with our tools from probability to construct the geometric distribution.
• 3.4: Binomial Distribution (Special Topic)
The binomial distribution describes the probability of having exactly k successes in n independent Bernoulli trials with probability of a success p.
• 3.5: More Discrete Distributions (Special Topic)
The "Negative binomial distribution" and "Poisson distribution" are discussed in this section.
• 3.E: Distributions of Random Variables (Exercises)
Exercises for Chapter 3 of the "OpenIntro Statistics" textmap by Diez, Barr and Çetinkaya-Rundel.

03: Distributions of Random Variables

Among all the distributions we see in practice, one is overwhelmingly the most common. The symmetric, unimodal, bell curve is ubiquitous throughout statistics. Indeed, it is so common that people often know it as the normal curve or normal distribution, shown in Figure $1$. It is also known as the Gaussian distribution after Carl Friedrich Gauss, the first person to formalize its mathematical expression. Variables such as SAT scores and heights of US adult males closely follow the normal distribution. Many variables are nearly normal, but none are exactly normal. Thus the normal distribution, while not perfect for any single problem, is very useful for a variety of problems. We will use it in data exploration and to solve important problems in statistics.

Normal Distribution Model

The normal distribution model always describes a symmetric, unimodal, bell shaped curve. However, these curves can look different depending on the details of the model. Specifically, the normal distribution model can be adjusted using two parameters: mean and standard deviation. As you can probably guess, changing the mean shifts the bell curve to the left or right, while changing the standard deviation stretches or constricts the curve. Figure $2$ shows the normal distribution with mean 0 and standard deviation 1 in the left panel and the normal distribution with mean 19 and standard deviation 4 in the right panel. Figure $3$ shows these distributions on the same axis. If a normal distribution has mean $\mu$ and standard deviation $\sigma$, we may write the distribution as $N (\mu, \sigma)$. The two distributions in Figure $3$ can be written as $N (\mu = 0, \sigma = 1)$ and $N (\mu = 19, \sigma = 4).$ Because the mean and standard deviation describe a normal distribution exactly, they are called the distribution's parameters.

Example $1$
Write down the short-hand for a normal distribution with
1. mean 5 and standard deviation 3,
2.
mean -100 and standard deviation 10, and
3. mean 2 and standard deviation 9.

Solution
1. N($\mu$ = 5, $\sigma$ = 3)
2. N($\mu$ = -100, $\sigma$ = 10)
3. N($\mu$ = 2, $\sigma$ = 9)

Standardizing with Z Scores

Example $2$
Table $1$ shows the mean and standard deviation for total scores on the SAT and ACT. The distribution of SAT and ACT scores are both nearly normal. Suppose Ann scored 1800 on her SAT and Tom scored 24 on his ACT. Who performed better?

Table $1$: Mean and standard deviation for the SAT and ACT.
        SAT    ACT
Mean    1500   21
SD      300    5

Solution
We use the standard deviation as a guide. Ann is 1 standard deviation above average on the SAT: 1500 + 300 = 1800. Tom is 0.6 standard deviations above the mean on the ACT: 21 + 0.6 × 5 = 24. In Figure $4$, we can see that Ann tends to do better with respect to everyone else than Tom did, so her score was better.

Example $2$ used a standardization technique called a Z score, a method most commonly employed for nearly normal observations but that may be used with any distribution. The Z score of an observation is defined as the number of standard deviations it falls above or below the mean. If the observation is one standard deviation above the mean, its Z score is 1. If it is 1.5 standard deviations below the mean, then its Z score is -1.5.

Definition: The Z score
The Z score of an observation is the number of standard deviations it falls above or below the mean. We compute the Z score for an observation $x$ that follows a distribution with mean $\mu$ and standard deviation $\sigma$ using
$Z = \dfrac {x - \mu}{\sigma}$

Using $\mu_{SAT}$ = 1500, $\sigma_{SAT}$ = 300, and $x_{Ann}$ = 1800, we find Ann's Z score in Example $2$:
$Z_{Ann} = \dfrac {x_{Ann} - \mu_{SAT}}{\sigma_{SAT}} = \dfrac {1800 - 1500}{300} = 1 \nonumber$

Exercise $1$
Use Tom's ACT score, 24, along with the ACT mean and standard deviation to compute his Z score.
Answer
$Z_{Tom} = \dfrac {x_{Tom} - \mu_{ACT}}{\sigma_{ACT}} = \dfrac {24 - 21}{5} = 0.6$

Observations above the mean always have positive Z scores while those below the mean have negative Z scores. If an observation is equal to the mean (e.g. SAT score of 1500 in Example $2$), then the Z score is 0.

Exercise $2$
Let X represent a random variable from N($\mu$ = 3, $\sigma$ = 2), and suppose we observe x = 5.19.
1. Find the Z score of x.
2. Use the Z score to determine how many standard deviations above or below the mean x falls.
Answer a
Its Z score is given by $Z = \dfrac {x - \mu}{\sigma} = \dfrac {5.19 - 3}{2} = \dfrac {2.19}{2} = 1.095$.
Answer b
The observation x is 1.095 standard deviations above the mean. We know it must be above the mean since Z is positive.

Exercise $3$
Head lengths of brushtail possums follow a nearly normal distribution with mean 92.6 mm and standard deviation 3.6 mm. Compute the Z scores for possums with head lengths of 95.4 mm and 85.8 mm.
Answer
For $x_1 = 95.4$ mm: $Z_1 = \dfrac {x_1 - \mu}{\sigma} = \dfrac {95.4 - 92.6}{3.6} = 0.78$. For $x_2 = 85.8$ mm: $Z_2 = \dfrac {85.8 - 92.6}{3.6} = -1.89$.

We can use Z scores to roughly identify which observations are more unusual than others. One observation $x_1$ is said to be more unusual than another observation $x_2$ if the absolute value of its Z score is larger than the absolute value of the other observation's Z score: $|Z_1| > |Z_2|$. This technique is especially insightful when a distribution is symmetric.

Exercise $4$
Which of the observations in Exercise $3$ is more unusual?
Answer
Because the absolute value of the Z score for the second observation is larger than that of the first, the second observation has a more unusual head length.

Normal Probability Table

Example $4$
Ann from Example $2$ earned a score of 1800 on her SAT with a corresponding Z = 1. She would like to know what percentile she falls in among all SAT test-takers.
Solution
Ann's percentile is the percentage of people who earned a lower SAT score than Ann. We shade the area representing those individuals in Figure $5$. The total area under the normal curve is always equal to 1, and the proportion of people who scored below Ann on the SAT is equal to the area shaded in Figure $5$: 0.8413. In other words, Ann is in the 84th percentile of SAT takers.

We can use the normal model to find percentiles. A normal probability table, which lists Z scores and corresponding percentiles, can be used to identify a percentile based on the Z score (and vice versa). Statistical software can also be used. A normal probability table is given in Appendix B.1 on page 407 and abbreviated in Table $1$. We use this table to identify the percentile corresponding to any particular Z score. For instance, the percentile of Z = 0.43 is shown in row 0.4 and column 0.03 in Table $1$: 0.6664, or the 66.64th percentile. Generally, we round Z to two decimals, identify the proper row in the normal probability table up through the first decimal, and then determine the column representing the second decimal value. The intersection of this row and column is the percentile of the observation. We can also find the Z score associated with a percentile. For example, to identify Z for the 80th percentile, we look for the value closest to 0.8000 in the middle portion of the table: 0.7995. We determine the Z score for the 80th percentile by combining the row and column Z values: 0.84.

Exercise $5$
Determine the proportion of SAT test takers who scored better than Ann on the SAT.
Answer
If 84% had lower scores than Ann, the number of people who had better scores must be 16%. (Generally ties are ignored when the normal model, or any other continuous distribution, is used.)

Normal Probability Examples

Cumulative SAT scores are approximated well by a normal model, $N(\mu = 1500, \sigma = 300)$.

Example $5$
Shannon is a randomly selected SAT taker, and nothing is known about Shannon's SAT aptitude. What is the probability Shannon scores at least 1630 on her SATs?
Solution
First, always draw and label a picture of the normal distribution. (Drawings need not be exact to be useful.) We are interested in the chance she scores above 1630, so we shade this upper tail:

Table $1$: A section of the normal probability table. The percentile for a normal random variable with Z = 0.43 has been highlighted, and the percentile closest to 0.8000 has also been highlighted.
           Second decimal place of Z
Z     0.00    0.01    0.02    0.03    0.04    0.05    0.06    0.07    0.08    0.09
0.0   0.5000  0.5040  0.5080  0.5120  0.5160  0.5199  0.5239  0.5279  0.5319  0.5359
0.1   0.5398  0.5438  0.5478  0.5517  0.5557  0.5596  0.5636  0.5675  0.5714  0.5753
0.2   0.5793  0.5832  0.5871  0.5910  0.5948  0.5987  0.6026  0.6064  0.6103  0.6141
0.3   0.6179  0.6217  0.6255  0.6293  0.6331  0.6368  0.6406  0.6443  0.6480  0.6517
0.4   0.6554  0.6591  0.6628  0.6664  0.6700  0.6736  0.6772  0.6808  0.6844  0.6879
0.5   0.6915  0.6950  0.6985  0.7019  0.7054  0.7088  0.7123  0.7157  0.7190  0.7224
0.6   0.7257  0.7291  0.7324  0.7357  0.7389  0.7422  0.7454  0.7486  0.7517  0.7549
0.7   0.7580  0.7611  0.7642  0.7673  0.7704  0.7734  0.7764  0.7794  0.7823  0.7852
0.8   0.7881  0.7910  0.7939  0.7967  0.7995  0.8023  0.8051  0.8078  0.8106  0.8133
0.9   0.8159  0.8186  0.8212  0.8238  0.8264  0.8289  0.8315  0.8340  0.8365  0.8389
1.0   0.8413  0.8438  0.8461  0.8485  0.8508  0.8531  0.8554  0.8577  0.8599  0.8621
1.1   0.8643  0.8665  0.8686  0.8708  0.8729  0.8749  0.8770  0.8790  0.8810  0.8830
$\vdots$

The picture shows the mean and the values at 2 standard deviations above and below the mean. The simplest way to find the shaded area under the curve makes use of the Z score of the cutoff value. With $\mu$ = 1500, $\sigma$ = 300, and the cutoff value x = 1630, the Z score is computed as
$Z = \dfrac {x - \mu}{\sigma} = \dfrac {1630 - 1500}{300} = \dfrac{130}{300} = 0.43$
We look up the percentile of Z = 0.43 in the normal probability table shown in Table $1$ or in Appendix B.1 on page 407, which yields 0.6664. However, the percentile describes those who had a Z score lower than 0.43. To find the area above Z = 0.43, we compute one minus the area of the lower tail: The probability Shannon scores at least 1630 on the SAT is 0.3336.

TIP: always draw a picture first and find the Z score second
For any normal probability situation, always always always draw and label the normal curve and shade the area of interest first. The picture will provide an estimate of the probability. After drawing a figure to represent the situation, identify the Z score for the observation of interest.

Exercise $5$
If the probability of Shannon scoring at least 1630 is 0.3336, then what is the probability she scores less than 1630? Draw the normal curve representing this exercise, shading the lower region instead of the upper one.
Answer
We found the probability in Example $5$: 0.6664. A picture for this exercise is represented by the shaded area below "0.6664" in Example 3.9.

Example $6$
Edward earned a 1400 on his SAT. What is his percentile?
Solution
First, a picture is needed. Edward's percentile is the proportion of people who do not get as high as a 1400. These are the scores to the left of 1400. Identifying the mean $\mu$ = 1500, the standard deviation $\sigma$ = 300, and the cutoff for the tail area x = 1400 makes it easy to compute the Z score:
$Z = \dfrac {x - \mu}{\sigma} = \dfrac {1400 - 1500}{300} = - 0.33 \nonumber$
Using the normal probability table, identify the row of -0.3 and column of 0.03, which corresponds to the probability 0.3707. Edward is at the 37th percentile.

Exercise $\PageIndex{6A}$
Use the results of Example $6$ to compute the proportion of SAT takers who did better than Edward. Also draw a new picture.
Answer
If Edward did better than 37% of SAT takers, then about 63% must have done better than him.

TIP: areas to the right
The normal probability table in most books gives the area to the left. If you would like the area to the right, first find the area to the left and then subtract this amount from one.
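For readers who prefer software to table lookups, the two calculations above can be cross-checked with a short sketch. This is an illustrative aside rather than part of the table-based method; it assumes Python 3.8 or later and uses the standard library's NormalDist.

```python
from statistics import NormalDist

# SAT scores are modeled as N(mu = 1500, sigma = 300).
sat = NormalDist(mu=1500, sigma=300)

# Shannon: P(score >= 1630). The table gives the area to the LEFT of Z,
# so subtract the lower-tail area from 1 (Z = 0.43 gives 1 - 0.6664 = 0.3336).
z_shannon = (1630 - 1500) / 300
upper_tail = 1 - sat.cdf(1630)
print(round(z_shannon, 2), round(upper_tail, 4))   # approximately 0.43 and 0.33

# Edward: percentile for a score of 1400 (the table, with Z rounded to -0.33, gives 0.3707).
print(round(sat.cdf(1400), 4))                     # approximately 0.37
```

Small differences between the printed values and the table values come from rounding Z to two decimal places before looking it up.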
Exercise $\PageIndex{6b}$
Stuart earned an SAT score of 2100. Draw a picture for each part.
1. What is his percentile?
2. What percent of SAT takers did better than Stuart?
Answer
Stuart's Z score is $Z = \dfrac{2100 - 1500}{300} = 2$, which corresponds to the 97.72nd percentile, so about 2.28% of SAT takers did better than Stuart.

Exercise $\PageIndex{6C}$
Based on a sample of 100 men (USDA Food Commodity Intake Database), the heights of male adults between the ages 20 and 62 in the US are nearly normal with mean 70.0" and standard deviation 3.3". Mike is 5'7" and Jim is 6'4".
1. What is Mike's height percentile?
2. What is Jim's height percentile?
Also draw one picture for each part.
Answer
First put the heights into inches: 67 and 76 inches. Figures are shown below.
(a) $Z_{Mike} = \dfrac {67-70}{3.3} = -0.91 \rightarrow 0.1814$.
(b) $Z_{Jim} = \dfrac {76-70}{3.3} = 1.82 \rightarrow 0.9656$

The last several problems have focused on finding the probability or percentile for a particular observation. What if you would like to know the observation corresponding to a particular percentile?

Example $7$
Based on a sample of 100 men (USDA Food Commodity Intake Database), the heights of male adults between the ages 20 and 62 in the US are nearly normal with mean 70.0" and standard deviation 3.3". Erik's height is at the 40th percentile. How tall is he?
Solution
As always, first draw the picture. In this case, the lower tail probability is known (0.40), which can be shaded on the diagram. We want to find the observation that corresponds to this value. As a first step in this direction, we determine the Z score associated with the 40th percentile. Because the percentile is below 50%, we know Z will be negative. Looking in the negative part of the normal probability table, we search for the probability inside the table closest to 0.4000. We find that 0.4000 falls in row -0.2 and between columns 0.05 and 0.06. Since it falls closer to 0.05, we take this one: Z = -0.25. Knowing $Z_{Erik} = -0.25$ and the population parameters $\mu$ = 70 and $\sigma$ = 3.3 inches, the Z score formula can be set up to determine Erik's unknown height, labeled $x_{Erik}$:
$-0.25 = Z_{Erik} = \dfrac {x_{Erik} - \mu}{\sigma} = \dfrac {x_{Erik} - 70}{3.3}$
Solving for $x_{Erik}$ yields the height 69.18 inches. That is, Erik is about 5'9" (this is notation for 5-feet, 9-inches).

Exercise $7$
1. What is the 95th percentile for SAT scores?
2. What is the 97.5th percentile of the male heights?
As always with normal probability problems, first draw a picture.
Answer
Remember: draw a picture first, then find the Z score. (We leave the pictures to you.) The Z score can be found by using the percentiles and the normal probability table.
(a) We look for 0.95 in the probability portion (middle part) of the normal probability table, which leads us to row 1.6 and (about) column 0.05, i.e. $Z_{95} = 1.65$. Knowing $Z_{95} = 1.65$, $\mu = 1500$, and $\sigma = 300$, we set up the Z score formula: $1.65 = \dfrac {x_{95} -1500}{300}$. We solve for $x_{95}$: $x_{95} = 1995$.
(b) Similarly, we find $Z_{97.5} = 1.96$, again set up the Z score formula for the heights, and calculate $x_{97.5} = 76.5$.

Example $8$
Based on a sample of 100 men (USDA Food Commodity Intake Database), the heights of male adults between the ages 20 and 62 in the US are nearly normal with mean 70.0" and standard deviation 3.3". What is the adult male height at the 82nd percentile?
Solution
Again, we draw the figure first. Next, we want to find the Z score at the 82nd percentile, which will be a positive value. Looking in the Z table, we find Z falls in row 0.9 and the nearest column is 0.02, i.e. Z = 0.92.
Finally, the height x is found using the Z score formula with the known mean $\mu$, standard deviation $\sigma$, and Z score Z = 0.92:
$0.92 = Z = \dfrac {x -\mu}{\sigma} = \dfrac {x - 70}{3.3} \nonumber$
This yields 73.04 inches or about 6'1" as the height at the 82nd percentile.

Exercise $8$
Based on a sample of 100 men (USDA Food Commodity Intake Database), the heights of male adults between the ages 20 and 62 in the US are nearly normal with mean 70.0" and standard deviation 3.3".
1. What is the probability that a randomly selected male adult is at least 6'2" (74 inches)?
2. What is the probability that a male adult is shorter than 5'9" (69 inches)?
Answer
Numerical answers: (a) 0.1131. (b) 0.3821.

Example $9$
Based on a sample of 100 men (USDA Food Commodity Intake Database), the heights of male adults between the ages 20 and 62 in the US are nearly normal with mean 70.0" and standard deviation 3.3". What is the probability that a random adult male is between 5'9" and 6'2"?
Solution
These heights correspond to 69 inches and 74 inches. First, draw the figure. The area of interest is no longer an upper or lower tail. The total area under the curve is 1. If we find the area of the two tails that are not shaded (from Exercise $8$, these areas are 0.3821 and 0.1131), then we can find the middle area: That is, the probability of being between 5'9" and 6'2" is 0.5048.

Exercise $\PageIndex{9A}$
What percent of SAT takers get between 1500 and 2000?
Answer
This is an abbreviated solution. (Be sure to draw a figure!) First find the percent who get below 1500 and the percent that get above 2000: $Z_{1500} = 0.00 \rightarrow 0.5000$ (area below), $Z_{2000} = 1.67 \rightarrow 0.0475$ (area above). Final answer: 1.0000 - 0.5000 - 0.0475 = 0.4525.

Exercise $\PageIndex{9B}$
What percent of adult males are between 5'5" and 5'7"?
Answer
5'5" is 65 inches. 5'7" is 67 inches. Numerical solution: 1.0000 - 0.0649 - 0.8183 = 0.1168, i.e. 11.68%.

68-95-99.7 Rule

Here, we present a useful rule of thumb for the probability of falling within 1, 2, and 3 standard deviations of the mean in the normal distribution. This will be useful in a wide range of practical settings, especially when trying to make a quick estimate without a calculator or Z table.

Exercise $\PageIndex{10A}$
Use the Z table to confirm that about 68%, 95%, and 99.7% of observations fall within 1, 2, and 3 standard deviations of the mean in the normal distribution, respectively. For instance, first find the area that falls between Z = -1 and Z = 1, which should have an area of about 0.68. Similarly there should be an area of about 0.95 between Z = -2 and Z = 2.
Answer
First draw the pictures. To find the area between Z = -1 and Z = 1, use the normal probability table to determine the areas below Z = -1 and above Z = 1. Next verify the area between Z = -1 and Z = 1 is about 0.68. Repeat this for Z = -2 to Z = 2 and also for Z = -3 to Z = 3.

It is possible for a normal random variable to fall 4, 5, or even more standard deviations from the mean. However, these occurrences are very rare if the data are nearly normal. The probability of being further than 4 standard deviations from the mean is about 1-in-30,000. For 5 and 6 standard deviations, it is about 1-in-3.5 million and 1-in-1 billion, respectively.

Exercise $\PageIndex{10B}$
SAT scores closely follow the normal model with mean $\mu$ = 1500 and standard deviation $\sigma$ = 300.
(a) About what percent of test takers score 900 to 2100?
(b) What percent score between 1500 and 2100?
Answer
(a) 900 and 2100 represent two standard deviations above and below the mean, which means about 95% of test takers will score between 900 and 2100.
(b) Since the normal model is symmetric, half of the test takers from part (a) ($\frac {95\%}{2} = 47.5\%$ of all test takers) will score 900 to 1500 while 47.5% score between 1500 and 2100.

Contributors and Attributions
• David M Diez (Google/YouTube), Christopher D Barr (Harvard School of Public Health), Mine Çetinkaya-Rundel (Duke University)
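The percentile-to-observation direction and the 68-95-99.7 rule can be checked the same way. The sketch below is illustrative only; it again assumes Python 3.8 or later and its standard-library NormalDist, with the SAT and height parameters used throughout this section.

```python
from statistics import NormalDist

std = NormalDist()                        # standard normal, N(0, 1)
sat = NormalDist(mu=1500, sigma=300)
heights = NormalDist(mu=70.0, sigma=3.3)

# Percentile -> observation: find Z for the 40th percentile, then unwind
# x = mu + Z * sigma (Erik's height works out to roughly 69.2 inches).
z40 = std.inv_cdf(0.40)
print(round(z40, 2), round(70.0 + z40 * 3.3, 1))

# Or work directly on the scaled distribution (compare with 1995 and 76.5 above;
# the table-based answers differ slightly because Z was rounded to two decimals).
print(round(sat.inv_cdf(0.95)), round(heights.inv_cdf(0.975), 1))

# 68-95-99.7 rule: area within 1, 2, and 3 standard deviations of the mean.
for k in (1, 2, 3):
    print(k, round(std.cdf(k) - std.cdf(-k), 4))   # about 0.6827, 0.9545, 0.9973
```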
Many processes can be well approximated by the normal distribution. We have already seen two good examples: SAT scores and the heights of US adult males. While using a normal model can be extremely convenient and helpful, it is important to remember normality is always an approximation. Testing the appropriateness of the normal assumption is a key step in many data analyses.

Example 3.15 suggests the distribution of heights of US males is well approximated by the normal model. We are interested in proceeding under the assumption that the data are normally distributed, but first we must check to see if this is reasonable.

There are two visual methods for checking the assumption of normality, which can be implemented and interpreted quickly. The first is a simple histogram with the best fitting normal curve overlaid on the plot, as shown in the left panel of Figure 3.10. The sample mean $\bar {x}$ and standard deviation s are used as the parameters of the best fitting normal curve. The closer this curve fits the histogram, the more reasonable the normal model assumption. Another more common method is examining a normal probability plot,19 shown in the right panel of Figure 3.10. The closer the points are to a perfect straight line, the more confident we can be that the data follow the normal model. We outline the construction of the normal probability plot in Section 3.2.2.

Example 3.24
Three data sets of 40, 100, and 400 samples were simulated from a normal distribution, and the histograms and normal probability plots of the data sets are shown in Figure 3.11. These will provide a benchmark for what to look for in plots of real data. The left panels show the histogram (top) and normal probability plot (bottom) for the simulated data set with 40 observations. The data set is too small to really see clear structure in the histogram. The normal probability plot also reflects this, where there are some deviations from the line. However, these deviations are not strong. The middle panels show diagnostic plots for the data set with 100 simulated observations. The histogram shows more normality and the normal probability plot shows a better fit. While there is one observation that deviates noticeably from the line, it is not particularly extreme.

19Also commonly called a quantile-quantile plot.

The data set with 400 observations has a histogram that greatly resembles the normal distribution, while the normal probability plot is nearly a perfect straight line. Again in the normal probability plot there is one observation (the largest) that deviates slightly from the line. If that observation had deviated 3 times further from the line, it would be of much greater concern in a real data set. Apparent outliers can occur in normally distributed data but they are rare. Notice the histograms look more normal as the sample size increases, and the normal probability plot becomes straighter and more stable.

Example 3.25
Are NBA player heights normally distributed? Consider all 435 NBA players from the 2008-9 season presented in Figure 3.12.20 We first create a histogram and normal probability plot of the NBA player heights. The histogram in the left panel is slightly left skewed, which contrasts with the symmetric normal distribution. The points in the normal probability plot do not appear to closely follow a straight line but show what appears to be a "wave".
We can compare these characteristics to the sample of 400 normally distributed observations in Example 3.24 and see that they represent much stronger deviations from the normal model. NBA player heights do not appear to come from a normal distribution.

20These data were collected from http://www.nba.com.

Example 3.26
Can we approximate poker winnings by a normal distribution? We consider the poker winnings of an individual over 50 days. A histogram and normal probability plot of these data are shown in Figure 3.13. The data are very strongly right skewed in the histogram, which corresponds to the very strong deviations on the upper right component of the normal probability plot. If we compare these results to the sample of 40 normal observations in Example 3.24, it is apparent that these data show very strong deviations from the normal model.

Exercise 3.27
Determine which data sets represented in Figure 3.14 plausibly come from a nearly normal distribution. Are you confident in all of your conclusions? There are 100 (top left), 50 (top right), 500 (bottom left), and 15 points (bottom right) in the four plots.21

Exercise 3.28
Figure 3.15 shows normal probability plots for two distributions that are skewed. One distribution is skewed to the low end (left skewed) and the other to the high end (right skewed). Which is which?22

21Answers may vary a little. The top-left plot shows some deviations in the smallest values in the data set; specifically, the left tail of the data set has some outliers we should be wary of. The top-right and bottom-left plots do not show any obvious or extreme deviations from the lines for their respective sample sizes, so a normal model would be reasonable for these data sets. The bottom-right plot has a consistent curvature that suggests it is not from the normal distribution. If we examine just the vertical coordinates of these observations, we see that there is a lot of data between -20 and 0, and then about five observations scattered between 0 and 70. This describes a distribution that has a strong right skew.
22Examine where the points fall along the vertical axis. In the first plot, most points are near the low end with fewer observations scattered along the high end; this describes a distribution that is skewed to the high end. The second plot shows the opposite features, and this distribution is skewed to the low end.

Constructing a normal probability plot (special topic)

We construct a normal probability plot for the heights of a sample of 100 men as follows:
1. Order the observations.
2. Determine the percentile of each observation in the ordered data set.
3. Identify the Z score corresponding to each percentile.
4. Create a scatterplot of the observations (vertical) against the Z scores (horizontal).
If the observations are normally distributed, then their Z scores will approximately correspond to their percentiles and thus to the $z_i$ in Table 3.16.

Table 3.16: Construction details for a normal probability plot of 100 men's heights. The first observation is assumed to be at the 0.99th percentile, and the $z_i$ corresponding to a lower tail of 0.0099 is -2.33. To create the plot based on this table, plot each pair of points, ($z_i$, $x_i$).

Observation i   1       2       3       $\dots$   100
$x_i$           61      63      63      $\dots$   78
Percentile      0.99%   1.98%   2.97%   $\dots$   99.01%
$z_i$           -2.33   -2.06   -1.89   $\dots$   2.33

Caution: $z_i$ correspond to percentiles
The $z_i$ in Table 3.16 are not the Z scores of the observations but only correspond to the percentiles of the observations.
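As a concrete illustration of the four steps and the percentile convention in Table 3.16, the following sketch computes the ($z_i$, $x_i$) pairs for a small made-up sample. The height values here are hypothetical, and the code assumes Python 3.8 or later with the standard library's NormalDist; plotting the printed pairs (Z scores on the horizontal axis, observations on the vertical) gives the normal probability plot.

```python
from statistics import NormalDist

# Hypothetical sample of heights in inches (any numeric data could be used).
data = [61, 63, 63, 66, 68, 70, 70, 71, 72, 78]

std = NormalDist()
n = len(data)
xs = sorted(data)                      # step 1: order the observations
for i, x in enumerate(xs, start=1):
    pct = i / (n + 1)                  # step 2: percentile, using the i/(n+1) convention of Table 3.16
    z = std.inv_cdf(pct)               # step 3: Z score corresponding to that percentile
    print(round(z, 2), x)              # step 4: these (z_i, x_i) pairs form the scatterplot
```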
Because of the complexity of these calculations, normal probability plots are generally created using statistical software. Contributors and Attributions • David M Diez (Google/YouTube), Christopher D Barr (Harvard School of Public Health), Mine Çetinkaya-Rundel (Duke University)
How long should we expect to flip a coin until it turns up heads? Or how many times should we expect to roll a die until we get a 1? These questions can be answered using the geometric distribution. We first formalize each trial - such as a single coin flip or die toss - using the Bernoulli distribution, and then we combine these with our tools from probability (Chapter 2) to construct the geometric distribution.

Bernoulli Distribution

Stanley Milgram began a series of experiments in 1963 to estimate what proportion of people would willingly obey an authority and give severe shocks to a stranger. Milgram found that about 65% of people would obey the authority and give such shocks. Over the years, additional research suggested this number is approximately consistent across communities and time. (Find further information on Milgram's experiment at www.cnr.berkeley.edu/ucce50/ag-labor/7article/article35.htm.)

Each person in Milgram's experiment can be thought of as a trial. We label a person a success if she refuses to administer the worst shock. A person is labeled a failure if she administers the worst shock. Because only 35% of individuals refused to administer the most severe shock, we denote the probability of a success with p = 0.35. The probability of a failure is sometimes denoted with q = 1 - p. Thus, success or failure is recorded for each person in the study. When an individual trial only has two possible outcomes, it is called a Bernoulli random variable.

Bernoulli random variable (descriptive)
A Bernoulli random variable has exactly two possible outcomes. We typically label one of these outcomes a "success" and the other outcome a "failure". We may also denote a success by 1 and a failure by 0.

TIP: "success" need not be something positive
We chose to label a person who refuses to administer the worst shock a "success" and all others as "failures". However, we could just as easily have reversed these labels. The mathematical framework we will build does not depend on which outcome is labeled a success and which a failure, as long as we are consistent.

Bernoulli random variables are often denoted as 1 for a success and 0 for a failure. In addition to being convenient in entering data, it is also mathematically handy. Suppose we observe ten trials:
$0\ 1\ 1\ 1\ 1\ 0\ 1\ 1\ 0\ 0$
Then the sample proportion, $\hat {p}$, is the sample mean of these observations:
$\hat {p} = \dfrac {\text {# of successes}}{\text {# of trials}} = \dfrac {0 + 1 + 1 + 1 + 1 + 0 + 1 + 1 + 0 + 0}{10} = 0.6$

This mathematical inquiry of Bernoulli random variables can be extended even further. Because 0 and 1 are numerical outcomes, we can define the mean and standard deviation of a Bernoulli random variable. If p is the true probability of a success, then the mean of a Bernoulli random variable X is given by
$\mu = E[X] = P(X = 0) \times 0 + P(X = 1) \times 1 = (1 - p) \times 0 + p \times 1 = 0 + p = p$
Similarly, the variance of $X$ can be computed:
$\sigma^2 = P(X = 0)(0 - p)^2 + P(X = 1)(1 - p)^2 = (1 - p)p^2 + p(1- p)^2 = p(1- p)$
The standard deviation is $\sigma = \sqrt {p(1 - p)}$

Bernoulli random variable (mathematical)
If X is a random variable that takes value 1 with probability of success p and 0 with probability 1 - p, then X is a Bernoulli random variable with mean $\mu = p$ and standard deviation $\sigma = \sqrt {p(1-p)}$.

In general, it is useful to think about a Bernoulli random variable as a random process with only two outcomes: a success or failure.
Then we build our mathematical framework using the numerical labels 1 and 0 for successes and failures, respectively.

Geometric Distribution

Example $1$ illustrates what is called the geometric distribution, which describes the waiting time until a success for independent and identically distributed (iid) Bernoulli random variables. In this case, the independence aspect just means the individuals in the example don't affect each other, and identical means they each have the same probability of success.

Example $1$
Dr. Smith wants to repeat Milgram's experiments, but she only wants to sample people until she finds someone who will not inflict the worst shock. (This is hypothetical since, in reality, this sort of study probably would not be permitted any longer under current ethical standards.) If the probability a person will not give the most severe shock is still 0.35 and the subjects are independent, what are the chances that she will stop the study after the first person? The second person? The third? What about if it takes her n - 1 individuals who will administer the worst shock before finding her first success, i.e. the first success is on the nth person? (If the first success is the fifth person, then we say n = 5.)
Solution
The probability of stopping after the first person is just the chance the first person will not administer the worst shock: 1 - 0.65 = 0.35. The probability it will be the second person is
$P(\text{second person is the first to not administer the worst shock}) = P(\text{the first will, the second won't}) = (0.65)(0.35) = 0.228$
Likewise, the probability it will be the third person is (0.65)(0.65)(0.35) = 0.148. If the first success is on the nth person, then there are $n - 1$ failures and finally 1 success, which corresponds to the probability $(0.65)^{n-1}(0.35)$. This is the same as $(1 - 0.35)^{n-1}(0.35)$.

The geometric distribution from Example $1$ is shown in Figure 3.16. In general, the probabilities for a geometric distribution decrease exponentially fast. While this text will not derive the formulas for the mean (expected) number of trials needed to find the first success or the standard deviation or variance of this distribution, we present general formulas for each.

Geometric Distribution
If the probability of a success in one trial is $p$ and the probability of a failure is $1 - p$, then the probability of finding the first success in the nth trial is given by
$(1 - p)^{n-1}p \label{3.30}$
The mean (i.e. expected value), variance, and standard deviation of this wait time are given by
$\mu = \dfrac {1}{p} \quad \sigma^2 = \dfrac {1 - p}{p^2} \quad \sigma = \sqrt {\dfrac {1 - p}{p^2}} \label {3.31}$

It is no accident that we use the symbol $\mu$ for both the mean and expected value. The mean and the expected value are one and the same. The first expression in Equation \ref{3.31} says that, on average, it takes $\dfrac {1}{p}$ trials to get a success. This mathematical result is consistent with what we would expect intuitively. If the probability of a success is high (e.g. 0.8), then we don't usually wait very long for a success: $\dfrac {1}{0.8} = 1.25$ trials on average. If the probability of a success is low (e.g. 0.1), then we would expect to view many trials before we see a success: $\dfrac {1}{0.1} = 10$ trials.

Exercise $1$
The probability that an individual would refuse to administer the worst shock is said to be about 0.35. If we were to examine individuals until we found one that did not administer the shock, how many people should we expect to check?
Exercise $1$
The probability that an individual would refuse to administer the worst shock is said to be about 0.35. If we were to examine individuals until we found one that did not administer the shock, how many people should we expect to check? The first expression in Equation \ref{3.31} may be useful.

Answer
We would expect to check about 1/0.35 = 2.86 individuals to find the first success.

Example $2$
What is the chance that Dr. Smith will find the first success within the first 4 people?

This is the chance it is the first (n = 1), second (n = 2), third (n = 3), or fourth (n = 4) person as the first success, which are four disjoint outcomes. Because the individuals in the sample are randomly sampled from a large population, they are independent. We compute the probability of each case and add the separate results:

$P(n = 1, 2, 3, \text{ or } 4)$
$= P(n = 1) + P(n = 2) + P(n = 3) + P(n = 4)$
$= (0.65)^{1-1}(0.35) + (0.65)^{2-1}(0.35) + (0.65)^{3-1}(0.35) + (0.65)^{4-1}(0.35)$
$= 0.82$

There is an 82% chance that she will end the study within 4 people.

Exercise $2$
Determine a more clever way to solve Example $2$. Show that you get the same result.

Answer
First find the probability of the complement: P(no success in first 4 trials) = $0.65^4$ = 0.18. Next, compute one minus this probability: 1 - P(no success in 4 trials) = 1 - 0.18 = 0.82.

Example $3$
Suppose in one region it was found that the proportion of people who would administer the worst shock was "only" 55%. If people were randomly selected from this region, what is the expected number of people who must be checked before one was found that would be deemed a success? What is the standard deviation of this waiting time?

Solution
A success is when someone will not inflict the worst shock, which has probability p = 1 - 0.55 = 0.45 for this region. The expected number of people to be checked is $\dfrac {1}{p} = \dfrac {1}{0.45} = 2.22$ and the standard deviation is $\sqrt {\dfrac {(1 - p)}{p^2}} = 1.65$.

Exercise $3$
Using the results from Example $3$, $\mu$ = 2.22 and $\sigma$ = 1.65, would it be appropriate to use the normal model to find what proportion of experiments would end in 3 or fewer trials?

Answer
No. The geometric distribution is always right skewed and can never be well-approximated by the normal model.

The independence assumption is crucial to the geometric distribution's accurate description of a scenario. Mathematically, we can see that to construct the probability of the success on the nth trial, we had to use the Multiplication Rule for Independent Processes. It is no simple task to generalize the geometric model for dependent trials.

Contributors and Attributions
• David M Diez (Google/YouTube), Christopher D Barr (Harvard School of Public Health), Mine Çetinkaya-Rundel (Duke University)
Example $1$: Shock Study
Suppose we randomly selected four individuals to participate in the "shock" study. What is the chance exactly one of them will be a success? Let's call the four people Allen (A), Brittany (B), Caroline (C), and Damian (D) for convenience. Also, suppose 35% of people are successes as in the previous version of this example.

Solution
Let's consider a scenario where one person refuses:

\begin{align*} P(A = \text{refuse}; B = \text{shock}; C = \text{shock}; D = \text{shock}) &= P(A = \text{refuse}) P(B = \text{shock}) P(C = \text{shock}) P(D = \text{shock}) \\[5pt] &= (0.35)(0.65)(0.65)(0.65) \\[5pt] &= (0.35)^1(0.65)^3 \\[5pt] &= 0.096 \end{align*}

But there are three other scenarios: Brittany, Caroline, or Damian could have been the one to refuse. In each of these cases, the probability is again $P=(0.35)^1(0.65)^3. \nonumber$ These four scenarios exhaust all the possible ways that exactly one of these four people could refuse to administer the most severe shock, so the total probability is $4 \times (0.35)^1(0.65)^3 = 0.38. \nonumber$

Exercise $1$
Verify that the scenario where Brittany is the only one to refuse to give the most severe shock has probability $(0.35)^1(0.65)^3.$

Answer
\begin{align*} P(A = \text{shock}; B = \text{refuse}; C = \text{shock}; D = \text{shock}) &= (0.65)(0.35)(0.65)(0.65) \\[5pt] &= (0.35)^1(0.65)^3.\end{align*}

The Binomial Distribution

The scenario outlined in Example $1$ is a special case of what is called the binomial distribution. The binomial distribution describes the probability of having exactly k successes in n independent Bernoulli trials with probability of a success p (in Example $1$, n = 4, k = 1, p = 0.35). We would like to determine the probabilities associated with the binomial distribution more generally, i.e. we want a formula where we can use n, k, and p to obtain the probability. To do this, we reexamine each part of the example.

There were four individuals who could have been the one to refuse, and each of these four scenarios had the same probability. Thus, we could identify the final probability as

$\text {[# of scenarios]} \times \text {P(single scenario)} \label {3.39}$

The first component of this equation is the number of ways to arrange the k = 1 successes among the n = 4 trials. The second component is the probability of any of the four (equally probable) scenarios.

Consider P(single scenario) under the general case of k successes and n - k failures in the n trials. In any such scenario, we apply the Multiplication Rule for independent events:

$p^k (1- p)^{n-k}$

This is our general formula for P(single scenario).

Secondly, we introduce a general formula for the number of ways to choose k successes in n trials, i.e. arrange k successes and n - k failures:

$\binom {n}{k} = \dfrac {n!}{k!(n - k)!} \label{3.4.Y}$

The quantity $\binom {n}{k}$ is read as n choose k. The exclamation point notation (e.g. k!) denotes a factorial expression.

\begin{align} 0! &= 1 \nonumber \\[5pt] 1! &= 1 \nonumber \\[5pt] 2! &= 2 \times 1 = 2 \nonumber \\[5pt] 3! &= 3 \times 2 \times 1 = 6 \nonumber \\[5pt] 4! &= 4 \times 3 \times 2 \times 1 = 24 \nonumber \\[5pt] & \vdots \nonumber \\[5pt] n! &= n \times (n - 1) \times \dots \times 3 \times 2 \times 1 \label{eq3.4.X} \end{align}

Substituting Equation \ref{eq3.4.X} into Equation \ref{3.4.Y}, we can compute the number of ways to choose $k = 1$ successes in $n = 4$ trials:

\begin{align} \binom {4}{1} &= \dfrac {4!}{1! (4 - 1)!} \\[5pt] &= \dfrac {4!}{1! 3!} \\[5pt] &= \dfrac {4 \times 3 \times 2 \times 1}{(1)(3 \times 2 \times 1)} \\[5pt] &= 4 \end{align}
This result is exactly what we found by carefully thinking of each possible scenario in Example $1$.

Other notations
Other notation for n choose k includes $nC_k$, $C^k_n$, and $C(n, k)$.

Substituting $n$ choose $k$ for the number of scenarios and $p^k(1 - p)^{n-k}$ for the single scenario probability in Equation \ref{3.39} yields the general binomial formula (Equation \ref{3.40}).

Definition: Binomial distribution
Suppose the probability of a single trial being a success is p. Then the probability of observing exactly k successes in n independent trials is given by

$\binom {n}{k} p^k (1 - p)^{n - k} = \dfrac {n!}{k!(n - k)!} p^k (1 - p)^{n - k} \label {3.40}$

Additionally, the mean, variance, and standard deviation of the number of observed successes are

$\mu = np \qquad \sigma^2 = np(1 - p) \qquad \sigma = \sqrt {np(1- p)} \label{3.41}$

TIP: Four conditions to check if it is binomial
1. The trials are independent.
2. The number of trials, $n$, is fixed.
3. Each trial outcome can be classified as a success or failure.
4. The probability of a success, $p$, is the same for each trial.

Example $2$
What is the probability that 3 of 8 randomly selected students will refuse to administer the worst shock, i.e. 5 of 8 will?

Solution
We would like to apply the binomial model, so we check our conditions. The number of trials is fixed (n = 8) (condition 2) and each trial outcome can be classified as a success or failure (condition 3). Because the sample is random, the trials are independent (condition 1) and the probability of a success is the same for each trial (condition 4).

In the outcome of interest, there are k = 3 successes in n = 8 trials, and the probability of a success is p = 0.35. So the probability that 3 of 8 will refuse is given by

\begin{align*} \binom {8}{3} {(0.35)}^3 (1 - 0.35)^{8 - 3} &= \dfrac {8!}{3!(8 - 3)!} {(0.35)}^3 (1 - 0.35)^{8 - 3} \\[5pt] &= \dfrac {8!}{3! 5!} {(0.35)}^3 {(0.65)}^5 \end{align*}

Dealing with the factorial part:

\begin{align*} \dfrac {8!}{3!5!} &= \dfrac {8 \times 7 \times 6 \times 5 \times 4 \times 3 \times 2 \times 1}{(3 \times 2 \times 1)( 5 \times 4 \times 3 \times 2 \times 1 )} \\[5pt] &= \dfrac {8 \times 7 \times 6 }{ 3 \times 2 \times 1 } \\[5pt] & = 56 \end{align*}

Using $(0.35)^3(0.65)^5 \approx 0.005$, the final probability is about $56 \times 0.005 = 0.28$.

TIP: computing binomial probabilities
The first step in using the binomial model is to check that the model is appropriate. The second step is to identify n, p, and k. The final step is to apply the formulas and interpret the results.

TIP: computing n choose k
In general, it is useful to do some cancelation in the factorials immediately. Alternatively, many computer programs and calculators have built in functions to compute n choose k, factorials, and even entire binomial probabilities.
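For anyone who wants to verify Example 2 numerically, here is a minimal Python sketch (the function name and values are illustrative assumptions) that computes the same binomial probability using the built-in combination function.

```python
from math import comb

def binomial_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Example 2: 3 of 8 students refuse, with p = 0.35
print(comb(8, 3))                          # 56 ways to arrange 3 successes among 8 trials
print(round(binomial_pmf(3, 8, 0.35), 3))  # roughly 0.279
```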
Exercise $\PageIndex{2A}$
If you ran a study and randomly sampled 40 students, how many would you expect to refuse to administer the worst shock? What is the standard deviation of the number of people who would refuse? Equation \ref{3.41} may be useful.

Answer
We are asked to determine the expected number (the mean) and the standard deviation, both of which can be directly computed from the formulas in Equation \ref{3.41}: $\mu = np = 40 \times 0.35 = 14 \nonumber$ and $\sigma = \sqrt{np(1 - p)} = \sqrt { 40 \times 0.35 \times 0.65} = 3.02. \nonumber$ Because very roughly 95% of observations fall within 2 standard deviations of the mean (see Section 1.6.4), we would probably observe at least 8 but less than 20 individuals in our sample who would refuse to administer the shock.

Exercise $\PageIndex{2B}$
The probability that a random smoker will develop a severe lung condition in his or her lifetime is about 0.3. If you have 4 friends who smoke, are the conditions for the binomial model satisfied?

Answer
One possible answer: if the friends know each other, then the independence assumption is probably not satisfied. For example, acquaintances may have similar smoking habits.

Example $3$
Suppose these four friends do not know each other and we can treat them as if they were a random sample from the population. Is the binomial model appropriate? What is the probability that
1. none of them will develop a severe lung condition?
2. one will develop a severe lung condition?
3. no more than one will develop a severe lung condition?

Solution
To check if the binomial model is appropriate, we must verify the conditions. (i) Since we are supposing we can treat the friends as a random sample, they are independent. (ii) We have a fixed number of trials (n = 4). (iii) Each outcome is a success or failure. (iv) The probability of a success is the same for each trial since the individuals are like a random sample (p = 0.3 if we say a "success" is someone getting a lung condition, a morbid choice).

Compute parts (a) and (b) from the binomial formula in Equation \ref{3.40}:

$P(0) = \binom {4}{0}(0.3)^0(0.7)^4 = 1 \times 1 \times 0.7^4 = 0.2401 \nonumber$
$P(1) = \binom {4}{1}(0.3)^1(0.7)^3 = 0.4116. \nonumber$

Note: 0! = 1. Part (c) can be computed as the sum of parts (a) and (b): $P(0)+P(1) = 0.2401+0.4116 = 0.6517.$ That is, there is about a 65% chance that no more than one of your four smoking friends will develop a severe lung condition.

Exercise $\PageIndex{3A}$
What is the probability that at least 2 of your 4 smoking friends will develop a severe lung condition in their lifetimes?

Answer
The complement (no more than one will develop a severe lung condition) was computed in Example $3$ as 0.6517, so we compute one minus this value: 0.3483.

Exercise $\PageIndex{3B}$
Suppose you have 7 friends who are smokers and they can be treated as a random sample of smokers.
1. How many would you expect to develop a severe lung condition, i.e. what is the mean?
2. What is the probability that at most 2 of your 7 friends will develop a severe lung condition?

Answer a
$\mu = 0.3 \times 7 = 2.1$.

Answer b
P(0, 1, or 2 develop severe lung condition) = P(k = 0) + P(k = 1) + P(k = 2) = 0.6471.

Below we consider the first term in the binomial probability, n choose k, under some special scenarios.

Exercise $\PageIndex{3C}$
Why is it true that $\binom {n}{0} = 1$ and $\binom {n}{n} = 1$ for any number n?

Solution
Frame these expressions into words. How many different ways are there to arrange 0 successes and n failures in n trials? (1 way.) How many different ways are there to arrange n successes and 0 failures in n trials? (1 way.)

Exercise $\PageIndex{3D}$
How many ways can you arrange one success and n - 1 failures in n trials? How many ways can you arrange n - 1 successes and one failure in n trials?

Solution
One success and n - 1 failures: there are exactly n unique places we can put the success, so there are n ways to arrange one success and n - 1 failures. A similar argument is used for the second question.

Mathematically, we show these results by verifying the following two equations:

$\binom {n}{1} = n, \qquad \binom {n}{n - 1} = n$
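Cumulative questions such as Exercise 3B ("at most 2 of 7") are just sums of binomial terms, which is easy to script. Below is a small Python sketch (illustrative assumptions only; not part of the original text) for that sum.

```python
from math import comb

def binomial_cdf(k_max, n, p):
    """P(at most k_max successes in n trials), summing the binomial pmf."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_max + 1))

# Exercise 3B(b): at most 2 of 7 smoking friends develop a severe lung condition
print(round(binomial_cdf(2, 7, 0.3), 4))   # roughly 0.6471

# Example 3(c): no more than 1 of 4 friends
print(round(binomial_cdf(1, 4, 0.3), 4))   # roughly 0.6517
```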
Normal Approximation to the Binomial Distribution

The binomial formula is cumbersome when the sample size (n) is large, particularly when we consider a range of observations. In some cases we may use the normal distribution as an easier and faster way to estimate binomial probabilities.

Example $4$
Approximately 20% of the US population smokes cigarettes. A local government believed their community had a lower smoker rate and commissioned a survey of 400 randomly selected individuals. The survey found that only 59 of the 400 participants smoke cigarettes. If the true proportion of smokers in the community was really 20%, what is the probability of observing 59 or fewer smokers in a sample of 400 people?

Solution
We leave the usual verification that the four conditions for the binomial model are valid as an exercise. The question posed is equivalent to asking, what is the probability of observing $k = 0, 1, \dots, 58, \text{ or } 59$ smokers in a sample of n = 400 when p = 0.20? We can compute these 60 different probabilities and add them together to find the answer:

$P(k = 0 \text{ or } k = 1 \text{ or } \dots \text{ or } k = 59)$
$= P(k = 0) + P(k = 1) + \dots + P(k = 59)$
$= 0.0041$

If the true proportion of smokers in the community is p = 0.20, then the probability of observing 59 or fewer smokers in a sample of n = 400 is 0.0041.

The computations in Example $4$ are tedious and long. In general, we should avoid such work if an alternative method exists that is faster, easier, and still accurate. Recall that calculating probabilities of a range of values is much easier in the normal model. We might wonder, is it reasonable to use the normal model in place of the binomial distribution? Surprisingly, yes, if certain conditions are met.

Exercise $4$
Here we consider the binomial model when the probability of a success is p = 0.10. Figure 3.17 shows four hollow histograms for simulated samples from the binomial distribution using four different sample sizes: n = 10, 30, 100, 300. What happens to the shape of the distributions as the sample size increases? What distribution does the last hollow histogram resemble?

Solution
The distribution is transformed from a blocky and skewed distribution into one that rather resembles the normal distribution in the last hollow histogram.

Normal approximation of the binomial distribution
The binomial distribution with probability of success p is nearly normal when the sample size n is sufficiently large that np and n(1 - p) are both at least 10. The approximate normal distribution has parameters corresponding to the mean and standard deviation of the binomial distribution:

$\mu = np \qquad \sigma = \sqrt {np(1- p)}$

The normal approximation may be used when computing the range of many possible successes. For instance, we may apply the normal distribution to the setting of Example $4$.

Example $5$
How can we use the normal approximation to estimate the probability of observing 59 or fewer smokers in a sample of 400, if the true proportion of smokers is p = 0.20?

Solution
Showing that the binomial model is reasonable was a suggested exercise in Example $4$.
We also verify that both np and n(1 - p) are at least 10:

$np = 400 \times 0.20 = 80 \qquad n(1 - p) = 400 \times 0.80 = 320$

With these conditions checked, we may use the normal approximation in place of the binomial distribution using the mean and standard deviation from the binomial model:

$\mu = np = 80 \qquad \sigma = \sqrt {np(1 - p)} = 8$

We want to find the probability of observing 59 or fewer smokers using this model.

Exercise $5$
Use the normal model N($\mu = 80, \sigma = 8$) to estimate the probability of observing 59 or fewer smokers. Your answer should be approximately equal to the solution of Example $4$: 0.0041.

Answer
Compute the Z score first: Z = $\dfrac {59-80}{8} = -2.63$. The corresponding left tail area is 0.0043.

Caution: The normal approximation may fail on small intervals
The normal approximation to the binomial distribution tends to perform poorly when estimating the probability of a small range of counts, even when the conditions are met.

The normal approximation breaks down on small intervals
Suppose we wanted to compute the probability of observing 69, 70, or 71 smokers in 400 when p = 0.20. With such a large sample, we might be tempted to apply the normal approximation and use the range 69 to 71. However, we would find that the binomial solution and the normal approximation notably differ:

$\text {Binomial: } 0.0703 \qquad \text {Normal: } 0.0476$

We can identify the cause of this discrepancy using Figure 3.18, which shows the areas representing the binomial probability (outlined) and normal approximation (shaded). Notice that the width of the area under the normal distribution is 0.5 units too slim on both sides of the interval.

TIP: Improving the accuracy of the normal approximation to the binomial distribution
The normal approximation to the binomial distribution for intervals of values is usually improved if cutoff values are modified slightly. The cutoff value for the lower end of a shaded region should be reduced by 0.5, and the cutoff value for the upper end should be increased by 0.5.

The tip to add extra area when applying the normal approximation is most often useful when examining a range of observations. While it is possible to apply it when computing a tail area, the benefit of the modification usually disappears since the total interval is typically quite wide.
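To make the continuity-correction tip concrete, here is a short Python sketch (standard-library only; the numbers simply restate the smoker example and the helper names are assumptions) comparing the exact binomial probability of 69 to 71 smokers with the naive and corrected normal approximations.

```python
from math import comb, sqrt, erf

def binom_pmf(k, n, p):
    """Exact binomial probability of k successes in n trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def normal_cdf(x, mu, sigma):
    """Normal cumulative probability, evaluated via the error function."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

n, p = 400, 0.20
mu, sigma = n * p, sqrt(n * p * (1 - p))   # 80 and 8

# Exact binomial probability of observing 69, 70, or 71 smokers
exact = sum(binom_pmf(k, n, p) for k in (69, 70, 71))

# Naive normal approximation over [69, 71], and with the 0.5 continuity correction
naive = normal_cdf(71, mu, sigma) - normal_cdf(69, mu, sigma)
corrected = normal_cdf(71.5, mu, sigma) - normal_cdf(68.5, mu, sigma)

# The exact and corrected values should be close to each other (around 0.07),
# while the naive value is noticeably smaller, as the text describes.
print(round(exact, 4), round(naive, 4), round(corrected, 4))
```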
Contributors and Attributions
• David M Diez (Google/YouTube), Christopher D Barr (Harvard School of Public Health), Mine Çetinkaya-Rundel (Duke University)

Negative Binomial Distribution

The geometric distribution describes the probability of observing the first success on the $n^{th}$ trial. The negative binomial distribution is more general: it describes the probability of observing the $k^{th}$ success on the $n^{th}$ trial.

Example $1$
Each day a high school football coach tells his star kicker, Brian, that he can go home after he successfully kicks four 35 yard field goals. Suppose we say each kick has a probability $p$ of being successful. If p is small - e.g. close to 0.1 - would we expect Brian to need many attempts before he successfully kicks his fourth field goal?

Solution
We are waiting for the fourth success (k = 4). If the probability of a success (p) is small, then the number of attempts (n) will probably be large. This means that Brian is more likely to need many attempts before he gets k = 4 successes. To put this another way, the probability of n being small is low.

To identify a negative binomial case, we check 4 conditions. The first three are common to the binomial distribution (see a similar guide for the binomial distribution on page 138).

Is it negative binomial? Four conditions to check.
1. The trials are independent.
2. Each trial outcome can be classified as a success or failure.
3. The probability of a success (p) is the same for each trial.
4. The last trial must be a success.

Exercise $1$
Suppose Brian is very diligent in his attempts and he makes each 35 yard field goal with probability $p = 0.8$. Take a guess at how many attempts he would need before making his fourth kick.

Answer
One possible answer: since he is likely to make each field goal attempt, it will take him at least 4 attempts but probably not more than 6 or 7.

Example $2$
In yesterday's practice, it took Brian only 6 tries to get his fourth field goal. Write out each of the possible sequences of kicks.

Solution
Because it took Brian six tries to get the fourth success, we know the last kick must have been a success. That leaves three successful kicks and two unsuccessful kicks (we label these as failures) that make up the first five attempts. There are ten possible sequences of these first five kicks, which are shown in Table $1$. If Brian achieved his fourth success (k = 4) on his sixth attempt (n = 6), then his order of successes and failures must be one of these ten possible sequences.

Exercise $2$
Each sequence in Table $1$ has exactly two failures and four successes with the last attempt always being a success. If the probability of a success is p = 0.8, find the probability of the first sequence.

Answer
The first sequence: $0.2 \times 0.2 \times 0.8 \times 0.8 \times 0.8 \times 0.8 = 0.0164$.

Table $1$: The ten possible sequences when the fourth successful kick is on the sixth attempt.
Sequence: kick attempts 1 through 6
1: F, F, $\overset {1}{S}$, $\overset {2}{S}$, $\overset {3}{S}$, $\overset {4}{S}$
2: F, $\overset {1}{S}$, F, $\overset {2}{S}$, $\overset {3}{S}$, $\overset {4}{S}$
3: F, $\overset {1}{S}$, $\overset {2}{S}$, F, $\overset {3}{S}$, $\overset {4}{S}$
4: F, $\overset {1}{S}$, $\overset {2}{S}$, $\overset {3}{S}$, F, $\overset {4}{S}$
5: $\overset {1}{S}$, F, F, $\overset {2}{S}$, $\overset {3}{S}$, $\overset {4}{S}$
6: $\overset {1}{S}$, F, $\overset {2}{S}$, F, $\overset {3}{S}$, $\overset {4}{S}$
7: $\overset {1}{S}$, F, $\overset {2}{S}$, $\overset {3}{S}$, F, $\overset {4}{S}$
8: $\overset {1}{S}$, $\overset {2}{S}$, F, F, $\overset {3}{S}$, $\overset {4}{S}$
9: $\overset {1}{S}$, $\overset {2}{S}$, F, $\overset {3}{S}$, F, $\overset {4}{S}$
10: $\overset {1}{S}$, $\overset {2}{S}$, $\overset {3}{S}$, F, F, $\overset {4}{S}$

If the probability Brian kicks a 35 yard field goal is p = 0.8, what is the probability it takes Brian exactly six tries to get his fourth successful kick? We can write this as

$P \text {(it takes Brian six tries to make four field goals)}$
$= P \text {(Brian makes three of his first five field goals, and he makes the sixth one)}$
$= P \text {(1st sequence OR 2nd sequence OR} \dots \text{OR 10th sequence)}$

where the sequences are from Table $1$. We can break down this last probability into the sum of ten disjoint possibilities:

$P \text {(1st sequence OR 2nd sequence OR } \dots \text{ OR 10th sequence)}$
$= P(\text {1st sequence}) + P(\text {2nd sequence}) + \dots + P(\text {10th sequence})$

The probability of the first sequence was identified in Exercise $2$ as 0.0164, and each of the other sequences has the same probability. Since each of the ten sequences has the same probability, the total probability is ten times that of any individual sequence.

The way to compute this negative binomial probability is similar to how the binomial problems were solved in Section 3.4. The probability is broken into two pieces:

$P\text {(it takes Brian six tries to make four field goals)}$
$= \text {[Number of possible sequences]} \times P \text {(Single sequence)}$

Each part is examined separately, then we multiply to get the final result. We first identify the probability of a single sequence. One particular case is to first observe all the failures (n - k of them) followed by the k successes:

\begin{align} P \text {(Single sequence)} &= P \text {(n - k failures and then k successes)} \\[5pt] &= (1- p)^{n-k}p^k \end{align}

We must also identify the number of sequences for the general case. Above, ten sequences were identified where the fourth success came on the sixth attempt. These sequences were identified by fixing the last observation as a success and looking for all the ways to arrange the other observations. In other words, how many ways could we arrange $k - 1$ successes in $n - 1$ trials? This can be found using the n choose k coefficient but for n - 1 and k - 1 instead:

$\binom {n -1}{k - 1} = \frac {(n -1)!}{(k - 1)! ((n -1) - (k -1))!} = \frac {(n - 1)!}{(k - 1)! (n - k)!}$

This is the number of different ways we can order k - 1 successes and n - k failures in n - 1 trials. If the factorial notation (the exclamation point) is unfamiliar, see page 138.

Negative binomial distribution
The negative binomial distribution describes the probability of observing the $k^{th}$ success on the $n^{th}$ trial:

$P(\text{the } k^{th} \text{ success on the } n^{th} \text{ trial}) = \binom {n - 1}{k - 1} p^k {(1 - p)}^{n - k} \label {3.58}$

where p is the probability an individual trial is a success. All trials are assumed to be independent.
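As a numerical sanity check of Equation 3.58, this short Python sketch (hypothetical function name; values from the kicker example) counts the sequences with math.comb and evaluates the probability that the fourth success lands on the sixth attempt.

```python
from math import comb

def neg_binomial_pmf(n, k, p):
    """Probability that the k-th success occurs on the n-th trial."""
    return comb(n - 1, k - 1) * p**k * (1 - p)**(n - k)

p, k, n = 0.8, 4, 6
print(comb(n - 1, k - 1))                    # 10 possible sequences, matching Table 1
print(round(neg_binomial_pmf(n, k, p), 3))   # roughly 0.164
```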
Example $3$
Show using Equation \ref{3.58} that the probability Brian kicks his fourth successful field goal on the sixth attempt is 0.164.

Solution
The probability of a single success is p = 0.8, the number of successes is k = 4, and the number of necessary attempts under this scenario is n = 6.

\begin{align*} \binom {n -1}{k - 1} p^k {(1 - p)}^{n - k} &= \frac {(5)!}{(3)! (2)!} {(0.8)}^4{(0.2)}^2 \\[5pt] &= 10 \times 0.0164 \\[5pt] &= 0.164 \end{align*}

Exercise $\PageIndex{3A}$
The negative binomial distribution requires that each kick attempt by Brian is independent. Do you think it is reasonable to suggest that each of Brian's kick attempts is independent?

Answer
Answers may vary. We cannot conclusively say they are or are not independent. However, many statistical reviews of athletic performance suggest such attempts are very nearly independent.

Exercise $\PageIndex{3B}$
Assume Brian's kick attempts are independent. What is the probability that Brian will kick his fourth field goal within 5 attempts?

Answer
If his fourth field goal (k = 4) is within five attempts, it either took him four or five tries (n = 4 or n = 5). We have p = 0.8 from earlier. Use Equation \ref{3.58} to compute the probability of n = 4 tries and n = 5 tries, then add those probabilities together:

\begin{align*} P(n = 4 \text{ or } n = 5) &= P(n = 4) + P(n = 5) \\[5pt] &= \binom {4 - 1}{4 - 1} 0.8^4 + \binom {5 -1}{4 - 1} {(0.8)}^4 (1 - 0.8) = 1 \times 0.41 + 4 \times 0.082 = 0.41 + 0.33 = 0.71 \end{align*}

TIP: Binomial versus negative binomial
In the binomial case, we typically have a fixed number of trials and instead consider the number of successes. In the negative binomial case, we examine how many trials it takes to observe a fixed number of successes and require that the last observation be a success.

Exercise $\PageIndex{3C}$
On 70% of days, a hospital admits at least one heart attack patient. On 30% of the days, no heart attack patients are admitted. Identify each case below as a binomial or negative binomial case, and compute the probability.
1. What is the probability the hospital will admit a heart attack patient on exactly three days this week?
2. What is the probability the second day with a heart attack patient will be the fourth day of the week?
3. What is the probability the fifth day of next month will be the first day with a heart attack patient?

Answer
In each part, p = 0.7. (a) The number of days is fixed, so this is binomial. The parameters are k = 3 and n = 7: 0.097. (b) The last "success" (admitting a heart attack patient) is fixed to the last day, so we should apply the negative binomial distribution. The parameters are k = 2, n = 4: 0.132. (c) This problem is negative binomial with k = 1 and n = 5: 0.006. Note that the negative binomial case when k = 1 is the same as using the geometric distribution.
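The three hospital cases in Exercise 3C make a nice side-by-side comparison of the binomial, negative binomial, and geometric models. A minimal Python sketch (variable names are assumptions for illustration) reproduces the three answers:

```python
from math import comb

p = 0.7  # probability a given day has at least one heart attack admission

# (a) Binomial: exactly 3 admission days in a fixed week of 7 days
part_a = comb(7, 3) * p**3 * (1 - p)**4

# (b) Negative binomial: the 2nd admission day is the 4th day
part_b = comb(3, 1) * p**2 * (1 - p)**2

# (c) Geometric (negative binomial with k = 1): the first admission day is day 5
part_c = (1 - p)**4 * p

print(round(part_a, 3), round(part_b, 3), round(part_c, 3))  # ~0.097, 0.132, 0.006
```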
The Poisson Distribution

Example $4$
There are about 8 million individuals in New York City. How many individuals might we expect to be hospitalized for acute myocardial infarction (AMI), i.e. a heart attack, each day? According to historical records, the average number is about 4.4 individuals. However, we would also like to know the approximate distribution of counts. What would a histogram of the number of AMI occurrences each day look like if we recorded the daily counts over an entire year?

Solution
A histogram of the number of occurrences of AMI on 365 days46 for NYC is shown in Figure $1$. The sample mean (4.38) is similar to the historical average of 4.4. The sample standard deviation is about 2, and the histogram indicates that about 70% of the data fall between 2.4 and 6.4. The distribution's shape is unimodal and skewed to the right.

46These data are simulated. In practice, we should check for an association between successive days.

The Poisson distribution is often useful for estimating the number of rare events in a large population over a unit of time. For instance, consider each of the following events, which are rare for any given individual:
• having a heart attack,
• getting married, and
• getting struck by lightning.

The Poisson distribution helps us describe the number of such events that will occur in a short unit of time for a fixed population if the individuals within the population are independent. The histogram in Figure $1$ approximates a Poisson distribution with rate equal to 4.4. The rate for a Poisson distribution is the average number of occurrences in a mostly fixed population per unit of time. In Example $4$, the time unit is a day, the population is all New York City residents, and the historical rate is 4.4. The parameter in the Poisson distribution is the rate - or how many rare events we expect to observe - and it is typically denoted by $\lambda$ (the Greek letter lambda) or $\mu$. Using the rate, we can describe the probability of observing exactly k rare events in a single unit of time.

Poisson distribution
Suppose we are watching for rare events and the number of observed events follows a Poisson distribution with rate $\lambda$. Then

$P\text {(observe k rare events)} = \frac {\lambda^k e^{-\lambda}}{k!}$

where k may take a value 0, 1, 2, and so on, and k! represents k-factorial, as described on page 138. The letter $e \approx 2.718$ is the base of the natural logarithm. The mean and standard deviation of this distribution are $\lambda$ and $\sqrt {\lambda}$, respectively.

We will leave a rigorous set of conditions for the Poisson distribution to a later course. However, we offer a few simple guidelines that can be used for an initial evaluation of whether the Poisson model would be appropriate.

TIP: Is it Poisson?
A random variable may follow a Poisson distribution if the event being considered is rare, the population is large, and the events occur independently of each other.

Even when rare events are not really independent - for instance, Saturdays and Sundays are especially popular for weddings - a Poisson model may sometimes still be reasonable if we allow it to have a different rate for different times. In the wedding example, the rate would be modeled as higher on weekends than on weekdays. The idea of modeling rates for a Poisson distribution against a second variable such as dayOfTheWeek forms the foundation of some more advanced methods that fall in the realm of generalized linear models. In Chapters 7 and 8, we will discuss a foundation of linear models.
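The Poisson formula is easy to evaluate directly. A minimal Python sketch (illustrative values using the AMI rate of 4.4; the function name is an assumption) computes a few probabilities along with the distribution's mean and standard deviation.

```python
from math import exp, factorial, sqrt

def poisson_pmf(k, lam):
    """Probability of observing exactly k rare events when the rate is lam."""
    return lam**k * exp(-lam) / factorial(k)

lam = 4.4  # historical AMI rate per day from the example
for k in (0, 4, 10):
    print(f"P(k = {k}) = {poisson_pmf(k, lam):.4f}")

print(f"mean = {lam}, sd = {sqrt(lam):.2f}")
```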
Contributors and Attributions
• David M Diez (Google/YouTube), Christopher D Barr (Harvard School of Public Health), Mine Çetinkaya-Rundel (Duke University)

Normal distribution

3.1 Area under the curve, I. What percent of a standard normal distribution N($\mu$ = 0, $\sigma$ = 1) is found in each region? Be sure to draw a graph. (a) Z < -1.35 (b) Z > 1.48 (c) -0.4 < Z < 1.5 (d) |Z| > 2

3.2 Area under the curve, II. What percent of a standard normal distribution N($\mu$ = 0, $\sigma$ = 1) is found in each region? Be sure to draw a graph. (a) Z > -1.13 (b) Z < 0.18 (c) Z > 8 (d) |Z| < 0.5

3.3 Scores on the GRE, Part I. A college senior who took the Graduate Record Examination exam scored 620 on the Verbal Reasoning section and 670 on the Quantitative Reasoning section. The mean score for the Verbal Reasoning section was 462 with a standard deviation of 119, and the mean score for the Quantitative Reasoning was 584 with a standard deviation of 151. Suppose that both distributions are nearly normal. (a) Write down the short-hand for these two normal distributions. (b) What is her Z score on the Verbal Reasoning section? On the Quantitative Reasoning section? Draw a standard normal distribution curve and mark these two Z scores. (c) What do these Z scores tell you? (d) Relative to others, which section did she do better on? (e) Find her percentile scores for the two exams. (f) What percent of the test takers did better than her on the Verbal Reasoning section? On the Quantitative Reasoning section? (g) Explain why simply comparing her raw scores from the two sections would lead to the incorrect conclusion that she did better on the Quantitative Reasoning section. (h) If the distributions of the scores on these exams are not nearly normal, would your answers to parts (b) - (f) change? Explain your reasoning.

3.4 Triathlon times, Part I. In triathlons, it is common for racers to be placed into age and gender groups. Friends Leo and Mary both completed the Hermosa Beach Triathlon, where Leo competed in the Men, Ages 30 - 34 group while Mary competed in the Women, Ages 25 - 29 group. Leo completed the race in 1:22:28 (4948 seconds), while Mary completed the race in 1:31:53 (5513 seconds). Obviously Leo finished faster, but they are curious about how they did within their respective groups. Can you help them? Here is some information on the performance of their groups: • The finishing times of the Men, Ages 30 - 34 group has a mean of 4313 seconds with a standard deviation of 583 seconds. • The finishing times of the Women, Ages 25 - 29 group has a mean of 5261 seconds with a standard deviation of 807 seconds. • The distributions of finishing times for both groups are approximately Normal. Remember: a better performance corresponds to a faster finish. (a) Write down the short-hand for these two normal distributions. (b) What are the Z scores for Leo's and Mary's finishing times? What do these Z scores tell you? (c) Did Leo or Mary rank better in their respective groups? Explain your reasoning. (d) What percent of the triathletes did Leo finish faster than in his group? (e) What percent of the triathletes did Mary finish faster than in her group? (f) If the distributions of finishing times are not nearly normal, would your answers to parts (b) - (e) change? Explain your reasoning.

3.5 GRE scores, Part II. In Exercise 3.3 we saw two distributions for GRE scores: N($\mu$ = 462, $\sigma$ = 119) for the verbal part of the exam and N($\mu$ = 584, $\sigma$ = 151) for the quantitative part. Use this information to compute each of the following: (a) The score of a student who scored in the 80th percentile on the Quantitative Reasoning section.
(b) The score of a student who scored worse than 70% of the test takers in the Verbal Reasoning section.

3.6 Triathlon times, Part II. In Exercise 3.4 we saw two distributions for triathlon times: N($\mu$ = 4313, $\sigma$ = 583) for Men, Ages 30 - 34 and N($\mu$ = 5261, $\sigma$ = 807) for the Women, Ages 25 - 29 group. Times are listed in seconds. Use this information to compute each of the following: (a) The cutoff time for the fastest 5% of athletes in the men's group, i.e. those who took the shortest 5% of time to finish. (b) The cutoff time for the slowest 10% of athletes in the women's group.

3.7 Temperatures in LA, Part I. The average daily high temperature in June in LA is 77°F with a standard deviation of 5°F. Suppose that the temperatures in June closely follow a normal distribution. (a) What is the probability of observing an 83°F temperature or higher in LA during a randomly chosen day in June? (b) How cold are the coldest 10% of the days during June in LA?

3.8 Portfolio returns. The Capital Asset Pricing Model is a financial model that assumes returns on a portfolio are normally distributed. Suppose a portfolio has an average annual return of 14.7% (i.e. an average gain of 14.7%) with a standard deviation of 33%. A return of 0% means the value of the portfolio doesn't change, a negative return means that the portfolio loses money, and a positive return means that the portfolio gains money. (a) What percent of years does this portfolio lose money, i.e. have a return less than 0%? (b) What is the cutoff for the highest 15% of annual returns with this portfolio?

3.9 Temperatures in LA, Part II. Exercise 3.7 states that the average daily high temperature in June in LA is 77°F with a standard deviation of 5°F, and it can be assumed that they follow a normal distribution. We use the following equation to convert °F (Fahrenheit) to °C (Celsius): $C = (F - 32) \times \frac {5}{9}$ (a) Write the probability model for the distribution of temperature in °C in June in LA. (b) What is the probability of observing a 28°C (which roughly corresponds to 83°F) temperature or higher in June in LA? Calculate using the °C model from part (a). (c) Did you get the same answer or different answers in part (b) of this question and part (a) of Exercise 3.7? Are you surprised? Explain.

3.10 Heights of 10 year olds. Heights of 10 year olds, regardless of gender, closely follow a normal distribution with mean 55 inches and standard deviation 6 inches. (a) What is the probability that a randomly chosen 10 year old is shorter than 48 inches? (b) What is the probability that a randomly chosen 10 year old is between 60 and 65 inches? (c) If the tallest 10% of the class is considered "very tall", what is the height cutoff for "very tall"? (d) The height requirement for Batman the Ride at Six Flags Magic Mountain is 54 inches. What percent of 10 year olds cannot go on this ride?

3.11 Auto insurance premiums. Suppose a newspaper article states that the distribution of auto insurance premiums for residents of California is approximately normal with a mean of $1,650. The article also states that 25% of California residents pay more than $1,800. (a) What is the Z score that corresponds to the top 25% (or the 75th percentile) of the standard normal distribution? (b) What is the mean insurance cost? What is the cutoff for the 75th percentile? (c) Identify the standard deviation of insurance premiums in California.

3.12 Speeding on the I-5, Part I.
The distribution of passenger vehicle speeds traveling on the Interstate 5 Freeway (I-5) in California is nearly normal with a mean of 72.6 miles/hour and a standard deviation of 4.78 miles/hour.47 (a) What percent of passenger vehicles travel slower than 80 miles/hour? (b) What percent of passenger vehicles travel between 60 and 80 miles/hour? (c) How fast do the fastest 5% of passenger vehicles travel? (d) The speed limit on this stretch of the I-5 is 70 miles/hour. Approximate what percentage of the passenger vehicles travel above the speed limit on this stretch of the I-5.

3.13 Overweight baggage. Suppose weights of the checked baggage of airline passengers follow a nearly normal distribution with mean 45 pounds and standard deviation 3.2 pounds. Most airlines charge a fee for baggage that weighs in excess of 50 pounds. Determine what percent of airline passengers incur this fee.

3.14 Find the SD. Find the standard deviation of the distribution in the following situations. (a) MENSA is an organization whose members have IQs in the top 2% of the population. IQs are normally distributed with mean 100, and the minimum IQ score required for admission to MENSA is 132. (b) Cholesterol levels for women aged 20 to 34 follow an approximately normal distribution with mean 185 milligrams per deciliter (mg/dl). Women with cholesterol levels above 220 mg/dl are considered to have high cholesterol and about 18.5% of women fall into this category.

3.15 Buying books on Ebay. The textbook you need to buy for your chemistry class is expensive at the college bookstore, so you consider buying it on Ebay instead. A look at past auctions suggests that the prices of that chemistry textbook have an approximately normal distribution with mean $89 and standard deviation $15. (a) What is the probability that a randomly selected auction for this book closes at more than $100? (b) Ebay allows you to set your maximum bid price so that if someone outbids you on an auction you can automatically outbid them, up to the maximum bid price you set. If you are only bidding on one auction, what are the advantages and disadvantages of setting a bid price too high or too low? What if you are bidding on multiple auctions? (c) If you watched 10 auctions, roughly what percentile might you use for a maximum bid cutoff to be somewhat sure that you will win one of these ten auctions? Is it possible to find a cutoff point that will ensure that you win an auction? (d) If you are willing to track up to ten auctions closely, about what price might you use as your maximum bid price if you want to be somewhat sure that you will buy one of these ten books?

47S. Johnson and D. Murray. "Empirical Analysis of Truck and Automobile Speeds on Rural Interstates: Impact of Posted Speed Limits". In: Transportation Research Board 89th Annual Meeting. 2010.

3.16 SAT scores. SAT scores (out of 2400) are distributed normally with a mean of 1500 and a standard deviation of 300. Suppose a school council awards a certificate of excellence to all students who score at least 1900 on the SAT, and suppose we pick one of the recognized students at random. What is the probability this student's score will be at least 2100? (The material covered in Section 2.2 would be useful for this question.)

3.17 Scores on stats final, Part I. Below are final exam scores of 20 Introductory Statistics students.
$\begin {matrix} 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 \\ 57 & 66 & 69 & 71 & 72 & 73 & 74 & 77 & 78 & 78 & 79 & 79 & 81 & 81 & 82 & 83 & 83 & 88 & 89 & 94 \end {matrix}$

The mean score is 77.7 points, with a standard deviation of 8.44 points. Use this information to determine if the scores approximately follow the 68-95-99.7% Rule.

3.18 Heights of female college students, Part I. Below are heights of 25 female college students.

$\begin {matrix} 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 & 21 & 22 & 23 & 24 & 25 \\ 54 & 55 & 56 & 56 & 57 & 58 & 58 & 59 & 60 & 60 & 60 & 61 & 61 & 62 & 62 & 63 & 63 & 63 & 64 & 65 & 65 & 67 & 67 & 69 & 73 \end {matrix}$

The mean height is 61.52 inches with a standard deviation of 4.58 inches. Use this information to determine if the heights approximately follow the 68-95-99.7% Rule.

Evaluating the Normal approximation

3.19 Scores on stats final, Part II. Exercise 3.17 lists the final exam scores of 20 Introductory Statistics students. Do these data appear to follow a normal distribution? Explain your reasoning using the graphs provided below.

3.20 Heights of female college students, Part II. Exercise 3.18 lists the heights of 25 female college students. Do these data appear to follow a normal distribution? Explain your reasoning using the graphs provided below.

Geometric distribution

3.21 Is it Bernoulli? Determine if each trial can be considered an independent Bernoulli trial for the following situations. (a) Cards dealt in a hand of poker. (b) Outcome of each roll of a die.

3.22 With and without replacement. In the following situations assume that half of the specified population is male and the other half is female. (a) Suppose you're sampling from a room with 10 people. What is the probability of sampling two females in a row when sampling with replacement? What is the probability when sampling without replacement? (b) Now suppose you're sampling from a stadium with 10,000 people. What is the probability of sampling two females in a row when sampling with replacement? What is the probability when sampling without replacement? (c) We often treat individuals who are sampled from a large population as independent. Using your findings from parts (a) and (b), explain whether or not this assumption is reasonable.

3.23 Married women. The 2010 American Community Survey estimates that 47.1% of women ages 15 years and over are married.48 (a) We randomly select three women between these ages. What is the probability that the third woman selected is the only one who is married? (b) What is the probability that all three randomly selected women are married? (c) On average, how many women would you expect to sample before selecting a married woman? What is the standard deviation? (d) If the proportion of married women was actually 30%, how many women would you expect to sample before selecting a married woman? What is the standard deviation? (e) Based on your answers to parts (c) and (d), how does decreasing the probability of an event affect the mean and standard deviation of the wait time until success?

3.24 Defective rate. A machine that produces a special type of transistor (a component of computers) has a 2% defective rate. The production is considered a random process where each transistor is independent of the others. (a) What is the probability that the 10th transistor produced is the first with a defect?
(b) What is the probability that the machine produces no defective transistors in a batch of 100? (c) On average, how many transistors would you expect to be produced before the first with a defect? What is the standard deviation? (d) Another machine that also produces transistors has a 5% defective rate where each transistor is produced independent of the others. On average how many transistors would you expect to be produced with this machine before the first with a defect? What is the standard deviation? (e) Based on your answers to parts (c) and (d), how does increasing the probability of an event affect the mean and standard deviation of the wait time until success?

48U.S. Census Bureau, 2010 American Community Survey, Marital Status.

3.25 Eye color, Part I. A husband and wife both have brown eyes but carry genes that make it possible for their children to have brown eyes (probability 0.75), blue eyes (0.125), or green eyes (0.125). (a) What is the probability the first blue-eyed child they have is their third child? Assume that the eye colors of the children are independent of each other. (b) On average, how many children would such a pair of parents have before having a blue-eyed child? What is the standard deviation of the number of children they would expect to have until the first blue-eyed child?

3.26 Speeding on the I-5, Part II. Exercise 3.12 states that the distribution of speeds of cars traveling on the Interstate 5 Freeway (I-5) in California is nearly normal with a mean of 72.6 miles/hour and a standard deviation of 4.78 miles/hour. The speed limit on this stretch of the I-5 is 70 miles/hour. (a) A highway patrol officer is hidden on the side of the freeway. What is the probability that 5 cars pass and none are speeding? Assume that the speeds of the cars are independent of each other. (b) On average, how many cars would the highway patrol officer expect to watch until the first car that is speeding? What is the standard deviation of the number of cars he would expect to watch?

Binomial distribution

3.27 Underage drinking, Part I. The Substance Abuse and Mental Health Services Administration estimated that 70% of 18-20 year olds consumed alcoholic beverages in 2008.49 (a) Suppose a random sample of ten 18-20 year olds is taken. Is the use of the binomial distribution appropriate for calculating the probability that exactly six consumed alcoholic beverages? Explain. (b) Calculate the probability that exactly 6 out of 10 randomly sampled 18-20 year olds consumed an alcoholic drink. (c) What is the probability that exactly four out of the ten 18-20 year olds have not consumed an alcoholic beverage? (d) What is the probability that at most 2 out of 5 randomly sampled 18-20 year olds have consumed alcoholic beverages? (e) What is the probability that at least 1 out of 5 randomly sampled 18-20 year olds have consumed alcoholic beverages?

3.28 Chickenpox, Part I. The National Vaccine Information Center estimates that 90% of Americans have had chickenpox by the time they reach adulthood.50 (a) Suppose we take a random sample of 100 American adults. Is the use of the binomial distribution appropriate for calculating the probability that exactly 97 had chickenpox before they reached adulthood? Explain. (b) Calculate the probability that exactly 97 out of 100 randomly sampled American adults had chickenpox during childhood. (c) What is the probability that exactly 3 out of a new sample of 100 American adults have not had chickenpox in their childhood?
(d) What is the probability that at least 1 out of 10 randomly sampled American adults have had chickenpox? (e) What is the probability that at most 3 out of 10 randomly sampled American adults have not had chickenpox?

49SAMHSA, Office of Applied Studies, National Survey on Drug Use and Health, 2007 and 2008.
50National Vaccine Information Center, Chickenpox, The Disease & The Vaccine Fact Sheet.

3.29 Underage drinking, Part II. We learned in Exercise 3.27 that about 70% of 18-20 year olds consumed alcoholic beverages in 2008. We now consider a random sample of fifty 18-20 year olds. (a) How many people would you expect to have consumed alcoholic beverages? And with what standard deviation? (b) Would you be surprised if there were 45 or more people who have consumed alcoholic beverages? (c) What is the probability that 45 or more people in this sample have consumed alcoholic beverages? How does this probability relate to your answer to part (b)?

3.30 Chickenpox, Part II. We learned in Exercise 3.28 that about 90% of American adults had chickenpox before adulthood. We now consider a random sample of 120 American adults. (a) How many people in this sample would you expect to have had chickenpox in their childhood? And with what standard deviation? (b) Would you be surprised if there were 105 people who have had chickenpox in their childhood? (c) What is the probability that 105 or fewer people in this sample have had chickenpox in their childhood? How does this probability relate to your answer to part (b)?

3.31 University admissions. Suppose a university announced that it admitted 2,500 students for the following year's freshman class. However, the university has dorm room spots for only 1,786 freshman students. If there is a 70% chance that an admitted student will decide to accept the offer and attend this university, what is the approximate probability that the university will not have enough dormitory room spots for the freshman class?

3.32 Survey response rate. Pew Research reported in 2012 that the typical response rate to their surveys is only 9%. If for a particular survey 15,000 households are contacted, what is the probability that at least 1,500 will agree to respond?51

3.33 Game of dreidel. A dreidel is a four-sided spinning top with the Hebrew letters nun, gimel, hei, and shin, one on each side. Each side is equally likely to come up in a single spin of the dreidel. Suppose you spin a dreidel three times. Calculate the probability of getting52 (a) at least one nun? (b) exactly 2 nuns? (c) exactly 1 hei? (d) at most 2 gimels?

3.34 Arachnophobia. A 2005 Gallup Poll found that 7% of teenagers (ages 13 to 17) suffer from arachnophobia and are extremely afraid of spiders. At a summer camp there are 10 teenagers sleeping in each tent. Assume that these 10 teenagers are independent of each other.53 (a) Calculate the probability that at least one of them suffers from arachnophobia. (b) Calculate the probability that exactly 2 of them suffer from arachnophobia. (c) Calculate the probability that at most 1 of them suffers from arachnophobia. (d) If the camp counselor wants to make sure no more than 1 teenager in each tent is afraid of spiders, does it seem reasonable for him to randomly assign teenagers to tents?

51The Pew Research Center for the People and the Press, Assessing the Representativeness of Public Opinion Surveys, May 15, 2012.
52Photo by Staccabees on Flickr.
53Gallup Poll, What Frightens America's Youth?, March 29, 2005.

3.35 Eye color, Part II.
Exercise 3.25 introduces a husband and wife with brown eyes who have 0.75 probability of having children with brown eyes, 0.125 probability of having children with blue eyes, and 0.125 probability of having children with green eyes. (a) What is the probability that their first child will have green eyes and the second will not? (b) What is the probability that exactly one of their two children will have green eyes? (c) If they have six children, what is the probability that exactly two will have green eyes? (d) If they have six children, what is the probability that at least one will have green eyes? (e) What is the probability that the first green-eyed child will be the 4th child? (f) Would it be considered unusual if only 2 out of their 6 children had brown eyes?

3.36 Sickle cell anemia. Sickle cell anemia is a genetic blood disorder where red blood cells lose their flexibility and assume an abnormal, rigid, "sickle" shape, which results in a risk of various complications. If both parents are carriers of the disease, then a child has a 25% chance of having the disease, 50% chance of being a carrier, and 25% chance of neither having the disease nor being a carrier. If two parents who are carriers of the disease have 3 children, what is the probability that (a) two will have the disease? (b) none will have the disease? (c) at least one will neither have the disease nor be a carrier? (d) the first child with the disease will be the 3rd child?

3.37 Roulette winnings. In the game of roulette, a wheel is spun and you place bets on where it will stop. One popular bet is that it will stop on a red slot; such a bet has an 18/38 chance of winning. If it stops on red, you double the money you bet. If not, you lose the money you bet. Suppose you play 3 times, each time with a $1 bet. Let Y represent the total amount won or lost. Write a probability model for Y.

3.38 Multiple choice quiz. In a multiple choice quiz there are 5 questions and 4 choices for each question (a, b, c, d). Robin has not studied for the quiz at all, and decides to randomly guess the answers. What is the probability that (a) the first question she gets right is the 3rd question? (b) she gets exactly 3 or exactly 4 questions right? (c) she gets the majority of the questions right?

3.39 Exploring combinations. The formula for the number of ways to arrange n objects is $n! = n \times (n - 1) \times \dots \times 2 \times 1$. This exercise walks you through the derivation of this formula for a couple of special cases. A small company has five employees: Anna, Ben, Carl, Damian, and Eddy. There are five parking spots in a row at the company, none of which are assigned, and each day the employees pull into a random parking spot. That is, all possible orderings of the cars in the row of spots are equally likely. (a) On a given day, what is the probability that the employees park in alphabetical order? (b) If the alphabetical order has an equal chance of occurring relative to all other possible orderings, how many ways must there be to arrange the five cars? (c) Now consider a sample of 8 employees instead. How many possible ways are there to order these 8 employees' cars?

3.40 Male children. While it is often assumed that the probabilities of having a boy or a girl are the same, the actual probability of having a boy is slightly higher at 0.51. Suppose a couple plans to have 3 kids. (a) Use the binomial model to calculate the probability that two of them will be boys. (b) Write out all possible orderings of 3 children, 2 of whom are boys.
Use these scenarios to calculate the same probability from part (a) but using the Addition Rule for disjoint events. Confirm that your answers from parts (a) and (b) match. (c) If we wanted to calculate the probability that a couple who plans to have 8 kids will have 3 boys, briefly describe why the approach from part (b) would be more tedious than the approach from part (a).

More discrete distributions

3.41 Identify the distribution. Calculate the following probabilities and indicate which probability distribution model is appropriate in each case. You roll a fair die 5 times. What is the probability of rolling (a) the first 6 on the fifth roll? (b) exactly three 6s? (c) the third 6 on the fifth roll?

3.42 Darts. Calculate the following probabilities and indicate which probability distribution model is appropriate in each case. A very good darts player can hit the bullseye (red circle in the center of the dart board) 65% of the time. What is the probability that he (a) hits the bullseye for the 10th time on the 15th try? (b) hits the bullseye 10 times in 15 tries? (c) hits the first bullseye on the third try?

3.43 Sampling at school. For a sociology class project you are asked to conduct a survey on 20 students at your school. You decide to stand outside of your dorm's cafeteria and conduct the survey on a random sample of 20 students leaving the cafeteria after dinner one evening. Your dorm is comprised of 45% males and 55% females. (a) Which probability model is most appropriate for calculating the probability that the 4th person you survey is the 2nd female? Explain. (b) Compute the probability from part (a). (c) The three possible scenarios that lead to the 4th person you survey being the 2nd female are $(M,M, F, F), (M, F,M, F), (F,M,M, F)$. One common feature among these scenarios is that the last trial is always female. In the first three trials there are 2 males and 1 female. Use the binomial coefficient to confirm that there are 3 ways of ordering 2 males and 1 female. (d) Use the findings presented in part (c) to explain why the formula for the coefficient for the negative binomial is $\binom {n-1}{k-1}$ while the formula for the binomial coefficient is $\binom {n}{k}$.

3.44 Serving in volleyball. A not-so-skilled volleyball player has a 15% chance of making the serve, which involves hitting the ball so it passes over the net on a trajectory such that it will land in the opposing team's court. Suppose that her serves are independent of each other. (a) What is the probability that on the 10th try she will make her 3rd successful serve? (b) Suppose she has made two successful serves in nine attempts. What is the probability that her 10th serve will be successful? (c) Even though parts (a) and (b) discuss the same scenario, the probabilities you calculated should be different. Can you explain the reason for this discrepancy?

3.45 Customers at a coffee shop, Part I. A coffee shop serves an average of 75 customers per hour during the morning rush. (a) Which of the distributions we have studied is most appropriate for calculating the probability of a given number of customers arriving within one hour during this time of day? (b) What are the mean and the standard deviation of the number of customers this coffee shop serves in one hour during this time of day? (c) Would it be considered unusually low if only 60 customers showed up to this coffee shop in one hour during this time of day?

3.46 Stenographer's typos, Part I. A very skilled court stenographer makes one typographical error (typo) per hour on average.
(a) What probability distribution is most appropriate for calculating the probability of a given number of typos this stenographer makes in an hour? (b) What are the mean and the standard deviation of the number of typos this stenographer makes? (c) Would it be considered unusual if this stenographer made 4 typos in a given hour? 3.47 Customers at a coffee shop, Part II. Exercise 3.45 gives the average number of customers visiting a particular coffee shop during the morning rush hour as 75. Calculate the probability that this coffee shop serves 70 customers in one hour during this time of day. 3.48 Stenographer's typos, Part II. Exercise 3.46 gives the average number of typos of a very skilled court stenographer as 1 per hour. Calculate the probability that this stenographer makes at most 2 typos in a given hour. Contributors and Attributions David M Diez (Google/YouTube), Christopher D Barr (Harvard School of Public Health), Mine Çetinkaya-Rundel (Duke University)
textbooks/stats/Introductory_Statistics/OpenIntro_Statistics_(Diez_et_al)./03%3A_Distributions_of_Random_Variables/3.E%3A_Distributions_of_Random_Variables_%28Exercises%29.txt
Statistical inference is concerned primarily with understanding the quality of parameter estimates. For example, a classic inferential question is, "How sure are we that the estimated mean, $\bar {x}$, is near the true population mean, $\mu$?" While the equations and details change depending on the setting, the foundations for inference are the same throughout all of statistics. We introduce these common themes in Sections 4.1-4.4 by discussing inference about the population mean, $\mu$, and set the stage for other parameters and scenarios in Section 4.5. Some advanced considerations are discussed in Section 4.6. Understanding this chapter will make the rest of this book, and indeed the rest of statistics, seem much more familiar.

04: Foundations for Inference

Throughout the next few sections we consider a data set called run10, which represents all 16,924 runners who finished the 2012 Cherry Blossom 10 mile run in Washington, DC.1 Part of this data set is shown in Table 4.1, and the variables are described in Table 4.2.

Table 4.1: Six observations from the run10 data set.
ID       time     age     gender   state
1        92.25    38.00   M        MD
2        106.35   33.00   M        DC
3        89.33    55.00   F        VA
4        113.50   24.00   F        VA
$\vdots$
16923    122.87   37.00   F        VA
16924    93.30    27.00   F        DC

Table 4.2: Variables and their descriptions for the run10 data set.
variable   description
time       Ten mile run time, in minutes
age        Age, in years
gender     Gender (M for male, F for female)
state      Home state (or country if not from the US)

1 http://www.cherryblossom.org

Table 4.3: Four observations for the run10Samp data set, which represents a simple random sample of 100 runners from the 2012 Cherry Blossom Run.
ID       time     age   gender   state
1983     88.31    59    M        MD
8192     100.67   32    M        VA
11020    109.52   33    F        VA
$\vdots$
1287     89.49    26    M        DC

These data are special because they include the results for the entire population of runners who finished the 2012 Cherry Blossom Run. We took a simple random sample of this population, which is represented in Table 4.3. We will use this sample, which we refer to as the run10Samp data set, to draw conclusions about the entire population. This is the practice of statistical inference in the broadest sense. Two histograms summarizing the time and age variables in the run10Samp data set are shown in Figure 4.4.

4.02: Variability in Estimates

We would like to estimate two features of the Cherry Blossom runners using the sample. 1. How long does it take a runner, on average, to complete the 10 miles? 2. What is the average age of the runners?
These questions may be informative for planning the Cherry Blossom Run in future years.2 We will use $x_1, \dots, x_{100}$ to represent the 10 mile time for each runner in our sample, and $y_1, \dots, y_{100}$ will represent the age of each of these participants. 2While we focus on the mean in this chapter, questions regarding variation are often just as important in practice. For instance, we would plan an event very differently if the standard deviation of runner age was 2 versus if it was 20.

Point Estimates

We want to estimate the population mean based on the sample. The most intuitive way to go about doing this is to simply take the sample mean. That is, to estimate the average 10 mile run time of all participants, take the average time for the sample: $\bar {x} = \frac {88.22 + 100.58 + \dots + 89.40}{100} = 95.61$ The sample mean $\bar {x}$ = 95.61 minutes is called a point estimate of the population mean: if we can only choose one value to estimate the population mean, this is our best guess. Suppose we take a new sample of 100 people and recompute the mean; we will probably not get the exact same answer that we got using the run10Samp data set. Estimates generally vary from one sample to another, and this sampling variation suggests our estimate may be close, but it will not be exactly equal to the parameter. We can also estimate the average age of participants by examining the sample mean of age: $\bar {y} = \frac {59 + 32 + \dots + 26}{100} = 35.05$ What about generating point estimates of other population parameters, such as the population median or population standard deviation? Once again we might estimate parameters based on sample statistics, as shown in Table 4.5. For example, we estimate the population standard deviation for the running time using the sample standard deviation, 15.78 minutes.

Table 4.5: Point estimates and parameter values for the time variable.
time       estimate   parameter
mean       95.61      94.52
median     95.37      94.03
st. dev.   15.78      15.93

Exercise 4.1 Suppose we want to estimate the difference in run times for men and women. If $\bar {x} _{men} = 87.65$ and $\bar {x}_{women} = 102.13$, then what would be a good point estimate for the population difference?3 Exercise 4.2 If you had to provide a point estimate of the population IQR for the run time of participants, how might you make such an estimate using a sample?4

Point Estimates are not Exact

Estimates are usually not exactly equal to the truth, but they get better as more data become available. We can see this by plotting a running mean from our run10Samp sample. A running mean is a sequence of means, where each mean uses one more observation in its calculation than the mean directly before it in the sequence. For example, the second mean in the sequence is the average of the first two observations and the third in the sequence is the average of the first three. The running mean for the 10 mile run time in the run10Samp data set is shown in Figure 4.6, and it approaches the true population average, 94.52 minutes, as more data become available.

3We could take the difference of the two sample means: 102.13 - 87.65 = 14.48. Men ran about 14.48 minutes faster on average in the 2012 Cherry Blossom Run. 4To obtain a point estimate of the IQR for the population, we could take the IQR of the sample.

Sample point estimates only approximate the population parameter, and they vary from one sample to another.
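A running mean is straightforward to compute directly. The short Python sketch below is a minimal illustration, using simulated run times in place of the actual run10Samp values (which are not reproduced here); it shows how each successive mean uses one more observation than the one before it and drifts toward the population average as the sample grows.

```python
import numpy as np

# Simulated stand-in for the 100 run times in run10Samp (not the real data).
rng = np.random.default_rng(seed=1)
times = rng.normal(loc=94.52, scale=15.93, size=100)

# running_mean[k] = average of the first k+1 observations.
running_mean = np.cumsum(times) / np.arange(1, len(times) + 1)

print(running_mean[:5])   # early means bounce around quite a bit
print(running_mean[-1])   # the last value is the full sample mean, near 94.52
```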
If we took another simple random sample of the Cherry Blossom runners, we would find that the sample mean for the run time would be a little different. It will be useful to quantify how variable an estimate is from one sample to another. If this variability is small (i.e. the sample mean doesn't change much from one sample to another) then that estimate is probably very accurate. If it varies widely from one sample to another, then we should not expect our estimate to be very good.

Standard Error of the Mean

From the random sample represented in run10Samp, we guessed the average time it takes to run 10 miles is 95.61 minutes. Suppose we take another random sample of 100 individuals and take its mean: 95.30 minutes. Suppose we took another (93.43 minutes) and another (94.16 minutes), and so on. If we do this many, many times - which we can do only because we have the entire population data set - we can build up a sampling distribution for the sample mean when the sample size is 100, shown in Figure 4.7.

Sampling distribution The sampling distribution represents the distribution of the point estimates based on samples of a fixed size from a certain population. It is useful to think of a particular point estimate as being drawn from such a distribution. Understanding the concept of a sampling distribution is central to understanding statistical inference.

The sampling distribution shown in Figure 4.7 is unimodal and approximately symmetric. It is also centered exactly at the true population mean: $\mu$ = 94.52. Intuitively, this makes sense. The sample means should tend to "fall around" the population mean. We can see that the sample mean has some variability around the population mean, which can be quantified using the standard deviation of this distribution of sample means: $\sigma _{\bar {x}} = 1.59$. The standard deviation of the sample mean tells us how far the typical estimate is away from the actual population mean, 94.52 minutes. It also describes the typical error of the point estimate, and for this reason we usually call this standard deviation the standard error (SE) of the estimate.

Standard error of an estimate The standard deviation associated with an estimate is called the standard error. It describes the typical error or uncertainty associated with the estimate.

When considering the case of the point estimate $\bar {x}$, there is one problem: there is no obvious way to estimate its standard error from a single sample. However, statistical theory provides a helpful tool to address this issue.

Exercise 4.3 (a) Would you rather use a small sample or a large sample when estimating a parameter? Why? (b) Using your reasoning from (a), would you expect a point estimate based on a small sample to have smaller or larger standard error than a point estimate based on a larger sample?5

In the sample of 100 runners, the standard error of the sample mean is equal to one-tenth of the population standard deviation: $1.59 = \frac {15.93}{10}$. In other words, the standard error of the sample mean based on 100 observations is equal to $SE_{\bar {x}} = \sigma_{\bar {x}} = \frac {\sigma_x}{\sqrt {100}} = \frac {15.93}{\sqrt {100}} = 1.59$ where $\sigma_x$ is the standard deviation of the individual observations. This is no coincidence. We can show mathematically that this equation is correct when the observations are independent using the probability tools of Section 2.4. 5(a) Consider two random samples: one of size 10 and one of size 1000.
Individual observations in the small sample are highly influential on the estimate while in larger samples these individual observations would more often average each other out. The larger sample would tend to provide a more accurate estimate. (b) If we think an estimate is better, we probably mean it typically has less error. Based on (a), our intuition suggests that a larger sample size corresponds to a smaller standard error.

Computing Standard Error for the sample mean Given n independent observations from a population with standard deviation $\sigma$, the standard error of the sample mean is equal to $SE = \dfrac {\sigma}{\sqrt {n}} \tag {4.4}$ A reliable method to ensure sample observations are independent is to conduct a simple random sample consisting of less than 10% of the population.

There is one subtle issue with Equation (4.4): the population standard deviation is typically unknown. You might have already guessed how to resolve this problem: we can use the point estimate of the standard deviation from the sample. This estimate tends to be sufficiently good when the sample size is at least 30 and the population distribution is not strongly skewed. Thus, we often just use the sample standard deviation s instead of $\sigma$. When the sample size is smaller than 30, we will need to use a method to account for extra uncertainty in the standard error. If the skew condition is not met, a larger sample is needed to compensate for the extra skew. These topics are further discussed in Section 4.4.

Exercise 4.5 In the sample of 100 runners, the standard deviation of the runners' ages is $s_y = 8.97$. Because the sample is simple random and consists of less than 10% of the population, the observations are independent. (a) What is the standard error of the sample mean, $\bar {y} = 35.05$ years? (b) Would you be surprised if someone told you the average age of all the runners was actually 36 years?6

Exercise 4.6 (a) Would you be more trusting of a sample that has 100 observations or 400 observations? (b) We want to show mathematically that our estimate tends to be better when the sample size is larger. If the standard deviation of the individual observations is 10, what is our estimate of the standard error when the sample size is 100? What about when it is 400? (c) Explain how your answer to (b) mathematically justifies your intuition in part (a).7

Basic properties of point estimates We achieved three goals in this section. First, we determined that point estimates from a sample may be used to estimate population parameters. We also determined that these point estimates are not exact: they vary from one sample to another. Lastly, we quantified the uncertainty of the sample mean using what we call the standard error, mathematically represented in Equation (4.4). While we could also quantify the standard error for other estimates - such as the median, standard deviation, or any other number of statistics - we will postpone these extensions until later chapters or courses. 6(a) Use Equation (4.4) with the sample standard deviation to compute the standard error: $SE_{\bar {y}} = \frac {8.97}{\sqrt {100}} = 0.90$ years. (b) It would not be surprising. Our sample is about 1 standard error from 36 years. In other words, 36 years old does not seem to be implausible given that our sample was relatively close to it. (We use the standard error to identify what is close.)
7(a) Extra observations are usually helpful in understanding the population, so a point estimate with 400 observations seems more trustworthy. (b) The standard error when the sample size is 100 is given by $SE_{100} = \frac {10}{\sqrt {100}} = 1$. For 400: $SE_{400} = \frac {10}{\sqrt {400}} = 0.5$. The larger sample has a smaller standard error. (c) The standard error of the sample with 400 observations is lower than that of the sample with 100 observations. The standard error describes the typical error, and since it is lower for the larger sample, this mathematically shows the estimate from the larger sample tends to be better - though it does not guarantee that every large sample will provide a better estimate than a particular small sample.
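The relationship $SE = \sigma / \sqrt{n}$ can also be checked by simulation. The following Python sketch is a hypothetical illustration with simulated data (not the actual run10 population): it draws many samples of size 100 from a normal population with the run time standard deviation quoted above and compares the spread of the resulting sample means to $\sigma / \sqrt{n}$.

```python
import numpy as np

# Draw many samples of size n and look at how much the sample means vary.
rng = np.random.default_rng(seed=2)
mu, sigma, n, reps = 94.52, 15.93, 100, 10_000

sample_means = rng.normal(loc=mu, scale=sigma, size=(reps, n)).mean(axis=1)

print(sample_means.std(ddof=1))   # empirical SE of the mean, roughly 1.59
print(sigma / np.sqrt(n))         # sigma / sqrt(n) = 1.593
```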
textbooks/stats/Introductory_Statistics/OpenIntro_Statistics_(Diez_et_al)./04%3A_Foundations_for_Inference/4.01%3A_Prelude_to_Foundations_for_Inference.txt
A point estimate provides a single plausible value for a parameter. However, a point estimate is rarely perfect; usually there is some error in the estimate. Instead of supplying just a point estimate of a parameter, a next logical step would be to provide a plausible range of values for the parameter. In this section and in Section 4.3, we will emphasize the special case where the point estimate is a sample mean and the parameter is the population mean. In Section 4.5, we generalize these methods for a variety of point estimates and population parameters that we will encounter in Chapter 5 and beyond.

Capturing the Population Parameter

A plausible range of values for the population parameter is called a confidence interval. Using only a point estimate is like fishing in a murky lake with a spear, and using a confidence interval is like fishing with a net. We can throw a spear where we saw a fish, but we will probably miss. On the other hand, if we toss a net in that area, we have a good chance of catching the fish. If we report a point estimate, we probably will not hit the exact population parameter. On the other hand, if we report a range of plausible values - a confidence interval - we have a good shot at capturing the parameter.

Exercise 4.7 If we want to be very certain we capture the population parameter, should we use a wider interval or a narrower interval?8

An Approximate 95% Confidence Interval

Our point estimate is the most plausible value of the parameter, so it makes sense to build the confidence interval around the point estimate. The standard error, which is a measure of the uncertainty associated with the point estimate, provides a guide for how large we should make the confidence interval. The standard error represents the standard deviation associated with the estimate, and roughly 95% of the time the estimate will be within 2 standard errors of the parameter. If the interval spreads out 2 standard errors from the point estimate, we can be roughly 95% confident that we have captured the true parameter: $\text {point estimate} \pm 2 \times SE \label{4.8}$

But what does "95% confident" mean? Suppose we took many samples and built a confidence interval from each sample using Equation \ref{4.8}. Then about 95% of those intervals would contain the actual mean, $\mu$. Figure 4.8 shows this process with 25 samples, where 24 of the resulting confidence intervals contain the average time for all the runners, $\mu = 94.52$ minutes, and one does not.

Exercise 4.9 In Figure 4.8, one interval does not contain 94.52 minutes. Does this imply that the mean cannot be 94.52?9

8If we want to be more certain we will capture the fish, we might use a wider net. Likewise, we use a wider confidence interval if we want to be more certain that we capture the parameter. 9Just as some observations occur more than 2 standard deviations from the mean, some point estimates will be more than 2 standard errors from the parameter. A confidence interval only provides a plausible range of values for a parameter. While we might say other values are implausible based on the data, this does not mean they are impossible. The rule where about 95% of observations are within 2 standard deviations of the mean is only approximately true. However, it holds very well for the normal distribution. As we will soon see, the mean tends to be normally distributed when the sample size is sufficiently large.
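The meaning of "95% confident" can also be explored numerically. The Python sketch below is a hypothetical simulation (not the actual 25 samples behind Figure 4.8): it repeatedly draws a sample, builds the interval in Equation \ref{4.8}, and records how often that interval captures the population mean.

```python
import numpy as np

# Simulate repeated sampling and count how many "x-bar +/- 2 SE" intervals
# capture the true population mean mu.
rng = np.random.default_rng(seed=3)
mu, sigma, n, reps = 94.52, 15.93, 100, 1_000

covered = 0
for _ in range(reps):
    sample = rng.normal(mu, sigma, size=n)
    xbar = sample.mean()
    se = sample.std(ddof=1) / np.sqrt(n)
    if xbar - 2 * se <= mu <= xbar + 2 * se:
        covered += 1

print(covered / reps)  # roughly 0.95
```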
Example 4.10 If the sample mean of times from run10Samp is 95.61 minutes and the standard error, as estimated using the sample standard deviation, is 1.58 minutes, what would be an approximate 95% confidence interval for the average 10 mile time of all runners in the race? Apply the standard error calculated using the sample standard deviation ($SE = \frac {15.78}{\sqrt {100}} = 1.58$), which is how we usually proceed since the population standard deviation is generally unknown.

Solution We apply Equation \ref{4.8}: $95.61 \pm 2 \times 1.58 \rightarrow (92.45, 98.77)$ Based on these data, we are about 95% confident that the average 10 mile time for all runners in the race was larger than 92.45 but less than 98.77 minutes. Our interval extends out 2 standard errors from the point estimate, $\bar {x}$.

Exercise 4.11 The sample data suggest the average runner's age is about 35.05 years with a standard error of 0.90 years (estimated using the sample standard deviation, 8.97). What is an approximate 95% confidence interval for the average age of all of the runners?10

10Again apply Equation \ref{4.8}: $35.05 \pm 2 \times 0.90 \rightarrow (33.25, 36.85)$. We interpret this interval as follows: We are about 95% confident the average age of all participants in the 2012 Cherry Blossom Run was between 33.25 and 36.85 years.

A Sampling Distribution for the Mean

In Section 4.1.3, we introduced a sampling distribution for $\bar {x}$, the average run time for samples of size 100. We examined this distribution earlier in Figure 4.7. Now we'll take 100,000 samples, calculate the mean of each, and plot them in a histogram to get an especially accurate depiction of the sampling distribution. This histogram is shown in the left panel of Figure 4.9. Does this distribution look familiar? Hopefully so! The distribution of sample means closely resembles the normal distribution (see Section 3.1). A normal probability plot of these sample means is shown in the right panel of Figure 4.9. Because all of the points closely fall around a straight line, we can conclude the distribution of sample means is nearly normal. This result can be explained by the Central Limit Theorem.

Central Limit Theorem, informal description If a sample consists of at least 30 independent observations and the data are not strongly skewed, then the distribution of the sample mean is well approximated by a normal model.

We will apply this informal version of the Central Limit Theorem for now, and discuss its details further in Section 4.4. The choice of using 2 standard errors in Equation \ref{4.8} was based on our general guideline that roughly 95% of the time, observations are within two standard deviations of the mean. Under the normal model, we can make this more accurate by using 1.96 in place of 2. $\text {point estimate} \pm 1.96 \times SE \label{4.12}$ If a point estimate, such as $\bar {x}$, is associated with a normal model and standard error SE, then we use this more precise 95% confidence interval.

Changing the confidence level

Suppose we want to consider confidence intervals where the confidence level is somewhat higher than 95%: perhaps we would like a confidence level of 99%. Think back to the analogy about trying to catch a fish: if we want to be more sure that we will catch the fish, we should use a wider net. To create a 99% confidence level, we must also widen our 95% interval. On the other hand, if we want an interval with lower confidence, such as 90%, we could make our original 95% interval slightly slimmer.
The 95% confidence interval structure provides guidance in how to make intervals with new confidence levels. Below is a general 95% confidence interval for a point estimate that comes from a nearly normal distribution: $\text {point estimate} \pm 1.96 \times SE \label{4.13}$ There are three components to this interval: the point estimate, "1.96", and the standard error. The choice of $1.96 \times SE$ was based on capturing 95% of the data since the estimate is within 1.96 standard deviations of the parameter about 95% of the time. The choice of 1.96 corresponds to a 95% confidence level.

Exercise 4.14 If X is a normally distributed random variable, how often will X be within 2.58 standard deviations of the mean?11

To create a 99% confidence interval, change 1.96 in the 95% confidence interval formula to be 2.58. Exercise 4.14 highlights that 99% of the time a normal random variable will be within 2.58 standard deviations of the mean. This approach - using the Z scores in the normal model to compute confidence levels - is appropriate when $\bar {x}$ is associated with a normal distribution with mean $\mu$ and standard deviation $SE_{\bar {x}}$. Thus, the formula for a 99% confidence interval is $\bar {x} \pm 2.58 \times SE_{\bar {x}} \label{4.15}$

The normal approximation is crucial to the precision of these confidence intervals. Section 4.4 provides a more detailed discussion about when the normal model can safely be applied. When the normal model is not a good fit, we will use alternative distributions that better characterize the sampling distribution.

Conditions for $\bar {x}$ being nearly normal and SE being accurate Important conditions to help ensure the sampling distribution of $\bar {x}$ is nearly normal and the estimate of SE sufficiently accurate:
• The sample observations are independent.
• The sample size is large: $n \ge 30$ is a good rule of thumb.
• The distribution of sample observations is not strongly skewed.
Additionally, the larger the sample size, the more lenient we can be with the sample's skew.

11This is equivalent to asking how often the Z score will be larger than -2.58 but less than 2.58. (For a picture, see Figure 4.10.) To determine this probability, look up -2.58 and 2.58 in the normal probability table (0.0049 and 0.9951). Thus, there is a $0.9951-0.0049 \approx 0.99$ probability that the unobserved random variable X will be within 2.58 standard deviations of $\mu$.

Verifying independence is often the most difficult of the conditions to check, and the way to check for independence varies from one situation to another. However, we can provide simple rules for the most common scenarios.

TIP: How to verify sample observations are independent Observations in a simple random sample consisting of less than 10% of the population are independent.

Caution: Independence for random processes and experiments If a sample is from a random process or experiment, it is important to verify the observations from the process or subjects in the experiment are nearly independent and maintain their independence throughout the process or experiment. Usually subjects are considered independent if they undergo random assignment in an experiment.

Exercise 4.16 Create a 99% confidence interval for the average age of all runners in the 2012 Cherry Blossom Run.
The point estimate is $\bar {y} = 35.05$ and the standard error is $SE_{\bar {y}} = 0.90$.12

12The observations are independent (simple random sample, < 10% of the population), the sample size is at least 30 (n = 100), and the distribution is only slightly skewed (Figure 4.4); the normal approximation and estimate of SE should be reasonable. Apply the 99% confidence interval formula: $\bar {y} \pm 2.58 \times SE_{\bar {y}} \rightarrow (32.7, 37.4)$. We are 99% confident that the average age of all runners is between 32.7 and 37.4 years.

Confidence interval for any confidence level If the point estimate follows the normal model with standard error SE, then a confidence interval for the population parameter is $\text {point estimate} \pm z^* \times SE$ where z* corresponds to the confidence level selected. Figure 4.10 provides a picture of how to identify z* based on a confidence level. We select z* so that the area between -z* and z* in the normal model corresponds to the confidence level.

Margin of error In a confidence interval, $z^* \times SE$ is called the margin of error.

Exercise 4.17 Use the data in Exercise 4.16 to create a 90% confidence interval for the average age of all runners in the 2012 Cherry Blossom Run.13

Interpreting confidence intervals A careful eye might have observed the somewhat awkward language used to describe confidence intervals. Correct interpretation: We are XX% confident that the population parameter is between ... Incorrect language might try to describe the confidence interval as capturing the population parameter with a certain probability. This is one of the most common errors: while it might be useful to think of it as a probability, the confidence level only quantifies how plausible it is that the parameter is in the interval. Another especially important consideration of confidence intervals is that they only try to capture the population parameter. Our intervals say nothing about the confidence of capturing individual observations, a proportion of the observations, or about capturing point estimates. Confidence intervals only attempt to capture population parameters.

Nearly normal population with known SD (special topic) In rare circumstances we know important characteristics of a population. For instance, we might know a population is nearly normal and we may also know its parameter values. Even so, we may still like to study characteristics of a random sample from the population. Consider the conditions required for modeling a sample mean using the normal distribution: 1. The observations are independent. 2. The sample size n is at least 30. 3. The data distribution is not strongly skewed.

13We first find z* such that 90% of the distribution falls between -z* and z* in the standard normal model, $N(\mu = 0, \sigma = 1)$. We can look up -z* in the normal probability table by looking for a lower tail of 5% (the other 5% is in the upper tail), thus z* = 1.65. The 90% confidence interval can then be computed as $\bar {y} \pm 1.65 \times SE_{\bar {y}} \rightarrow (33.6, 36.5)$. (We had already verified conditions for normality and the standard error.) That is, we are 90% confident the average age is larger than 33.6 but less than 36.5 years.
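The recipe $\text{point estimate} \pm z^* \times SE$ is easy to wrap in a small helper. The Python sketch below is a minimal illustration (the function name z_interval is made up for this example); it looks up z* for a requested confidence level and reproduces the 90% and 99% intervals for the runners' average age computed above.

```python
from scipy import stats

def z_interval(point_estimate, se, confidence=0.95):
    """Confidence interval based on the normal model: estimate +/- z* x SE."""
    z_star = stats.norm.ppf(1 - (1 - confidence) / 2)  # e.g. 1.96 for 95%
    moe = z_star * se                                   # margin of error
    return point_estimate - moe, point_estimate + moe

# Runner ages: y-bar = 35.05 years, SE = 0.90 years.
print(z_interval(35.05, 0.90, confidence=0.90))  # about (33.6, 36.5)
print(z_interval(35.05, 0.90, confidence=0.99))  # about (32.7, 37.4)
```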
These conditions are required so we can adequately estimate the standard deviation and so we can ensure the distribution of sample means is nearly normal. However, if the population is known to be nearly normal, the sample mean is always nearly normal (this is a special case of the Central Limit Theorem). If the standard deviation is also known, then conditions (2) and (3) are not necessary for those data.

Example 4.18 The heights of male seniors in high school closely follow a normal distribution $N(\mu = 70.43, \sigma = 2.73)$, where the units are inches.14 If we randomly sampled the heights of five male seniors, what distribution should the sample mean follow?

Solution The population is nearly normal, the population standard deviation is known, and the heights represent a random sample from a much larger population, satisfying the independence condition. Therefore the sample mean of the heights will follow a nearly normal distribution with mean $\mu = 70.43$ inches and standard error $SE = \frac {\sigma}{\sqrt {n}} = \frac {2.73}{\sqrt {5}} = 1.22$ inches.

Alternative conditions for applying the normal distribution to model the sample mean If the population of cases is known to be nearly normal and the population standard deviation $\sigma$ is known, then the sample mean $\bar {x}$ will follow a nearly normal distribution $N(\mu, \frac {\sigma}{\sqrt {n}})$ if the sampled observations are also independent.

Sometimes the mean changes over time but the standard deviation remains the same. In such cases, a sample mean of small but nearly normal observations paired with a known standard deviation can be used to produce a confidence interval for the current population mean using the normal distribution.

Example 4.19 Is there a connection between height and popularity in high school? Many students may suspect as much, but what do the data say? Suppose the top 5 nominees for prom king at a high school have an average height of 71.8 inches. Does this provide strong evidence that these seniors' heights are not representative of all male seniors at their high school?

Solution If these five seniors are height-representative, then their heights should be like a random sample from the distribution given in Example 4.18, $N (\mu = 70.43, \sigma = 2.73)$, and the sample mean should follow $N (\mu = 70.43, \frac {\sigma}{\sqrt {n}} = 1.22)$. Formally we are conducting what is called a hypothesis test, which we will discuss in greater detail during the next section. We are weighing two possibilities: • H0: The prom king nominee heights are representative; $\bar {x}$ will follow a normal distribution with mean 70.43 inches and standard error 1.22 inches. • HA: The heights are not representative; we suspect the mean height is different from 70.43 inches. If there is strong evidence that the sample mean is not from the normal distribution provided in H0, then that suggests the heights of prom king nominees are not a simple random sample (i.e. HA is true). We can look at the Z score of the sample mean to tell us how unusual our sample is. If H0 is true: $Z = \frac {\bar {x} - \mu}{\frac {\sigma}{\sqrt {n}}} = \frac {71.8 - 70.43}{1.22} = 1.12$ A Z score of just 1.12 is not very unusual (we typically use a threshold of $\pm2$ to decide what is unusual), so there is not strong evidence against the claim that the heights are representative. This does not mean the heights are actually representative, only that this very small sample does not necessarily show otherwise.

14These values were computed using the USDA Food Commodity Intake Database.

TIP: Relaxing the nearly normal condition As the sample size becomes larger, it is reasonable to slowly relax the nearly normal assumption on the data when dealing with small samples.
By the time the sample size reaches 30, the data must show strong skew for us to be concerned about the normality of the sampling distribution.
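For a concrete numerical check of Example 4.19, the short Python sketch below (an illustration only) recomputes the standard error and the Z score for the five prom king nominees using the known population mean and standard deviation.

```python
import math

# Known population values from Example 4.18 and the observed sample mean.
mu, sigma, n = 70.43, 2.73, 5
xbar = 71.8

se = sigma / math.sqrt(n)   # about 1.22 inches
z = (xbar - mu) / se        # about 1.12, well inside the +/- 2 threshold

print(round(se, 2), round(z, 2))
```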
textbooks/stats/Introductory_Statistics/OpenIntro_Statistics_(Diez_et_al)./04%3A_Foundations_for_Inference/4.03%3A_Confidence_Intervals.txt
Is the typical US runner getting faster or slower over time? We consider this question in the context of the Cherry Blossom Run, comparing runners in 2006 and 2012. Technological advances in shoes, training, and diet might suggest runners would be faster in 2012. An opposing viewpoint might say that with the average body mass index on the rise, people tend to run slower. In fact, all of these components might be influencing run time. In addition to considering run times in this section, we consider a topic near and dear to most students: sleep. A recent study found that college students average about 7 hours of sleep per night.15 However, researchers at a rural college are interested in showing that their students sleep longer than seven hours on average. We investigate this topic in Section 4.3.4.

Hypothesis Testing Framework

The average time for all runners who finished the Cherry Blossom Run in 2006 was 93.29 minutes (93 minutes and about 17 seconds). We want to determine if the run10Samp data set provides strong evidence that the participants in 2012 were faster or slower than those runners in 2006, versus the other possibility that there has been no change.16 We simplify these three options into two competing hypotheses: • H0: The average 10 mile run time was the same for 2006 and 2012. • HA: The average 10 mile run time for 2012 was different than that of 2006. We call H0 the null hypothesis and HA the alternative hypothesis.

Null and alternative hypotheses • The null hypothesis (H0) often represents either a skeptical perspective or a claim to be tested. • The alternative hypothesis (HA) represents an alternative claim under consideration and is often represented by a range of possible parameter values.

15theloquitur.com/?p=1161 16While we could answer this question by examining the entire population data (run10), we only consider the sample data (run10Samp), which is more realistic since we rarely have access to population data.

The null hypothesis often represents a skeptical position or a perspective of no difference. The alternative hypothesis often represents a new perspective, such as the possibility that there has been a change.

Hypothesis testing framework The skeptic will not reject the null hypothesis (H0), unless the evidence in favor of the alternative hypothesis (HA) is so strong that she rejects H0 in favor of HA.

The hypothesis testing framework is a very general tool, and we often use it without a second thought. If a person makes a somewhat unbelievable claim, we are initially skeptical. However, if there is sufficient evidence that supports the claim, we set aside our skepticism and reject the null hypothesis in favor of the alternative. The hallmarks of hypothesis testing are also found in the US court system.

Exercise $1$ A US court considers two possible claims about a defendant: she is either innocent or guilty. If we set these claims up in a hypothesis framework, which would be the null hypothesis and which the alternative?17

Jurors examine the evidence to see whether it convincingly shows a defendant is guilty. Even if the jurors leave unconvinced of guilt beyond a reasonable doubt, this does not mean they believe the defendant is innocent. This is also the case with hypothesis testing: even if we fail to reject the null hypothesis, we typically do not accept the null hypothesis as true. Failing to find strong evidence for the alternative hypothesis is not equivalent to accepting the null hypothesis.
In the example with the Cherry Blossom Run, the null hypothesis represents no difference in the average time from 2006 to 2012. The alternative hypothesis represents something new or more interesting: there was a difference, either an increase or a decrease. These hypotheses can be described in mathematical notation using $\mu_{12}$ as the average run time for 2012: • H0: $\mu_{12} = 93.29$ • HA: $\mu_{12} \ne 93.29$ where 93.29 minutes (93 minutes and about 17 seconds) is the average 10 mile time for all runners in the 2006 Cherry Blossom Run. Using this mathematical notation, the hypotheses can now be evaluated using statistical tools. We call 93.29 the null value since it represents the value of the parameter if the null hypothesis is true. We will use the run10Samp data set to evaluate the hypothesis test.

Testing Hypotheses using Confidence Intervals

We can start the evaluation of the hypothesis setup by comparing 2006 and 2012 run times using a point estimate from the 2012 sample: $\bar {x}_{12} = 95.61$ minutes. This estimate suggests the average time is actually longer than the 2006 time, 93.29 minutes. However, to evaluate whether this provides strong evidence that there has been a change, we must consider the uncertainty associated with $\bar {x}_{12}$.

17The jury considers whether the evidence is so convincing (strong) that there is no reasonable doubt regarding the person's guilt; in such a case, the jury rejects innocence (the null hypothesis) and concludes the defendant is guilty (alternative hypothesis).

We learned in Section 4.1 that there is fluctuation from one sample to another, and it is very unlikely that the sample mean will be exactly equal to our parameter; we should not expect $\bar {x}_{12}$ to exactly equal $\mu_{12}$. Given that $\bar {x}_{12} = 95.61$, it might still be possible that the population average in 2012 has remained unchanged from 2006. The difference between $\bar {x}_{12}$ and 93.29 could be due to sampling variation, i.e. the variability associated with the point estimate when we take a random sample. In Section 4.2, confidence intervals were introduced as a way to find a range of plausible values for the population mean. Based on run10Samp, a 95% confidence interval for the 2012 population mean, $\mu_{12}$, was calculated as $(92.45, 98.77)$ Because the 2006 mean, 93.29, falls in the range of plausible values, we cannot say the null hypothesis is implausible. That is, we failed to reject the null hypothesis, H0.

Double negatives can sometimes be used in statistics In many statistical explanations, we use double negatives. For instance, we might say that the null hypothesis is not implausible or we failed to reject the null hypothesis. Double negatives are used to communicate that while we are not rejecting a position, we are also not saying it is correct.

Example $1$ Next consider whether there is strong evidence that the average age of runners has changed from 2006 to 2012 in the Cherry Blossom Run. In 2006, the average age was 36.13 years, and in the 2012 run10Samp data set, the average was 35.05 years with a standard deviation of 8.97 years for 100 runners.

Solution First, set up the hypotheses: • H0: The average age of runners has not changed from 2006 to 2012, $\mu_{age} = 36.13.$ • HA: The average age of runners has changed from 2006 to 2012, $\mu_{age} \ne 36.13.$ We have previously verified conditions for this data set. The normal model may be applied to $\bar {y}$ and the estimate of SE should be very accurate.
Using the sample mean and standard error, we can construct a 95% confidence interval for $\mu _{age}$ to determine if there is sufficient evidence to reject H0: $\bar{y} \pm 1.96 \times \dfrac {s}{\sqrt {100}} \rightarrow 35.05 \pm 1.96 \times 0.90 \rightarrow (33.29, 36.81)$ This confidence interval contains the null value, 36.13. Because 36.13 is not implausible, we cannot reject the null hypothesis. We have not found strong evidence that the average age is different than 36.13 years.

Exercise $2$ Colleges frequently provide estimates of student expenses such as housing. A consultant hired by a community college claimed that the average student housing expense was $650 per month. What are the null and alternative hypotheses to test whether this claim is accurate?

Solution H0: The average cost is $650 per month, $\mu$ = $650. HA: The average cost is different than $650 per month, $\mu \ne$ $650.

Exercise $3$ The community college decides to collect data to evaluate the $650 per month claim. They take a random sample of 75 students at their school and obtain the data represented in Figure 4.11. Can we apply the normal model to the sample mean?

Solution Applying the normal model requires that certain conditions are met. Because the data are a simple random sample and the sample (presumably) represents no more than 10% of all students at the college, the observations are independent. The sample size is also sufficiently large (n = 75) and the data exhibit only moderate skew. Thus, the normal model may be applied to the sample mean.

Example $2$ The sample mean for student housing is $611.63 and the sample standard deviation is $132.85. Construct a 95% confidence interval for the population mean and evaluate the hypotheses of Exercise 4.22.

Solution The standard error associated with the mean may be estimated using the sample standard deviation divided by the square root of the sample size. Recall that n = 75 students were sampled. $SE = \dfrac {s}{\sqrt {n}} = \dfrac {132.85}{\sqrt {75}} = 15.34$ You showed in Exercise 4.23 that the normal model may be applied to the sample mean. This ensures a 95% confidence interval may be accurately constructed: $\bar {x} \pm z^* \times SE \rightarrow 611.63 \pm 1.96 \times 15.34 \rightarrow (581.56, 641.70)$ Because the null value $650 is not in the confidence interval, a true mean of $650 is implausible and we reject the null hypothesis. The data provide statistically significant evidence that the actual average housing expense is less than $650 per month.

Decision Errors

Hypothesis tests are not flawless. Just think of the court system: innocent people are sometimes wrongly convicted and the guilty sometimes walk free. Similarly, we can make a wrong decision in statistical hypothesis tests. However, the difference is that we have the tools necessary to quantify how often we make such errors. There are two competing hypotheses: the null and the alternative. In a hypothesis test, we make a statement about which one might be true, but we might choose incorrectly. There are four possible scenarios in a hypothesis test, which are summarized in Table 4.12.

Table 4.12: Four different scenarios for hypothesis tests.
                 Test conclusion: do not reject H0     Test conclusion: reject H0 in favor of HA
H0 true          okay                                  Type 1 Error
HA true          Type 2 Error                          okay

A Type 1 Error is rejecting the null hypothesis when H0 is actually true.
A Type 2 Error is failing to reject the null hypothesis when the alternative is actually true.

Exercise 4.25 In a US court, the defendant is either innocent (H0) or guilty (HA). What does a Type 1 Error represent in this context? What does a Type 2 Error represent? Table 4.12 may be useful.

Solution If the court makes a Type 1 Error, this means the defendant is innocent (H0 true) but wrongly convicted. A Type 2 Error means the court failed to reject H0 (i.e. failed to convict the person) when she was in fact guilty (HA true).

Exercise 4.26 How could we reduce the Type 1 Error rate in US courts? What influence would this have on the Type 2 Error rate?

Solution To lower the Type 1 Error rate, we might raise our standard for conviction from "beyond a reasonable doubt" to "beyond a conceivable doubt" so fewer people would be wrongly convicted. However, this would also make it more difficult to convict the people who are actually guilty, so we would make more Type 2 Errors.

Exercise 4.27 How could we reduce the Type 2 Error rate in US courts? What influence would this have on the Type 1 Error rate?

Solution To lower the Type 2 Error rate, we want to convict more guilty people. We could lower the standards for conviction from "beyond a reasonable doubt" to "beyond a little doubt". Lowering the bar for guilt will also result in more wrongful convictions, raising the Type 1 Error rate.

Exercises 4.25-4.27 provide an important lesson: If we reduce how often we make one type of error, we generally make more of the other type. Hypothesis testing is built around rejecting or failing to reject the null hypothesis. That is, we do not reject H0 unless we have strong evidence. But what precisely does strong evidence mean? As a general rule of thumb, for those cases where the null hypothesis is actually true, we do not want to incorrectly reject H0 more than 5% of the time. This corresponds to a significance level of 0.05. We often write the significance level using $\alpha$ (the Greek letter alpha): $\alpha = 0.05.$ We discuss the appropriateness of different significance levels in Section 4.3.6.

If we use a 95% confidence interval to test a hypothesis where the null hypothesis is true, we will make an error whenever the point estimate is at least 1.96 standard errors away from the population parameter. This happens about 5% of the time (2.5% in each tail). Similarly, using a 99% confidence interval to evaluate a hypothesis is equivalent to a significance level of $\alpha = 0.01$.

A confidence interval is, in one sense, simplistic in the world of hypothesis tests. Consider the following two scenarios: • The null value (the parameter value under the null hypothesis) is in the 95% confidence interval but just barely, so we would not reject H0. However, we might like to somehow say, quantitatively, that it was a close decision. • The null value is very far outside of the interval, so we reject H0. However, we want to communicate that, not only did we reject the null hypothesis, but it wasn't even close. Such a case is depicted in Figure 4.13. In Section 4.3.4, we introduce a tool called the p-value that will be helpful in these cases. The p-value method also extends to hypothesis tests where confidence intervals cannot be easily constructed or applied.

Formal Testing using p-Values

The p-value is a way of quantifying the strength of the evidence against the null hypothesis and in favor of the alternative. Formally the p-value is a conditional probability.
definition: p-value The p-value is the probability of observing data at least as favorable to the alternative hypothesis as our current data set, if the null hypothesis is true. We typically use a summary statistic of the data, in this chapter the sample mean, to help compute the p-value and evaluate the hypotheses.

Exercise $1$ A poll by the National Sleep Foundation found that college students average about 7 hours of sleep per night. Researchers at a rural school are interested in showing that students at their school sleep longer than seven hours on average, and they would like to demonstrate this using a sample of students. What would be an appropriate skeptical position for this research?

Solution A skeptic would have no reason to believe that sleep patterns at this school are different than the sleep patterns at another school.

We can set up the null hypothesis for this test as a skeptical perspective: the students at this school average 7 hours of sleep per night. The alternative hypothesis takes a new form reflecting the interests of the researchers: the students average more than 7 hours of sleep. We can write these hypotheses as • H0: $\mu$ = 7. • HA: $\mu$ > 7. Using $\mu$ > 7 as the alternative is an example of a one-sided hypothesis test. In this investigation, there is no apparent interest in learning whether the mean is less than 7 hours. This setup is entirely based on the interests of the researchers. Had they been only interested in the opposite case - showing that their students were actually averaging fewer than seven hours of sleep but not interested in showing more than 7 hours - then our setup would have set the alternative as $\mu < 7$. Earlier we encountered a two-sided hypothesis where we looked for any clear difference, greater than or less than the null value. Always use a two-sided test unless it was made clear prior to data collection that the test should be one-sided. Switching a two-sided test to a one-sided test after observing the data is dangerous because it can inflate the Type 1 Error rate.

TIP: One-sided and two-sided tests If the researchers are only interested in showing an increase or a decrease, but not both, use a one-sided test. If the researchers would be interested in any difference from the null value - an increase or decrease - then the test should be two-sided.

TIP: Always write the null hypothesis as an equality We will find it most useful if we always list the null hypothesis as an equality (e.g. $\mu = 7$) while the alternative always uses an inequality (e.g. $\mu \ne 7$, $\mu > 7$, or $\mu < 7$).

The researchers at the rural school conducted a simple random sample of n = 110 students on campus. They found that these students averaged 7.42 hours of sleep and the standard deviation of the amount of sleep for the students was 1.75 hours. A histogram of the sample is shown in Figure 4.14. Before we can use a normal model for the sample mean or compute the standard error of the sample mean, we must verify conditions. (1) Because this is a simple random sample from less than 10% of the student body, the observations are independent. (2) The sample size in the sleep study is sufficiently large since it is greater than 30. (3) The data show moderate skew in Figure 4.14 and the presence of a couple of outliers. This skew and the outliers (which are not too extreme) are acceptable for a sample size of n = 110.
With these conditions verified, the normal model can be safely applied to $\bar {x}$ and the estimated standard error will be very accurate.

Exercise $1$ What is the standard deviation associated with $\bar {x}$? That is, estimate the standard error of $\bar {x}$.25

25The standard error can be estimated from the sample standard deviation and the sample size: $SE_{\bar {x}} = \dfrac {s_x}{\sqrt {n}} = \dfrac {1.75}{\sqrt {110}} = 0.17$.

The hypothesis test will be evaluated using a significance level of $\alpha = 0.05$. We want to consider the data under the scenario that the null hypothesis is true. In this case, the sample mean is from a distribution that is nearly normal and has mean 7 and standard deviation of about 0.17. Such a distribution is shown in Figure 4.15. The shaded tail in Figure 4.15 represents the chance of observing such a large mean, conditional on the null hypothesis being true. That is, the shaded tail represents the p-value. We shade all means larger than our sample mean, $\bar {x} = 7.42$, because they are more favorable to the alternative hypothesis than the observed mean. We compute the p-value by finding the tail area of this normal distribution, which we learned to do in Section 3.1. First compute the Z score of the sample mean, $\bar {x} = 7.42$: $Z = \dfrac {\bar {x} - \text {null value}}{SE_{\bar {x}}} = \dfrac {7.42 - 7}{0.17} = 2.47$ Using the normal probability table, the lower unshaded area is found to be 0.993. Thus the shaded area is 1 - 0.993 = 0.007. If the null hypothesis is true, the probability of observing such a large sample mean for a sample of 110 students is only 0.007. That is, if the null hypothesis is true, we would not often see such a large mean. We evaluate the hypotheses by comparing the p-value to the significance level. Because the p-value is less than the significance level ($\text{p-value} = 0.007 < 0.05 = \alpha$), we reject the null hypothesis. What we observed is so unusual with respect to the null hypothesis that it casts serious doubt on H0 and provides strong evidence favoring HA.

p-value as a tool in hypothesis testing The p-value quantifies how strongly the data favor HA over H0. A small p-value (usually < 0.05) corresponds to sufficient evidence to reject H0 in favor of HA.

TIP: It is useful to first draw a picture to find the p-value It is useful to draw a picture of the distribution of $\bar {x}$ as though H0 were true (i.e. $\mu$ equals the null value), and shade the region (or regions) of sample means that are at least as favorable to the alternative hypothesis. These shaded regions represent the p-value.

The ideas below review the process of evaluating hypothesis tests with p-values: • The null hypothesis represents a skeptic's position or a position of no difference. We reject this position only if the evidence strongly favors HA. • A small p-value means that if the null hypothesis is true, there is a low probability of seeing a point estimate at least as extreme as the one we saw. We interpret this as strong evidence in favor of the alternative. • We reject the null hypothesis if the p-value is smaller than the significance level, $\alpha$, which is usually 0.05. Otherwise, we fail to reject H0. • We should always state the conclusion of the hypothesis test in plain language so non-statisticians can also understand the results.

The p-value is constructed in such a way that we can directly compare it to the significance level ($\alpha$) to determine whether or not to reject H0. This method ensures that the Type 1 Error rate does not exceed the significance level standard.

Exercise If the null hypothesis is true, how often should the p-value be less than 0.05?

Solution About 5% of the time.
If the null hypothesis is true, then the data only has a 5% chance of being in the 5% of data most favorable to HA.

Exercise 4.31 Suppose we had used a significance level of 0.01 in the sleep study. Would the evidence have been strong enough to reject the null hypothesis? (The p-value was 0.007.) What if the significance level was $\alpha = 0.001$?27

27We reject the null hypothesis whenever p-value < $\alpha$. Thus, we would still reject the null hypothesis if $\alpha = 0.01$ but not if the significance level had been $\alpha = 0.001$.

Exercise 4.32 Ebay might be interested in showing that buyers on its site tend to pay less than they would for the corresponding new item on Amazon. We'll research this topic for one particular product: a video game called Mario Kart for the Nintendo Wii. During early October 2009, Amazon sold this game for $46.99. Set up an appropriate (one-sided!) hypothesis test to check the claim that Ebay buyers pay less during auctions at this same time.28

28The skeptic would say the average is the same on Ebay, and we are interested in showing the average price is lower. H0: The average auction price on Ebay is equal to (or more than) the price on Amazon. We write only the equality in the statistical notation: $\mu_{ebay} = 46.99$. HA: The average price on Ebay is less than the price on Amazon, $\mu _{ebay} < 46.99$.

Exercise 4.33 During early October, 2009, 52 Ebay auctions were recorded for Mario Kart.29 The total prices for the auctions are presented using a histogram in Figure 4.17, and we may like to apply the normal model to the sample mean. Check the three conditions required for applying the normal model: (1) independence, (2) at least 30 observations, and (3) the data are not strongly skewed.30

29These data were collected by OpenIntro staff. 30(1) The independence condition is unclear. We will make the assumption that the observations are independent, which we should report with any final results. (2) The sample size is sufficiently large: $n = 52 \ge 30$. (3) The data distribution is not strongly skewed; it is approximately symmetric.

Example 4.34 The average sale price of the 52 Ebay auctions for Wii Mario Kart was $44.17 with a standard deviation of $4.15. Does this provide sufficient evidence to reject the null hypothesis in Exercise 4.32? Use a significance level of $\alpha = 0.01$. The hypotheses were set up and the conditions were checked in Exercises 4.32 and 4.33. The next step is to find the standard error of the sample mean, $SE = \dfrac{s}{\sqrt{n}} = \dfrac{4.15}{\sqrt{52}} = 0.5755$, and produce a sketch to help find the p-value. Because the alternative hypothesis says we are looking for a smaller mean, we shade the lower tail. We find this shaded area by using the Z score and normal probability table: $Z = \dfrac {44.17 - 46.99}{0.5755} = -4.90$, which has area less than 0.0002. The area is so small we cannot really see it on the picture. This lower tail area corresponds to the p-value. Because the p-value is so small - specifically, smaller than $\alpha = 0.01$ - this provides sufficiently strong evidence to reject the null hypothesis in favor of the alternative. The data provide statistically significant evidence that the average price on Ebay is lower than Amazon's asking price.

Two-sided hypothesis testing with p-values

We now consider how to compute a p-value for a two-sided test. In one-sided tests, we shade the single tail in the direction of the alternative hypothesis. For example, when the alternative had the form $\mu$ > 7, then the p-value was represented by the upper tail (Figure 4.16).
When the alternative was $\mu$ < 46.99, the p-value was the lower tail (Example 4.34). In a two-sided test, we shade two tails since evidence in either direction is favorable to HA.

Exercise 4.35 Earlier we talked about a research group investigating whether the students at their school slept longer than 7 hours each night. Let's consider a second group of researchers who want to evaluate whether the students at their college differ from the norm of 7 hours. Write the null and alternative hypotheses for this investigation.31 31Because the researchers are interested in any difference, they should use a two-sided setup: H0 : $\mu$ = 7, HA : $\mu \ne 7.$

Example 4.36 The second college randomly samples 72 students and finds a mean of $\bar {x} = 6.83$ hours and a standard deviation of s = 1.8 hours. Does this provide strong evidence against H0 in Exercise 4.35? Use a significance level of $\alpha = 0.05$. First, we must verify assumptions. (1) A simple random sample of less than 10% of the student body means the observations are independent. (2) The sample size is 72, which is greater than 30. (3) Based on the earlier distribution and what we already know about college student sleep habits, the distribution is probably not strongly skewed. Next we can compute the standard error $(SE_{\bar {x}} = \dfrac {s}{\sqrt {n}} = 0.21)$ of the estimate and create a picture to represent the p-value, shown in Figure 4.18. Both tails are shaded. An estimate of 7.17 or more provides at least as strong of evidence against the null hypothesis and in favor of the alternative as the observed estimate, $\bar {x} = 6.83$. We can calculate the tail areas by first finding the lower tail corresponding to $\bar {x}$: $Z = \dfrac {6.83 - 7.00}{0.21} = -0.81 \xrightarrow {table} \text {left tail} = 0.2090$ Because the normal model is symmetric, the right tail will have the same area as the left tail. The p-value is found as the sum of the two shaded tails: $\text {p-value} = \text {left tail} + \text {right tail} = 2 \times \text {(left tail)} = 0.4180$ This p-value is relatively large (larger than $\alpha = 0.05$), so we should not reject H0. That is, if H0 is true, it would not be very unusual to see a sample mean this far from 7 hours simply due to sampling variation. Thus, we do not have sufficient evidence to conclude that the mean is different than 7 hours.
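For readers who like to verify such calculations numerically, here is a small sketch of the two-sided computation from Example 4.36, again using only Python's standard library; the inputs ($\bar {x}$ = 6.83, null value 7, s = 1.8, n = 72) come from the example and the function name is ours.

```python
# A small sketch of the two-sided p-value from Example 4.36. The inputs
# (x_bar = 6.83, null value 7, s = 1.8, n = 72) come from the example;
# the function name is our own.
from math import sqrt
from statistics import NormalDist

def two_sided_p_value(x_bar, null_value, s, n):
    se = s / sqrt(n)                         # standard error of the sample mean
    z = (x_bar - null_value) / se            # test statistic
    one_tail = 1 - NormalDist().cdf(abs(z))  # area in one tail
    return 2 * one_tail                      # shade both tails

print(two_sided_p_value(6.83, 7, 1.8, 72))   # about 0.42
```

The small difference from 0.4180 comes from rounding the Z score to -0.81 before using the table.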
Example 4.37 It is never okay to change two-sided tests to one-sided tests after observing the data. In this example we explore the consequences of ignoring this advice. Using $\alpha = 0.05$, we show that freely switching from two-sided tests to one-sided tests will cause us to make twice as many Type 1 Errors as intended. Suppose the sample mean was larger than the null value, $\mu_0$ (e.g. $\mu_0$ would represent 7 if H0: $\mu$ = 7). Then if we can flip to a one-sided test, we would use HA: $\mu > \mu_0$. Now if we obtain any observation with a Z score greater than 1.65, we would reject H0. If the null hypothesis is true, we incorrectly reject the null hypothesis about 5% of the time when the sample mean is above the null value, as shown in Figure 4.19. Suppose the sample mean was smaller than the null value. Then if we change to a one-sided test, we would use HA: $\mu < \mu_0$. If $\bar {x}$ had a Z score smaller than -1.65, we would reject H0. If the null hypothesis is true, then we would observe such a case about 5% of the time. By examining these two scenarios, we can determine that we will make a Type 1 Error 5% + 5% = 10% of the time if we are allowed to swap to the "best" one-sided test for the data. This is twice the error rate we prescribed with our significance level: $\alpha = 0.05$ (!).

Caution: One-sided hypotheses are allowed only before seeing data After observing data, it is tempting to turn a two-sided test into a one-sided test. Avoid this temptation. Hypotheses must be set up before observing the data. If they are not, the test must be two-sided.

Choosing a Significance Level Choosing a significance level for a test is important in many contexts, and the traditional level is 0.05. However, it is often helpful to adjust the significance level based on the application. We may select a level that is smaller or larger than 0.05 depending on the consequences of any conclusions reached from the test.
• If making a Type 1 Error is dangerous or especially costly, we should choose a small significance level (e.g. 0.01). Under this scenario we want to be very cautious about rejecting the null hypothesis, so we demand very strong evidence favoring HA before we would reject H0.
• If a Type 2 Error is relatively more dangerous or much more costly than a Type 1 Error, then we should choose a higher significance level (e.g. 0.10). Here we want to be cautious about failing to reject H0 when the null is actually false. We will discuss this particular case in greater detail in Section 4.6.

Significance levels should reflect consequences of errors The significance level selected for a test should reflect the consequences associated with Type 1 and Type 2 Errors.

Example 4.38 A car manufacturer is considering a higher quality but more expensive supplier for window parts in its vehicles. They sample a number of parts from their current supplier and also parts from the new supplier. They decide that if the high quality parts will last more than 12% longer, it makes financial sense to switch to this more expensive supplier. Is there good reason to modify the significance level in such a hypothesis test? The null hypothesis is that the more expensive parts last no more than 12% longer while the alternative is that they do last more than 12% longer. This decision is just one of the many regular factors that have a marginal impact on the car and company. A significance level of 0.05 seems reasonable since neither a Type 1 nor a Type 2 Error should be dangerous or (relatively) much more expensive.

Example 4.39 The same car manufacturer is considering a slightly more expensive supplier for parts related to safety, not windows. If the durability of these safety components is shown to be better than the current supplier, they will switch manufacturers. Is there good reason to modify the significance level in such an evaluation? The null hypothesis would be that the suppliers' parts are equally reliable. Because safety is involved, the car company should be eager to switch to the slightly more expensive manufacturer (reject H0) even if the evidence of increased safety is only moderately strong. A slightly larger significance level, such as $\alpha = 0.10$, might be appropriate.

Exercise 4.40 A part inside of a machine is very expensive to replace. However, the machine usually functions properly even if this part is broken, so the part is replaced only if we are extremely certain it is broken based on a series of measurements. Identify appropriate hypotheses for this test (in plain language) and suggest an appropriate significance level.32
Examining the Central Limit Theorem

The normal model for the sample mean tends to be very good when the sample consists of at least 30 independent observations and the population data are not strongly skewed. The Central Limit Theorem provides the theory that allows us to make this assumption.

Central Limit Theorem - informal definition The distribution of $\bar {x}$ is approximately normal. The approximation can be poor if the sample size is small, but it improves with larger sample sizes.

The Central Limit Theorem states that when the sample size is small, the normal approximation may not be very good. However, as the sample size becomes large, the normal approximation improves. We will investigate three cases to see roughly when the approximation is reasonable. We consider three data sets: one from a uniform distribution, one from an exponential distribution, and the other from a log-normal distribution. These distributions are shown in the top panels of Figure 4.20. The uniform distribution is symmetric, the exponential distribution may be considered as having moderate skew since its right tail is relatively short (few outliers), and the log-normal distribution is strongly skewed and will tend to produce more apparent outliers. The left panel in the n = 2 row represents the sampling distribution of $\bar {x}$ if it is the sample mean of two observations from the uniform distribution shown. The dashed line represents the closest approximation of the normal distribution. Similarly, the center and right panels of the n = 2 row represent the respective distributions of $\bar {x}$ for data from exponential and log-normal distributions.

32Here the null hypothesis is that the part is not broken, and the alternative is that it is broken. If we don't have sufficient evidence to reject H0, we would not replace the part. It sounds like failing to fix the part if it is broken (H0 false, HA true) is not very problematic, and replacing the part is expensive. Thus, we should require very strong evidence against H0 before we replace the part. Choose a small significance level, such as $\alpha = 0.01$.

Exercise 4.41 Examine the distributions in each row of Figure 4.20. What do you notice about the normal approximation for each sampling distribution as the sample size becomes larger?33 33The normal approximation becomes better as larger samples are used.

Example 4.42 Would the normal approximation be good in all applications where the sample size is at least 30? Not necessarily. For example, the normal approximation for the log-normal example is questionable for a sample size of 30. Generally, the more skewed a population distribution or the more common the frequency of outliers, the larger the sample required to guarantee the distribution of the sample mean is nearly normal.

TIP: With larger n, the sampling distribution of $\bar {x}$ becomes more normal As the sample size increases, the normal model for $\bar {x}$ becomes more reasonable. We can also relax our condition on skew when the sample size is very large. We discussed in Section 4.1.3 that the sample standard deviation, s, could be used as a substitute of the population standard deviation, $\sigma$, when computing the standard error. This estimate tends to be reasonable when $n \ge 30$. We will encounter alternative distributions for smaller sample sizes in Chapters 5 and 6.
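The pattern in Figure 4.20 can also be explored by simulation. The sketch below is our own illustration rather than part of the original text: it repeatedly draws samples from a strongly skewed log-normal population and summarizes the resulting sample means. Plotting histograms of these means for increasing n would reproduce the qualitative behavior shown in the figure.

```python
# Our own rough simulation of the idea behind Figure 4.20: sample means from a
# strongly skewed (log-normal) population settle down as n grows. The population
# parameters are arbitrary; histograms of `means` would mimic the figure.
import random
from statistics import mean, pstdev

random.seed(1)

def sample_means(n, reps=10_000):
    """Means of `reps` samples, each of size n, from a log-normal population."""
    return [mean(random.lognormvariate(0, 1) for _ in range(n)) for _ in range(reps)]

for n in (2, 12, 30, 100):
    means = sample_means(n)
    # The spread of the sample means shrinks roughly like sigma / sqrt(n).
    print(f"n = {n:3d}: mean of sample means = {mean(means):.3f}, SD = {pstdev(means):.3f}")
```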
Example 4.43 Figure 4.21 shows a histogram of 50 observations. These represent winnings and losses from 50 consecutive days of a professional poker player. Can the normal approximation be applied to the sample mean, 90.69? We should consider each of the required conditions. 1. These are referred to as time series data, because the data arrived in a particular sequence. If the player wins on one day, it may influence how she plays the next. To make the assumption of independence we should perform careful checks on such data. While the supporting analysis is not shown, no evidence was found to indicate the observations are not independent. 2. The sample size is 50, satisfying the sample size condition. 3. There are two outliers, one very extreme, which suggests the data are very strongly skewed or very distant outliers may be common for this type of data. Outliers can play an important role and affect the distribution of the sample mean and the estimate of the standard error. Since we should be skeptical of the independence of observations and the very extreme upper outlier poses a challenge, we should not use the normal model for the sample mean of these 50 observations. If we can obtain a much larger sample, perhaps several hundred observations, then the concerns about skew and outliers would no longer apply.

Caution: Examine data structure when considering independence Some data sets are collected in such a way that they have a natural underlying structure between observations, e.g. when observations occur consecutively. Be especially cautious about independence assumptions regarding such data sets.

Caution: Watch out for strong skew and outliers Strong skew is often identified by the presence of clear outliers. If a data set has prominent outliers, or such observations are somewhat common for the type of data under study, then it is useful to collect a sample with many more than 30 observations if the normal model will be used for $\bar {x}$. There are no simple guidelines for what sample size is big enough for all situations, so proceed with caution when working in the presence of strong skew or more extreme outliers. You won’t be a pro at assessing skew by the end of this book, so just use your best judgement and continue learning. As you develop your statistics skills and encounter tough situations, also consider learning about better ways to analyze skewed data, such as the studentized bootstrap (bootstrap-t), or consult a more experienced statistician.
Inference for Other Estimators

The sample mean is not the only point estimate for which the sampling distribution is nearly normal. For example, the sampling distribution of sample proportions closely resembles the normal distribution when the sample size is sufficiently large. In this section, we introduce a number of examples where the normal approximation is reasonable for the point estimate. Chapters 5 and 6 will revisit each of the point estimates you see in this section along with some other new statistics. We make another important assumption about each point estimate encountered in this section: the estimate is unbiased. A point estimate is unbiased if the sampling distribution of the estimate is centered at the parameter it estimates. That is, an unbiased estimate does not naturally over or underestimate the parameter. Rather, it tends to provide a "good" estimate. The sample mean is an example of an unbiased point estimate, as are each of the examples we introduce in this section. Finally, we will discuss the general case where a point estimate may follow some distribution other than the normal distribution. We also provide guidance about how to handle scenarios where the statistical techniques you are familiar with are insufficient for the problem at hand.

Confidence intervals for nearly normal point estimates In Section 4.2, we used the point estimate $\bar {x}$ with a standard error $SE_{\bar {x}}$ to create a 95% confidence interval for the population mean: $\bar {x} \pm 1.96 \times SE_{\bar {x}} \label {4.44}$ We constructed this interval by noting that the sample mean is within 1.96 standard errors of the actual mean about 95% of the time. This same logic generalizes to any unbiased point estimate that is nearly normal. We may also generalize the confidence level by using a placeholder $z^*$.

General confidence interval for the normal sampling distribution case A confidence interval based on an unbiased and nearly normal point estimate is $\text {point estimate} \pm z^* SE \label{4.45}$ where $z^*$ is selected to correspond to the confidence level, and SE represents the standard error. The value $z^* \times SE$ is called the margin of error.

Generally the standard error for a point estimate is estimated from the data and computed using a formula. For example, the standard error for the sample mean is $SE_{\bar {x}} = \dfrac {s}{\sqrt {n}}$ In this section, we provide the computed standard error for each example and exercise without detailing where the values came from. In future chapters, you will learn to fill in these and other details for each situation.

Example $1$ In Exercise 4.1, we computed a point estimate for the average difference in run times between men and women: $\bar {x}_{women} - \bar {x}_{men} = 14.48$ minutes. This point estimate is associated with a nearly normal distribution with standard error SE = 2.78 minutes. What is a reasonable 95% confidence interval for the difference in average run times? Solution The normal approximation is said to be valid, so we apply Equation \ref{4.45}: $\text {point estimate} \pm z^*SE \rightarrow 14.48 \pm 1.96 \times 2.78 \rightarrow (9.03, 19.93)$ Thus, we are 95% confident that the men were, on average, between 9.03 and 19.93 minutes faster than women in the 2012 Cherry Blossom Run. That is, the actual average difference is plausibly between 9.03 and 19.93 minutes with 95% confidence.
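As a hedged illustration of Equation \ref{4.45}, the sketch below computes a normal-approximation interval for the run-time example; the estimate 14.48 and SE 2.78 are taken from Example $1$, and the function name is our own.

```python
# A minimal sketch of "point estimate +/- z* x SE" using the run-time example
# (estimate 14.48 minutes, SE 2.78 minutes). The function name is our own.
from statistics import NormalDist

def normal_confidence_interval(estimate, se, confidence=0.95):
    tail = (1 - confidence) / 2
    z_star = NormalDist().inv_cdf(1 - tail)  # about 1.96 for a 95% interval
    margin = z_star * se                     # margin of error
    return estimate - margin, estimate + margin

print(normal_confidence_interval(14.48, 2.78))        # roughly (9.03, 19.93)
print(normal_confidence_interval(14.48, 2.78, 0.99))  # a wider 99% interval
```

Looking up $z^*$ with the inverse normal CDF is the programmatic analogue of reading the normal probability table, as done in the exercises that follow.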
Example $2$ Does Example $1$ guarantee that if a husband and wife both ran in the race, the husband would run between 9.03 and 19.93 minutes faster than the wife? Solution Our confidence interval says absolutely nothing about individual observations. It only makes a statement about a plausible range of values for the average difference between all men and women who participated in the run.

Exercise $1$ What $z^*$ would be appropriate for a 99% confidence level? For help, see Figure 4.10 on page 169. Solution We seek $z^*$ such that 99% of the area under the normal curve will be between the Z scores $-z^*$ and $z^*$. Because the remaining 1% is found in the tails, each tail has area 0.5%, and we can identify $-z^*$ by looking up 0.0050 in the normal probability table: $z^* = 2.58$. See also Figure 4.10 on page 169.

Exercise $2$ The proportion of men in the run10Samp sample is $\hat {p} = 0.45$. This sample meets certain conditions that ensure $\hat {p}$ will be nearly normal, and the standard error of the estimate is $SE_{\hat {p}} = 0.05$. Create a 90% confidence interval for the proportion of participants in the 2012 Cherry Blossom Run who are men. Answer We use $z^* = 1.65$ (see Exercise 4.17 on page 170), and apply the general confidence interval formula: $\hat {p} \pm z^*SE_{\hat {p}} \rightarrow 0.45 \pm 1.65 \times 0.05 \rightarrow (0.3675, 0.5325)$ Thus, we are 90% confident that between 37% and 53% of the participants were men.

Hypothesis Testing for Nearly Normal Point Estimates Just as the confidence interval method works with many other point estimates, we can generalize our hypothesis testing methods to new point estimates. Here we only consider the p-value approach, introduced in Section 4.3.4, since it is the most commonly used technique and also extends to non-normal cases.

Hypothesis testing using the normal model 1. First write the hypotheses in plain language, then set them up in mathematical notation. 2. Identify an appropriate point estimate of the parameter of interest. 3. Verify conditions to ensure the standard error estimate is reasonable and the point estimate is nearly normal and unbiased. 4. Compute the standard error. Draw a picture depicting the distribution of the estimate under the idea that $H_0$ is true. Shade areas representing the p-value. 5. Using the picture and normal model, compute the test statistic (Z score) and identify the p-value to evaluate the hypotheses. Write a conclusion in plain language.

Exercise $3$: sulphinpyrazone A drug called sulphinpyrazone was under consideration for use in reducing the death rate in heart attack patients. To determine whether the drug was effective, a set of 1,475 patients were recruited into an experiment and randomly split into two groups: a control group that received a placebo and a treatment group that received the new drug. What would be an appropriate null hypothesis? And the alternative? Answer The skeptic's perspective is that the drug does not work at reducing deaths in heart attack patients (H0), while the alternative is that the drug does work (HA). We can formalize the hypotheses from Exercise $3$ by letting $p_{control}$ and $p_{treatment}$ represent the proportion of patients who died in the control and treatment groups, respectively.
Then the hypotheses can be written as • H0 : $p_{control} = p_{treatment}$ (the drug doesn't work) • HA : $p_{control} > p_{treatment}$ (the drug works) or equivalently, • H0 : $p_{control} - p_{treatment}$ = 0 (the drug doesn't work) • HA : $p_{control} - p_{treatment}$ > 0 (the drug works) Strong evidence against the null hypothesis and in favor of the alternative would correspond to an observed difference in death rates, $\text {point estimate} = \hat {p}_{control} - \hat {p}_{treatment}$ being larger than we would expect from chance alone. This difference in sample proportions represents a point estimate that is useful in evaluating the hypotheses.

Example $3$ We want to evaluate the hypothesis setup from Exercise $3$ using data from the actual study (Anturane Reinfarction Trial Research Group. 1980. Sulfinpyrazone in the prevention of sudden death after myocardial infarction. New England Journal of Medicine 302(5):250-256). In the control group, 60 of 742 patients died. In the treatment group, 41 of 733 patients died. The sample difference in death rates can be summarized as $\text {point estimate} = \hat {p}_{control} - \hat {p}_{treatment} = \dfrac {60}{742} - \dfrac {41}{733} = 0.025$ This point estimate is nearly normal and is an unbiased estimate of the actual difference in death rates. The standard error of this sample difference is SE = 0.013. Evaluate the hypothesis test at a 5% significance level: $\alpha = 0.05$.

Solution We would like to identify the p-value to evaluate the hypotheses. If the null hypothesis is true, then the point estimate would have come from a nearly normal distribution, like the one shown in Figure $1$. The distribution is centered at zero since $p_{control} - p_{treatment} = 0$ under the null hypothesis. Because a large positive difference provides evidence against the null hypothesis and in favor of the alternative, the upper tail has been shaded to represent the p-value. We need not shade the lower tail since this is a one-sided test: an observation in the lower tail does not support the alternative hypothesis. The p-value can be computed by using the Z score of the point estimate and the normal probability table. $Z = \dfrac {\text {point estimate} - \text {null value}}{SE_{\text {point estimate}}} = \dfrac {0.025 - 0}{0.013} = 1.92 \label{4.52}$ Examining Z in the normal probability table, we find that the lower unshaded tail is about 0.973. Thus, the upper shaded tail representing the p-value is $\text {p-value} = 1 - 0.973 = 0.027$ Because the p-value is less than the significance level ($\alpha = 0.05)$, we say the null hypothesis is implausible. That is, we reject the null hypothesis in favor of the alternative and conclude that the drug is effective at reducing deaths in heart attack patients.

The Z score in Equation \ref{4.52} is called a test statistic. In most hypothesis tests, a test statistic is a particular data summary that is especially useful for computing the p-value and evaluating the hypothesis test. In the case of point estimates that are nearly normal, the test statistic is the Z score.

Test statistic A test statistic is a special summary statistic that is particularly useful for evaluating a hypothesis test or identifying the p-value. When a point estimate is nearly normal, we use the Z score of the point estimate as the test statistic. In later chapters we encounter situations where other test statistics are helpful.
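The same arithmetic can be sketched in code. In the snippet below the standard error of 0.013 is simply taken from the example (the formula behind it appears in a later chapter), and the function name is our own.

```python
# A sketch of the test statistic and one-sided p-value for the drug example.
# The standard error (0.013) is taken from the text as given; the formula
# behind it appears in a later chapter. Function names are our own.
from statistics import NormalDist

def one_sided_test(point_estimate, null_value, se):
    z = (point_estimate - null_value) / se   # test statistic
    p_value = 1 - NormalDist().cdf(z)        # upper-tail area
    return z, p_value

difference = 60 / 742 - 41 / 733             # observed difference in death rates, about 0.025
z, p = one_sided_test(difference, 0, 0.013)
print(round(z, 2), round(p, 3))              # about 1.92 and 0.028 (0.027 in the text, which rounds Z first)
```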
Non-Normal Point Estimates We may apply the ideas of confidence intervals and hypothesis testing to cases where the point estimate or test statistic is not necessarily normal. There are many reasons why such a situation may arise: • the sample size is too small for the normal approximation to be valid; • the standard error estimate may be poor; or • the point estimate tends towards some distribution that is not the normal distribution. For each case where the normal approximation is not valid, our first task is always to understand and characterize the sampling distribution of the point estimate or test statistic. Next, we can apply the general frameworks for confidence intervals and hypothesis testing to these alternative distributions.

When to Retreat Statistical tools rely on conditions. When the conditions are not met, these tools are unreliable and drawing conclusions from them is treacherous. The conditions for these tools typically come in two forms. • The individual observations must be independent. A random sample from less than 10% of the population ensures the observations are independent. In experiments, we generally require that subjects are randomized into groups. If independence fails, then advanced techniques must be used, and in some such cases, inference may not be possible. • Other conditions focus on sample size and skew. For example, if the sample size is too small, the skew too strong, or extreme outliers are present, then the normal model for the sample mean will fail. Verification of conditions for statistical tools is always necessary. Whenever conditions are not satisfied for a statistical technique, there are three options. The first is to learn new methods that are appropriate for the data. The second route is to consult a statistician. The third route is to ignore the failure of conditions. This last option effectively invalidates any analysis and may discredit novel and interesting findings. If you work at a university, then there may be campus consulting services to assist you. Alternatively, there are many private consulting firms that are also available for hire. Finally, we caution that there may be no inference tools helpful when considering data that include unknown biases, such as convenience samples. For this reason, there are books, courses, and researchers devoted to the techniques of sampling and experimental design. See Sections 1.3-1.5 for basic principles of data collection.
Sample Size and Power

The Type 2 Error rate and the magnitude of the error for a point estimate are controlled by the sample size. Real differences from the null value, even large ones, may be difficult to detect with small samples. If we take a very large sample, we might find a statistically significant difference but the magnitude might be so small that it is of no practical value. In this section we describe techniques for selecting an appropriate sample size based on these considerations.

Finding a sample size for a certain margin of error Many companies are concerned about rising healthcare costs. A company may estimate certain health characteristics of its employees, such as blood pressure, to project its future cost obligations. However, it might be too expensive to measure the blood pressure of every employee at a large company, and the company may choose to take a sample instead.

Example $1$ Blood pressure oscillates with the beating of the heart, and the systolic pressure is defined as the peak pressure when a person is at rest. The average systolic blood pressure for people in the U.S. is about 130 mmHg with a standard deviation of about 25 mmHg. How large of a sample is necessary to estimate the average systolic blood pressure with a margin of error of 4 mmHg using a 95% confidence level? Solution First, we frame the problem carefully. Recall that the margin of error is the part we add and subtract from the point estimate when computing a confidence interval. The margin of error for a 95% confidence interval estimating a mean can be written as $\text {ME} _{95} = 1.96 \times \text {SE} = 1.96 \times \dfrac {\sigma _{employee}}{\sqrt {n}}$ The challenge in this case is to find the sample size n so that this margin of error is less than or equal to 4, which we write as an inequality: $1.96 \times \dfrac {\sigma_{employee}}{\sqrt {n}} \le 4$ In the above equation we wish to solve for the appropriate value of n, but we need a value for $\sigma_{employee}$ before we can proceed. However, we haven't yet collected any data, so we have no direct estimate! Instead, we use the best estimate available to us: the approximate standard deviation for the U.S. population, 25. To proceed and solve for n, we substitute 25 for $\sigma _{employee}$: $1.96 \times \dfrac {\sigma_{employee}}{\sqrt {n}} \approx 1.96 \times \dfrac {25}{\sqrt {n}} \le 4$ $1.96 \times \dfrac {25}{4} \le \sqrt {n}$ ${\left(1.96 \times \dfrac {25}{4}\right)}^2 \le n$ $150.06 \le n$ This suggests we should choose a sample size of at least 151 employees. We round up because the sample size must be greater than or equal to 150.06.

A potentially controversial part of Example $1$ is the use of the U.S. standard deviation for the employee standard deviation. Usually the standard deviation is not known. In such cases, it is reasonable to review scientific literature or market research to make an educated guess about the standard deviation.

Identify a sample size for a particular margin of error To estimate the necessary sample size for a maximum margin of error m, we set up an equation to represent this relationship: $m \ge ME = z^* \dfrac {\sigma}{\sqrt {n}}$ where $z^*$ is chosen to correspond to the desired confidence level, and $\sigma$ is the standard deviation associated with the population. Solve for the sample size, n.

Sample size computations are helpful in planning data collection, and they require careful forethought. Next we consider another topic important in planning data collection and setting a sample size: the Type 2 Error rate.
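The algebra above amounts to solving $m \ge z^* \sigma / \sqrt{n}$ for n and rounding up. A minimal sketch, assuming the example's values of $\sigma$ = 25 and m = 4 and using a function name of our own:

```python
# A minimal sketch of the sample-size calculation: solve m >= z* sigma / sqrt(n)
# for n and round up. The sigma = 25 and m = 4 values come from the example.
from math import ceil
from statistics import NormalDist

def sample_size_for_margin(sigma, margin, confidence=0.95):
    z_star = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    n = (z_star * sigma / margin) ** 2
    return ceil(n)                       # n must be at least this large, so round up

print(sample_size_for_margin(25, 4))     # 151, matching the example
```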
Power and the Type 2 Error rate Consider the following two hypotheses: • H0: The average blood pressure of employees is the same as the national average, $\mu$ = 130. • HA: The average blood pressure of employees is different than the national average, $\mu \ne 130$. Suppose the alternative hypothesis is actually true. Then we might like to know, what is the chance we make a Type 2 Error? That is, what is the chance we will fail to reject the null hypothesis even though we should reject it? The answer is not obvious! If the average blood pressure of the employees is 132 (just 2 mmHg from the null value), it might be very difficult to detect the difference unless we use a large sample size. On the other hand, it would be easier to detect a difference if the real average of employees was 140.

Example $2$ Suppose the actual employee average is 132 and we take a sample of 100 individuals. Then the true sampling distribution of $\bar {x}$ is approximately N(132, 2.5) (since $SE = \dfrac {25}{\sqrt {100}} = 2.5)$. What is the probability of successfully rejecting the null hypothesis? Solution This problem can be divided into two normal probability questions. First, we identify what values of $\bar {x}$ would represent sufficiently strong evidence to reject H0. Second, we use the hypothetical sampling distribution for $\bar {x}$ that has center $\mu$ = 132 to find the probability of observing sample means in the areas we found in the first step.

Step 1. The null distribution could be represented by N(130, 2.5), the same standard deviation as the true distribution but with the null value as its center. Then we can find the two tail areas by identifying the Z score corresponding to the 2.5% tails $(\pm1.96)$, and solving for x in the Z score equation: $-1.96 = Z_1 = \dfrac {x_1 - 130}{2.5} \qquad +1.96 = Z_2 = \dfrac {x_2 - 130}{2.5}$ $x_1 = 125.1 \qquad x_2 = 134.9$ (An equally valid approach is to recognize that $x_1$ is $1.96 \times SE$ below the mean and $x_2$ is $1.96 \times SE$ above the mean to compute the values.) Figure 4.23 shows the null distribution on the left with these two dotted cutoffs.

Step 2. Next, we compute the probability of rejecting H0 if $\bar {x}$ actually came from N(132, 2.5). This is the same as finding the two shaded tails for the second distribution in Figure 4.23. We use the Z score method: $Z_{left} = \dfrac {125.1 - 132}{2.5} = -2.76 \qquad Z_{right} = \dfrac {134.9 - 132}{2.5} = 1.16$ $\text {area}_{left} = 0.003 \qquad \text {area}_{right} = 0.123$ The probability of rejecting the null mean, if the true mean is 132, is the sum of these areas: 0.003 + 0.123 = 0.126.

The probability of rejecting the null hypothesis is called the power. The power varies depending on what we suppose the truth might be. In Example $2$, the difference between the null value and the supposed true mean was relatively small, so the power was also small: only 0.126. However, when the truth is far from the null value, where we use the standard error as a measure of what is far, the power tends to increase.

Exercise $1$ Suppose the true sampling distribution of $\bar {x}$ is centered at 140. That is, $\bar {x}$ comes from N(140, 2.5). What would the power be under this scenario? It may be helpful to draw N(140, 2.5) and shade the area representing power on Figure 4.23; use the same cutoff values identified in Example $2$. Answer Draw the distribution N(140, 2.5), then find the area below 125.1 (about zero area) and above 134.9 (about 0.979). If the true mean is 140, the power is about 0.979.
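The two-step calculation in Example $2$ and Exercise $1$ can be bundled into a short function. This is our own sketch, with the null value 130, SE 2.5, and $\alpha$ = 0.05 taken from the example:

```python
# A sketch of the two-step power calculation: find the rejection cutoffs under
# H0, then ask how likely the sample mean is to land beyond them if the true
# mean is something else. The null value 130, SE 2.5, and alpha 0.05 come from
# the example; the function name is our own.
from statistics import NormalDist

def power(true_mean, null_mean=130, se=2.5, alpha=0.05):
    z_star = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    lower_cut = null_mean - z_star * se            # about 125.1
    upper_cut = null_mean + z_star * se            # about 134.9
    truth = NormalDist(true_mean, se)              # sampling distribution of the sample mean
    return truth.cdf(lower_cut) + (1 - truth.cdf(upper_cut))

print(round(power(132), 3))   # about 0.126, as in Example 2
print(round(power(140), 3))   # about 0.979, as in the exercise
```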
Figure 4.23: The sampling distribution of $\bar {x}$ under two scenarios. Left: N(130, 2.5). Right: N(132, 2.5), and the shaded areas in this distribution represent the power of the test. Exercise $2$ If the power of a test is 0.979 for a particular mean, what is the Type 2 Error rate for this mean? Answer The Type 2 Error rate represents the probability of failing to reject the null hypothesis. Since the power is the probability we do reject, the Type 2 Error rate will be 1 - 0.979 = 0.021. Exercise $3$ Provide an intuitive explanation for why we are more likely to reject H0 when the true mean is further from the null value. Answer Answers may vary a little. When the truth is far from the null value, the point estimate also tends to be far from the null value, making it easier to detect the difference and reject H0. Statistical significance versus practical significance When the sample size becomes larger, point estimates become more precise and any real differences in the mean and null value become easier to detect and recognize. Even a very small difference would likely be detected if we took a large enough sample. Sometimes researchers will take such large samples that even the slightest difference is detected. While we still say that difference is statistically significant, it might not be practically significant. Statistically significant differences are sometimes so minor that they are not practically relevant. This is especially important to research: if we conduct a study, we want to focus on finding a meaningful result. We don't want to spend lots of money finding results that hold no practical value. The role of a statistician in conducting a study often includes planning the size of the study. The statistician might first consult experts or scientific literature to learn what would be the smallest meaningful difference from the null value. She also would obtain some reasonable estimate for the standard deviation. With these important pieces of information, she would choose a sufficiently large sample size so that the power for the meaningful difference is perhaps 80% or 90%. While larger sample sizes may still be used, she might advise against using them in some cases, especially in sensitive areas of research.
4.1: Variability in Estimates 4.1 Identify the parameter, Part I. For each of the following situations, state whether the parameter of interest is a mean or a proportion. It may be helpful to examine whether individual responses are numerical or categorical. 1. In a survey, one hundred college students are asked how many hours per week they spend on the Internet. 2. In a survey, one hundred college students are asked: "What percentage of the time you spend on the Internet is part of your course work?" 3. In a survey, one hundred college students are asked whether or not they cited information from Wikipedia in their papers. 4. In a survey, one hundred college students are asked what percentage of their total weekly spending is on alcoholic beverages. 5. In a sample of one hundred recent college graduates, it is found that 85 percent expect to get a job within one year of their graduation date. 4.2 Identify the parameter, Part II. For each of the following situations, state whether the parameter of interest is a mean or a proportion. 1. A poll shows that 64% of Americans personally worry a great deal about federal spending and the budget deficit. 2. A survey reports that local TV news has shown a 17% increase in revenue between 2009 and 2011 while newspaper revenues decreased by 6.4% during this time period. 3. In a survey, high school and college students are asked whether or not they use geolocation services on their smart phones. 4. In a survey, internet users are asked whether or not they purchased any Groupon coupons. 5. In a survey, internet users are asked how many Groupon coupons they purchased over the last year. 4.3 College credits. A college counselor is interested in estimating how many credits a student typically enrolls in each semester. The counselor decides to randomly sample 100 students by using the registrar's database of students. The histogram below shows the distribution of the number of credits taken by these students. Sample statistics for this distribution are also provided. 1. What is the point estimate for the average number of credits taken per semester by students at this college? What about the median? 2. What is the point estimate for the standard deviation of the number of credits taken per semester by students at this college? What about the IQR? 3. Is a load of 16 credits unusually high for this college? What about 18 credits? Explain your reasoning. Hint: Observations farther than two standard deviations from the mean are usually considered to be unusual. 4. The college counselor takes another random sample of 100 students and this time finds a sample mean of 14.02 units. Should she be surprised that this sample statistic is slightly different than the one from the original sample? Explain your reasoning. 5. The sample means given above are point estimates for the mean number of credits taken by all students at that college. What measures do we use to quantify the variability of this estimate? Compute this quantity using the data from the original sample. 4.4 Heights of adults. Researchers studying anthropometry collected body girth measurements and skeletal diameter measurements, as well as age, weight, height and gender, for 507 physically active individuals. The histogram below shows the sample distribution of heights in centimeters.42 1. What is the point estimate for the average height of active individuals? What about the median? 2. What is the point estimate for the standard deviation of the heights of active individuals? What about the IQR? 3. 
Is a person who is 1m 80cm (180 cm) tall considered unusually tall? And is a person who is 1m 55cm (155cm) considered unusually short? Explain your reasoning. 4. The researchers take another random sample of physically active individuals. Would you expect the mean and the standard deviation of this new sample to be the ones given above? Explain your reasoning. 5. The sample means obtained are point estimates for the mean height of all active individuals, if the sample of individuals is equivalent to a simple random sample. What measure do we use to quantify the variability of such an estimate? Compute this quantity using the data from the original sample under the condition that the data are a simple random sample.

4.5 Wireless routers. John is shopping for wireless routers and is overwhelmed by the number of available options. In order to get a feel for the average price, he takes a random sample of 75 routers and finds that the average price for this sample is $75 and the standard deviation is $25. 1. Based on this information, how much variability should he expect to see in the mean prices of repeated samples, each containing 75 randomly selected wireless routers? 2. A consumer website claims that the average price of routers is $80. Is a true average of $80 consistent with John's sample? 42G. Heinz et al. "Exploring relationships in body dimensions". In: Journal of Statistics Education 11.2 (2003).

4.6 Chocolate chip cookies. Students are asked to count the number of chocolate chips in 22 cookies for a class activity. They found that the cookies on average had 14.77 chocolate chips with a standard deviation of 4.37 chocolate chips. 1. Based on this information, about how much variability should they expect to see in the mean number of chocolate chips in random samples of 22 chocolate chip cookies? 2. The packaging for these cookies claims that there are at least 20 chocolate chips per cookie. One student thinks this number is unreasonably high since the average they found is much lower. Another student claims the difference might be due to chance. What do you think?

4.2: Confidence Intervals

4.7 Relaxing after work. The General Social Survey (GSS) is a sociological survey used to collect data on demographic characteristics and attitudes of residents of the United States. In 2010, the survey collected responses from 1,154 US residents. The survey is conducted face-to-face with an in-person interview of a randomly-selected sample of adults. One of the questions on the survey is "After an average work day, about how many hours do you have to relax or pursue activities that you enjoy?" A 95% confidence interval from the 2010 GSS survey is 3.53 to 3.83 hours.43 1. Interpret this interval in the context of the data. 2. What does a 95% confidence level mean in this context? 3. Suppose the researchers think a 90% confidence level would be more appropriate for this interval. Will this new interval be smaller or larger than the 95% confidence interval? Assume the standard deviation has remained constant since 2010.

4.8 Mental health. Another question on the General Social Survey introduced in Exercise 4.7 is "For how many days during the past 30 days was your mental health, which includes stress, depression, and problems with emotions, not good?" Based on responses from 1,151 US residents, the survey reported a 95% confidence interval of 3.40 to 4.24 days in 2010. 1. Interpret this interval in context of the data. 2. What does a 95% confidence level mean in this context? 3.
Suppose the researchers think a 99% confidence level would be more appropriate for this interval. Will this new interval be smaller or larger than the 95% confidence interval? 4. If a new survey asking the same questions was to be done with 500 Americans, would the standard error of the estimate be larger, smaller, or about the same. Assume the standard deviation has remained constant since 2010.

4.9 Width of a confidence interval. Earlier in Chapter 4, we calculated the 99% confidence interval for the average age of runners in the 2012 Cherry Blossom Run as (32.7, 37.4) based on a sample of 100 runners. How could we decrease the width of this interval without losing confidence?

4.10 Confidence levels. If a higher confidence level means that we are more confident about the number we are reporting, why don't we always report a confidence interval with the highest possible confidence level? 43National Opinion Research Center, General Social Survey, 2010.

4.11 Waiting at an ER, Part I. A hospital administrator hoping to improve wait times decides to estimate the average emergency room waiting time at her hospital. She collects a simple random sample of 64 patients and determines the time (in minutes) between when they checked in to the ER until they were first seen by a doctor. A 95% confidence interval based on this sample is (128 minutes, 147 minutes), which is based on the normal model for the mean. Determine whether the following statements are true or false, and explain your reasoning for those statements you identify as false. 1. This confidence interval is not valid since we do not know if the population distribution of the ER wait times is nearly normal. 2. We are 95% confident that the average waiting time of these 64 emergency room patients is between 128 and 147 minutes. 3. We are 95% confident that the average waiting time of all patients at this hospital's emergency room is between 128 and 147 minutes. 4. 95% of such random samples would have a sample mean between 128 and 147 minutes. 5. A 99% confidence interval would be narrower than the 95% confidence interval since we need to be more sure of our estimate. 6. The margin of error is 9.5 and the sample mean is 137.5. 7. In order to decrease the margin of error of a 95% confidence interval to half of what it is now, we would need to double the sample size.

4.12 Thanksgiving spending, Part I. The 2009 holiday retail season, which kicked off on November 27, 2009 (the day after Thanksgiving), had been marked by somewhat lower self-reported consumer spending than was seen during the comparable period in 2008. To get an estimate of consumer spending, 436 randomly sampled American adults were surveyed. Daily consumer spending for the six-day period after Thanksgiving, spanning the Black Friday weekend and Cyber Monday, averaged $84.71. A 95% confidence interval based on this sample is ($80.31, $89.11). Determine whether the following statements are true or false, and explain your reasoning. 1. We are 95% confident that the average spending of these 436 American adults is between $80.31 and $89.11. 2. This confidence interval is not valid since the distribution of spending in the sample is right skewed. 3. 95% of such random samples would have a sample mean between $80.31 and $89.11. 4. We are 95% confident that the average spending of all American adults is between $80.31 and $89.11. 5. A 90% confidence interval would be narrower than the 95% confidence interval since we don't need to be as sure about capturing the parameter. 6.
In order to decrease the margin of error of a 95% confidence interval to a third of what it is now, we would need to use a sample 3 times larger. 7. The margin of error for the reported interval is 4.4.

4.13 Exclusive relationships. A survey was conducted on 203 undergraduates from Duke University who took an introductory statistics course in Spring 2012. Among many other questions, this survey asked them about the number of exclusive relationships they have been in. The histogram below shows the distribution of the data from this sample. The sample average is 3.2 with a standard deviation of 1.97. Estimate the average number of exclusive relationships Duke students have been in using a 90% confidence interval and interpret this interval in context. Check any conditions required for inference, and note any assumptions you must make as you proceed with your calculations and conclusions.

4.14 Age at first marriage, Part I. The National Survey of Family Growth conducted by the Centers for Disease Control gathers information on family life, marriage and divorce, pregnancy, infertility, use of contraception, and men's and women's health. One of the variables collected on this survey is the age at first marriage. The histogram below shows the distribution of ages at first marriage of 5,534 randomly sampled women between 2006 and 2010. The average age at first marriage among these women is 23.44 with a standard deviation of 4.72.44 Estimate the average age at first marriage of women using a 95% confidence interval, and interpret this interval in context. Discuss any relevant assumptions. 44National Survey of Family Growth, 2006-2010 Cycle.

4.3: Hypothesis Testing

4.15 Identify hypotheses, Part I. Write the null and alternative hypotheses in words and then symbols for each of the following situations. New York is known as "the city that never sleeps". A random sample of 25 New Yorkers were asked how much sleep they get per night. Do these data provide convincing evidence that New Yorkers on average sleep less than 8 hours a night? Employers at a firm are worried about the effect of March Madness, a basketball championship held each spring in the US, on employee productivity. They estimate that on a regular business day employees spend on average 15 minutes of company time checking personal email, making personal phone calls, etc. They also collect data on how much company time employees spend on such non-business activities during March Madness. They want to determine if these data provide convincing evidence that employee productivity decreases during March Madness.

4.16 Identify hypotheses, Part II. Write the null and alternative hypotheses in words and using symbols for each of the following situations. Since 2008, chain restaurants in California have been required to display calorie counts of each menu item. Prior to menus displaying calorie counts, the average calorie intake of diners at a restaurant was 1100 calories. After calorie counts started to be displayed on menus, a nutritionist collected data on the number of calories consumed at this restaurant from a random sample of diners. Do these data provide convincing evidence of a difference in the average calorie intake of diners at this restaurant? Based on the performance of those who took the GRE exam between July 1, 2004 and June 30, 2007, the average Verbal Reasoning score was calculated to be 462. In 2011 the average verbal score was slightly higher.
Do these data provide convincing evidence that the average GRE Verbal Reasoning score has changed since 2004?45

4.17 Online communication. A study suggests that the average college student spends 2 hours per week communicating with others online. You believe that this is an underestimate and decide to collect your own sample for a hypothesis test. You randomly sample 60 students from your dorm and find that on average they spent 3.5 hours a week communicating with others online. A friend of yours, who offers to help you with the hypothesis test, comes up with the following set of hypotheses. Indicate any errors you see. $H0 : \bar {x} < 2 hours$ $HA : \bar {x} > 3.5 hours$

4.18 Age at first marriage, Part II. Exercise 4.14 presents the results of a 2006 - 2010 survey showing that the average age of women at first marriage is 23.44. Suppose a researcher believes that this value has increased in 2012, but he would also be interested if he found a decrease. Below is how he set up his hypotheses. Indicate any errors you see. H0 : x = 23.44 years old HA : x > 23.44 years old

4.19 Waiting at an ER, Part II. Exercise 4.11 provides a 95% confidence interval for the mean waiting time at an emergency room (ER) of (128 minutes, 147 minutes). 1. A local newspaper claims that the average waiting time at this ER exceeds 3 hours. What do you think of this claim? 2. The Dean of Medicine at this hospital claims the average wait time is 2.2 hours. What do you think of this claim? 3. Without actually calculating the interval, determine if the claim of the Dean from part 2 would be considered reasonable based on a 99% confidence interval? 45ETS, Interpreting your GRE Scores.

4.20 Thanksgiving spending, Part II. Exercise 4.12 provides a 95% confidence interval for the average spending by American adults during the six-day period after Thanksgiving 2009: ($80.31, $89.11). 1. A local news anchor claims that the average spending during this period in 2009 was $100. What do you think of this claim? 2. Would the news anchor's claim be considered reasonable based on a 90% confidence interval? Why or why not?

4.21 Ball bearings. A manufacturer claims that bearings produced by their machine last 7 hours on average under harsh conditions. A factory worker randomly samples 75 ball bearings, and records their lifespans under harsh conditions. He calculates a sample mean of 6.85 hours, and the standard deviation of the data is 1.25 working hours. The following histogram shows the distribution of the lifespans of the ball bearings in this sample. Conduct a formal hypothesis test of this claim. Make sure to check that relevant conditions are satisfied.

4.22 Gifted children, Part I. Researchers investigating characteristics of gifted children collected data from schools in a large city on a random sample of thirty-six children who were identified as gifted children soon after they reached the age of four. The following histogram shows the distribution of the ages (in months) at which these children first counted to 10 successfully. Also provided are some sample statistics.46 1. Are conditions for inference satisfied? 2. Suppose you read on a parenting website that children first count to 10 successfully when they are 32 months old, on average. Perform a hypothesis test to evaluate if these data provide convincing evidence that the average age at which gifted children first count to 10 successfully is different than the general average of 32 months. Use a significance level of 0.10. 3.
Interpret the p-value in context of the hypothesis test and the data. 4. Calculate a 90% confidence interval for the average age at which gifted children first count to 10 successfully. 5. Do your results from the hypothesis test and the confidence interval agree? Explain. 46F.A. Graybill and H.K. Iyer. Regression Analysis: Concepts and Applications. Duxbury Press, 1994, pp. 511-516.

4.23 Waiting at an ER, Part III. The hospital administrator mentioned in Exercise 4.11 randomly selected 64 patients and measured the time (in minutes) between when they checked in to the ER and the time they were first seen by a doctor. The average time is 137.5 minutes and the standard deviation is 39 minutes. She is getting grief from her supervisor on the basis that the wait times in the ER increased greatly from last year's average of 127 minutes. However, the administrator claims that the increase is probably just due to chance. 1. Are conditions for inference met? Note any assumptions you must make to proceed. 2. Using a significance level of $\alpha = 0.05$, is the change in wait times statistically significant? Use a two-sided test since it seems the supervisor had to inspect the data before he suggested an increase occurred. 3. Would the conclusion of the hypothesis test change if the significance level was changed to $\alpha = 0.01$?

4.24 Gifted children, Part II. Exercise 4.22 describes a study on gifted children. In this study, along with variables on the children, the researchers also collected data on the mother's and father's IQ of the 36 randomly sampled gifted children. The histogram below shows the distribution of mother's IQ. Also provided are some sample statistics. 1. Perform a hypothesis test to evaluate if these data provide convincing evidence that the average IQ of mothers of gifted children is different than the average IQ for the population at large, which is 100. Use a significance level of 0.10. 2. Calculate a 90% confidence interval for the average IQ of mothers of gifted children. 3. Do your results from the hypothesis test and the confidence interval agree?

4.25 Nutrition labels. The nutrition label on a bag of potato chips says that a one ounce (28 gram) serving of potato chips has 130 calories and contains ten grams of fat, with three grams of saturated fat. A random sample of 35 bags yielded a sample mean of 134 calories with a standard deviation of 17 calories. Is there evidence that the nutrition label does not provide an accurate measure of calories in the bags of potato chips? We have verified the independence, sample size, and skew conditions are satisfied.

4.26 Find the sample mean. You are given the following hypotheses: $H0: \mu = 34, HA: \mu > 34$. We know that the sample standard deviation is 10 and the sample size is 65. For what sample mean would the p-value be equal to 0.05? Assume that all conditions necessary for inference are satisfied.

4.27 Testing for Fibromyalgia. A patient named Diana was diagnosed with Fibromyalgia, a long-term syndrome of body pain, and was prescribed anti-depressants. Being the skeptic that she is, Diana didn't initially believe that anti-depressants would help her symptoms. However after a couple months of being on the medication she decides that the anti-depressants are working, because she feels like her symptoms are in fact getting better. 1. Write the hypotheses in words for Diana's skeptical position when she started taking the anti-depressants. 2. What is a Type 1 error in this context? 3. What is a Type 2 error in this context? 4.
How would these errors affect the patient?

4.28 Testing for food safety. A food safety inspector is called upon to investigate a restaurant with a few customer reports of poor sanitation practices. The food safety inspector uses a hypothesis testing framework to evaluate whether regulations are not being met. If he decides the restaurant is in gross violation, its license to serve food will be revoked. 1. Write the hypotheses in words. 2. What is a Type 1 error in this context? 3. What is a Type 2 error in this context? 4. Which error is more problematic for the restaurant owner? Why? 5. Which error is more problematic for the diners? Why? 6. As a diner, would you prefer that the food safety inspector requires strong evidence or very strong evidence of health concerns before revoking a restaurant's license? Explain your reasoning.

4.29 Errors in drug testing. Suppose regulators monitored 403 drugs last year, each for a particular adverse response. For each drug they conducted a single hypothesis test with a significance level of 5% to determine if the adverse effect was higher in those taking the drug than those who did not take the drug; the regulators ultimately rejected the null hypothesis for 42 drugs. Describe the error the regulators might have made for a drug where the null hypothesis was rejected. Describe the error regulators might have made for a drug where the null hypothesis was not rejected. Suppose the vast majority of the 403 drugs do not have adverse effects. Then, if you picked one of the 42 suspect drugs at random, about how sure would you be that the drug really has an adverse effect? Can you also say how sure you are that a particular drug from the 361 where the null hypothesis was not rejected does not have the corresponding adverse response?

4.30 Car insurance savings, Part I. A car insurance company advertises that customers switching to their insurance save, on average, $432 on their yearly premiums. A market researcher at a competing insurance discounter is interested in showing that this value is an overestimate so he can provide evidence to government regulators that the company is falsely advertising their prices. He randomly samples 82 customers who recently switched to this insurance and finds an average savings of $395, with a standard deviation of $102. 1. Are conditions for inference satisfied? 2. Perform a hypothesis test and state your conclusion. 3. Do you agree with the market researcher that the amount of savings advertised is an overestimate? Explain your reasoning. 4. Calculate a 90% confidence interval for the average amount of savings of all customers who switch their insurance. 5. Do your results from the hypothesis test and the confidence interval agree? Explain.

4.31 Happy hour. A restaurant owner is considering extending the happy hour at his restaurant since he would like to see if it increases revenue. If it does, he will permanently extend happy hour. He estimates that the current average revenue per customer is $18 during happy hour. He runs the extended happy hour for a week and finds an average revenue of $19.25 with a standard deviation of $3.02 based on a simple random sample of 70 customers. 1. Are conditions for inference satisfied? 2. Perform a hypothesis test. Suppose the customers and their buying habits this week were no different than in any other week for this particular bar. (This may not always be a reasonable assumption.) 3. Calculate a 90% confidence interval for the average revenue per customer. 4.
Do your results from the hypothesis test and the confidence interval agree? Explain. 5. If your hypothesis test and confidence interval suggest a significant increase in revenue per customer, why might you still not recommend that the restaurant owner extend the happy hour based on this criterion? What may be a better measure to consider? 4.32 Speed reading, Part I. A company offering online speed reading courses claims that students who take their courses show a 5 times (500%) increase in the number of words they can read in a minute without losing comprehension. A random sample of 100 students yielded an average increase of 415% with a standard deviation of 220%. Is there evidence that the company's claim is false? 1. Are conditions for inference satisfied? 2. Perform a hypothesis test evaluating if the company's claim is reasonable or if the true average improvement is less than 500%. Make sure to interpret your response in context of the hypothesis test and the data. Use $\alpha = 0.025$. 3. Calculate a 95% confidence interval for the average increase in the number of words students can read in a minute without losing comprehension. 4. Do your results from the hypothesis test and the confidence interval agree? Explain. 4.4: Examining the Central Limit Theorem 4.33 Ages of pennies, Part I. The histogram below shows the distribution of ages of pennies at a bank. Describe the distribution. Sampling distributions for means from simple random samples of 5, 30, and 100 pennies are shown in the histograms below. Describe the shapes of these distributions and comment on whether they look like what you would expect to see based on the Central Limit Theorem. 4.34 Ages of pennies, Part II. The mean age of the pennies from Exercise 4.33 is 10.44 years with a standard deviation of 9.2 years. Using the Central Limit Theorem, calculate the means and standard deviations of the distribution of the mean from random samples of size 5, 30, and 100. Comment on whether the sampling distributions shown in Exercise 4.33 agree with the values you compute. 4.35 Identify distributions, Part I. Four plots are presented below. The plot at the top is a distribution for a population. The mean is 10 and the standard deviation is 3. Also shown below is a distribution of (1) a single random sample of 100 values from this population, (2) a distribution of 100 sample means from random samples with size 5, and (3) a distribution of 100 sample means from random samples with size 25. Determine which plot (A, B, or C) is which and explain your reasoning. 4.36 Identify distributions, Part II. Four plots are presented below. The plot at the top is a distribution for a population. The mean is 60 and the standard deviation is 18. Also shown below is a distribution of (1) a single random sample of 500 values from this population, (2) a distribution of 500 sample means from random samples, each of size 18, and (3) a distribution of 500 sample means from random samples, each of size 81. Determine which plot (A, B, or C) is which and explain your reasoning. 4.37 Housing prices, Part I. A housing survey was conducted to determine the price of a typical home in Topanga, CA. The mean price of a house was roughly $1.3 million with a standard deviation of $300,000. There were no houses listed below $600,000 but a few houses above $3 million. 1. Is the distribution of housing prices in Topanga symmetric, right skewed, or left skewed? Hint: Sketch the distribution. 2. 
Would you expect most houses in Topanga to cost more or less than $1.3 million? 3. Can we estimate the probability that a randomly chosen house in Topanga costs more than $1.4 million using the normal distribution? 4. What is the probability that the mean of 60 randomly chosen houses in Topanga is more than $1.4 million? 5. How would doubling the sample size affect the standard error of the mean? 4.38 Stats final scores. Each year about 1500 students take the introductory statistics course at a large university. This year scores on the final exam are distributed with a median of 74 points, a mean of 70 points, and a standard deviation of 10 points. There are no students who scored above 100 (the maximum score attainable on the final) but a few students scored below 20 points. 1. Is the distribution of scores on this final exam symmetric, right skewed, or left skewed? 2. Would you expect most students to have scored above or below 70 points? 3. Can we calculate the probability that a randomly chosen student scored above 75 using the normal distribution? 4. What is the probability that the average score for a random sample of 40 students is above 75? 5. How would cutting the sample size in half affect the standard error of the mean? 4.39 Weights of pennies. The distribution of weights of US pennies is approximately normal with a mean of 2.5 grams and a standard deviation of 0.03 grams. 1. What is the probability that a randomly chosen penny weighs less than 2.4 grams? 2. Describe the sampling distribution of the mean weight of 10 randomly chosen pennies. 3. What is the probability that the mean weight of 10 pennies is less than 2.4 grams? 4. Sketch the two distributions (population and sampling) on the same scale. 5. Could you estimate the probabilities from parts 1 and 3 if the weights of pennies had a skewed distribution? 4.40 CFLs. A manufacturer of compact fluorescent light bulbs advertises that the distribution of the lifespans of these light bulbs is nearly normal with a mean of 9,000 hours and a standard deviation of 1,000 hours. 1. What is the probability that a randomly chosen light bulb lasts more than 10,500 hours? 2. Describe the distribution of the mean lifespan of 15 light bulbs. 3. What is the probability that the mean lifespan of 15 randomly chosen light bulbs is more than 10,500 hours? 4. Sketch the two distributions (population and sampling) on the same scale. 5. Could you estimate the probabilities from parts 1 and 3 if the lifespans of light bulbs had a skewed distribution? 4.41 Songs on an iPod. Suppose an iPod has 3,000 songs. The histogram below shows the distribution of the lengths of these songs. We also know that, for this iPod, the mean length is 3.45 minutes and the standard deviation is 1.63 minutes. 1. Calculate the probability that a randomly selected song lasts more than 5 minutes. 2. You are about to go for an hour run and you make a random playlist of 15 songs. What is the probability that your playlist lasts for the entire duration of your run? Hint: If you want the playlist to last 60 minutes, what should be the minimum average length of a song? 3. You are about to take a trip to visit your parents and the drive is 6 hours. You make a random playlist of 100 songs. What is the probability that your playlist lasts the entire drive? 4.42 Spray paint. Suppose the area that can be painted using a single can of spray paint is slightly variable and follows a nearly normal distribution with a mean of 25 square feet and a standard deviation of 3 square feet. 1. 
What is the probability that the area covered by a can of spray paint is more than 27 square feet? 2. Suppose you want to spray paint an area of 540 square feet using 20 cans of spray paint. On average, how many square feet must each can be able to cover to spray paint all 540 square feet? 3. What is the probability that you can cover a 540 square foot area using 20 cans of spray paint? 4. If the area covered by a can of spray paint had a slightly skewed distribution, could you still calculate the probabilities in parts 1 and 3 using the normal distribution? 4.5: Inference for other Estimators 4.43 Spam mail, Part I. The 2004 National Technology Readiness Survey sponsored by the Smith School of Business at the University of Maryland surveyed 418 randomly sampled Americans, asking them how many spam emails they receive per day. The survey was repeated on a new random sample of 499 Americans in 2009.47 1. What are the hypotheses for evaluating if the average number of spam emails per day has changed from 2004 to 2009? 2. In 2004 the mean was 18.5 spam emails per day, and in 2009 this value was 14.9 emails per day. What is the point estimate for the difference between the two population means? 3. A report on the survey states that the observed difference between the sample means is not statistically significant. Explain what this means in context of the hypothesis test and the data. 4. Would you expect a confidence interval for the difference between the two population means to contain 0? Explain your reasoning. 4.44 Nearsightedness. It is believed that nearsightedness affects about 8% of all children. In a random sample of 194 children, 21 are nearsighted. 1. Construct hypotheses appropriate for the following question: do these data provide evidence that the 8% value is inaccurate? 2. What proportion of children in this sample are nearsighted? 3. Given that the standard error of the sample proportion is 0.0195 and the point estimate follows a nearly normal distribution, calculate the test statistic (the Z statistic). 4. What is the p-value for this hypothesis test? 5. What is the conclusion of the hypothesis test? 4.45 Spam mail, Part II. The National Technology Readiness Survey from Exercise 4.43 also asked Americans how often they delete spam emails. 23% of the respondents in 2004 said they delete their spam mail once a month or less, and in 2009 this value was 16%. 1. What are the hypotheses for evaluating if the proportion of those who delete their email once a month or less (or never) has changed from 2004 to 2009? 2. What is the point estimate for the difference between the two population proportions? 3. A report on the survey states that the observed decrease from 2004 to 2009 is statistically significant. Explain what this means in context of the hypothesis test and the data. 4. Would you expect a confidence interval for the difference between the two population proportions to contain 0? Explain your reasoning. 4.46 Unemployment and relationship problems. A USA Today/Gallup poll conducted between 2010 and 2011 asked a group of unemployed and underemployed Americans if they have had major problems in their relationships with their spouse or another close family member as a result of not having a job (if unemployed) or not having a full-time job (if underemployed). 27% of the 1,145 unemployed respondents and 25% of the 675 underemployed respondents said they had major problems in relationships as a result of their employment status. 1. 
What are the hypotheses for evaluating if the proportions of unemployed and underemployed people who had relationship problems were different? 2. The p-value for this hypothesis test is approximately 0.35. Explain what this means in context of the hypothesis test and the data. 47 Rockbridge, 2009 National Technology Readiness Survey SPAM Report. 4.6: Sample Size and Power (Special Topic) 4.47 Which is higher? In each part below, there is a value of interest and two scenarios (I and II). For each part, report if the value of interest is larger under scenario I, scenario II, or whether the value is equal under the scenarios. 1. The standard error of $\bar{x}$ when s = 120 and (I) n = 25 or (II) n = 125. 2. The margin of error of a confidence interval when the confidence level is (I) 90% or (II) 80%. 3. The p-value for a Z statistic of 2.5 when (I) n = 500 or (II) n = 1000. 4. The probability of making a Type 2 error when the alternative hypothesis is true and the significance level is (I) 0.05 or (II) 0.10. 4.48 True or false. Determine if the following statements are true or false, and explain your reasoning. If false, state how it could be corrected. 1. If a given value (for example, the null hypothesized value of a parameter) is within a 95% confidence interval, it will also be within a 99% confidence interval. 2. Decreasing the significance level ($\alpha$) will increase the probability of making a Type 1 error. 3. Suppose the null hypothesis is $\mu = 5$ and we fail to reject H0. Under this scenario, the true population mean is 5. 4. If the alternative hypothesis is true, then the probability of making a Type 2 error and the power of a test add up to 1. 5. With large sample sizes, even small differences between the null value and the true value of the parameter, a difference often called the effect size, will be identified as statistically significant. 6. A cutoff of $\alpha$ = 0.05 is the ideal value for all hypothesis tests. 4.49 Car insurance savings, Part II. The market researcher from Exercise 4.30 collected data about the savings of 82 customers at a competing car insurance company. The mean and standard deviation of this sample are $395 and $102, respectively. He would like to conduct another survey but have a margin of error of no more than $10 at a 99% confidence level. How large of a sample should he collect? 4.50 Speed reading, Part II. A random sample of 100 students who took online speed reading courses from the company described in Exercise 4.32 yielded an average increase in reading speed of 415% and a standard deviation of 220%. We would like to calculate a 95% confidence interval for the average increase in reading speed with a margin of error of no more than 15%. How many students should we sample? 4.51 Waiting at the ER, Part IV. Exercise 4.23 introduced us to a hospital where ER wait times were being analyzed. The previous year's average was 128 minutes. Suppose that this year's average wait time is 135 minutes. 1. Provide the hypotheses for this situation in plain language. 2. If we plan to collect a sample size of n = 64, what values could $\bar {x}$ take so that we reject H0? Suppose the sample standard deviation from the earlier exercise (39 minutes) is the population standard deviation. You may assume that the conditions for the nearly normal model for $\bar {x}$ are satisfied. 3. Calculate the probability of a Type 2 error. Contributors David M Diez (Google/YouTube), Christopher D Barr (Harvard School of Public Health), Mine Çetinkaya-Rundel (Duke University)
Chapter 4 introduced a framework for statistical inference based on confidence intervals and hypotheses. In this chapter, we encounter several new point estimates and scenarios. In each case, the inference ideas remain the same: 1. Determine which point estimate or test statistic is useful. 2. Identify an appropriate distribution for the point estimate or test statistic. 3. Apply the ideas from Chapter 4 using the distribution from step 2. Each section in Chapter 5 explores a new situation: the difference of two means; a single mean or difference of means where we relax the minimum sample size condition; and the comparison of means across multiple groups. Chapter 6 will introduce scenarios that highlight categorical data. • 5.1: One-Sample Means with the t Distribution • 5.2: Paired Data Two sets of observations are paired if each observation in one set has a special correspondence or connection with exactly one observation in the other data set. To analyze paired data, it is often useful to look at the difference in outcomes of each pair of observations. • 5.3: Difference of Two Means In this section we consider a difference in two population means, μ1−μ2, under the condition that the data are not paired. The methods are similar in theory but different in the details. Just as with a single sample, we identify conditions to ensure a point estimate of the difference is nearly normal. Next we introduce a formula for the standard error, which allows us to apply our general tools discussed previously. • 5.4: Power Calculations for a Difference of Means (Special Topic) It is also useful to be able to compare two means for small samples. In this section we use the t distribution for the difference in sample means. We will again drop the minimum sample size condition and instead impose a strong condition on the distribution of the data. • 5.5: Comparing many Means with ANOVA (Special Topic) In this section, we will learn a new method called analysis of variance (ANOVA) and a new test statistic called F. • 5.6: Exercises Exercises for Chapter 5 of the "OpenIntro Statistics" textmap by Diez, Barr and Çetinkaya-Rundel. 05: Inference for Numerical Data The motivation in Chapter 4 for requiring a large sample was two-fold. First, a large sample ensures that the sampling distribution of $\bar {x}$ is nearly normal. We will see later in this section that if the population data are nearly normal, then $\bar {x}$ is also nearly normal regardless of the sample size. The second motivation for a large sample was that we get a better estimate of the standard error when using a large sample. The standard error estimate will not generally be accurate for smaller sample sizes, and this motivates the t distribution, which we introduce below. We will see that the t distribution is a helpful substitute for the normal distribution when we model a sample mean $\bar {x}$ that comes from a small sample. 
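The role of the t distribution described above is easy to see in a small simulation: when the standard error is estimated with the sample standard deviation s, the standardized sample mean $(\bar{x} - \mu)/(s/\sqrt{n})$ falls beyond two standard errors more often than the normal model predicts. The sketch below is an illustration we added (it is not part of the original text); the population, the sample size n = 6, the seed, and the number of replications are arbitrary choices.

```python
import numpy as np
from scipy import stats

# Simulate the standardized statistic (xbar - mu) / (s / sqrt(n)) for small samples
# drawn from a normal population, and compare its tail behavior with N(0, 1).
rng = np.random.default_rng(seed=1)      # arbitrary seed for reproducibility
mu, sigma, n, reps = 0.0, 1.0, 6, 100_000

samples = rng.normal(mu, sigma, size=(reps, n))
xbar = samples.mean(axis=1)
s = samples.std(axis=1, ddof=1)          # sample standard deviation
t_stat = (xbar - mu) / (s / np.sqrt(n))  # standardized with the *estimated* SE

print("simulated P(|stat| > 2):", np.mean(np.abs(t_stat) > 2))
print("normal model:           ", 2 * stats.norm.sf(2))
print("t model, df = n - 1:    ", 2 * stats.t.sf(2, df=n - 1))
```

The simulated proportion is close to the t model and noticeably larger than the normal model; increasing n in the sketch shrinks the gap, mirroring the discussion of degrees of freedom that follows.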
While we emphasize the use of the t distribution for small samples, this distribution may also be used for means from large samples. The normality condition We use a special case of the Central Limit Theorem to ensure the distribution of the sample means will be nearly normal, regardless of sample size, provided the data come from a nearly normal distribution. Central Limit Theorem for normal data The sampling distribution of the mean is nearly normal when the sample observations are independent and come from a nearly normal distribution. This is true for any sample size. While this seems like a very helpful special case, there is one small problem. It is inherently difficult to verify normality in small data sets. Caution: Checking the normality condition We should exercise caution when verifying the normality condition for small samples. It is important to not only examine the data but also think about where the data come from. For example, ask: would I expect this distribution to be symmetric, and am I confident that outliers are rare? You may relax the normality condition as the sample size goes up. If the sample size is 10 or more, slight skew is not problematic. Once the sample size hits about 30, then moderate skew is reasonable. Data with strong skew or outliers require a more cautious analysis. Introducing the t distribution The second reason we previously required a large sample size was so that we could accurately estimate the standard error using the sample data. In the cases where we will use a small sample to calculate the standard error, it will be useful to rely on a new distribution for inference calculations: the t distribution. A t distribution, shown as a solid line in Figure 5.10, has a bell shape. However, its tails are thicker than the normal model's. This means observations are more likely to fall beyond two standard deviations from the mean than under the normal distribution.11 These extra thick tails are exactly the correction we need to resolve the problem of a poorly estimated standard error. The t distribution, always centered at zero, has a single parameter: degrees of freedom. The degrees of freedom (df) describe the precise form of the bell-shaped t distribution. 11 The standard deviation of the t distribution is actually a little more than 1. However, it is useful to always think of the t distribution as having a standard deviation of 1 in all of our applications. Several t distributions are shown in Figure 5.11. When there are more degrees of freedom, the t distribution looks very much like the standard normal distribution. Degrees of freedom (df) The degrees of freedom describe the shape of the t distribution. The larger the degrees of freedom, the more closely the distribution approximates the normal model. When the degrees of freedom is about 30 or more, the t distribution is nearly indistinguishable from the normal distribution. Later in this section, we relate degrees of freedom to sample size. We will find it very useful to become familiar with the t distribution, because it plays a very similar role to the normal distribution during inference for small samples of numerical data. We use a t table, partially shown in Table 5.12, in place of the normal probability table for small sample numerical data. A larger table is presented in Appendix B.2 on page 410. Each row in the t table represents a t distribution with different degrees of freedom. The columns correspond to tail probabilities. 
For instance, if we know we are working with the t distribution with df = 18, we can examine row 18, which is highlighted in Table 5.12.
one tail: 0.100 0.050 0.025 0.010 0.005
two tails: 0.200 0.100 0.050 0.020 0.010
df 1: 3.08 6.31 12.71 31.82 63.66
df 2: 1.89 2.92 4.30 6.96 9.92
df 3: 1.64 2.35 3.18 4.54 5.84
$\vdots$
df 17: 1.33 1.74 2.11 2.57 2.90
df 18: 1.33 1.73 2.10 2.55 2.88
df 19: 1.33 1.73 2.09 2.54 2.86
df 20: 1.33 1.72 2.09 2.53 2.85
$\vdots$
df 400: 1.28 1.65 1.97 2.34 2.59
df 500: 1.28 1.65 1.96 2.33 2.59
df $\infty$: 1.28 1.64 1.96 2.33 2.58
Table 5.12: An abbreviated look at the t table. Each row represents a different t distribution. The columns describe the cutoffs for specific tail areas. The row with df = 18 has been highlighted. If we want the value in this row that identifies the cutoff for an upper tail of 10%, we can look in the column where one tail is 0.100. This cutoff is 1.33. If we had wanted the cutoff for the lower 10%, we would use -1.33. Just like the normal distribution, all t distributions are symmetric. Example 5.15 What proportion of the t distribution with 18 degrees of freedom falls below -2.10? Just like a normal probability problem, we first draw the picture in Figure 5.13 and shade the area below -2.10. To find this area, we identify the appropriate row: df = 18. Then we identify the column containing the absolute value of -2.10; it is the third column. Because we are looking for just one tail, we examine the top line of the table, which shows that a one tail area for a value in the third column corresponds to 0.025. About 2.5% of the distribution falls below -2.10. In the next example we encounter a case where the exact t value is not listed in the table. Example 5.16 A t distribution with 20 degrees of freedom is shown in the left panel of Figure 5.14. Estimate the proportion of the distribution falling above 1.65. We identify the row in the t table using the degrees of freedom: df = 20. Then we look for 1.65; it is not listed. It falls between the first and second columns. Since these values bound 1.65, their tail areas will bound the tail area corresponding to 1.65. We identify the one tail area of the first and second columns, 0.050 and 0.100, and we conclude that between 5% and 10% of the distribution is more than 1.65 standard deviations above the mean. If we like, we can identify the precise area using statistical software: 0.0573. Example 5.17 A t distribution with 2 degrees of freedom is shown in the right panel of Figure 5.14. Estimate the proportion of the distribution falling more than 3 units from the mean (above or below). As before, first identify the appropriate row: df = 2. Next, find the columns that capture 3; because 2.92 < 3 < 4.30, we use the second and third columns. Finally, we find bounds for the tail areas by looking at the two tail values: 0.05 and 0.10. We use the two tail values because we are looking for two (symmetric) tails. Exercise 5.18 What proportion of the t distribution with 19 degrees of freedom falls above -1.79 units?12 The t distribution as a solution to the standard error problem When estimating the mean and standard error from a small sample, the t distribution is a more accurate tool than the normal model. This is true for both small and large samples. TIP: When to use the t distribution Use the t distribution for inference of the sample mean when observations are independent and nearly normal. You may relax the nearly normal condition as the sample size increases. 
For example, the data distribution may be moderately skewed when the sample size is at least 30. 12 We find the shaded area above -1.79 (we leave the picture to you). The small left tail is between 0.025 and 0.05, so the larger upper region must have an area between 0.95 and 0.975. To proceed with the t distribution for inference about a single mean, we must check two conditions. Independence of observations. We verify this condition just as we did before. We collect a simple random sample from less than 10% of the population, or if it was an experiment or random process, we carefully check to the best of our abilities that the observations were independent. Observations come from a nearly normal distribution. This second condition is difficult to verify with small data sets. We often (i) take a look at a plot of the data for obvious departures from the normal model, and (ii) consider whether any previous experiences alert us that the data may not be nearly normal. When examining a sample mean and estimated standard error from a sample of n independent and nearly normal observations, we use a t distribution with n - 1 degrees of freedom (df). For example, if the sample size was 19, then we would use the t distribution with df = 19 - 1 = 18 degrees of freedom and proceed exactly as we did in Chapter 4, except that now we use the t table. One sample t confidence intervals Dolphins are at the top of the oceanic food chain, which causes dangerous substances such as mercury to concentrate in their organs and muscles. This is an important problem for both dolphins and other animals, like humans, who occasionally eat them. For instance, this is particularly relevant in Japan where school meals have included dolphin at times. Here we identify a confidence interval for the average mercury content in dolphin muscle using a sample of 19 Risso's dolphins from the Taiji area in Japan.13 The data are summarized in Table 5.16. The minimum and maximum observed values can be used to evaluate whether or not there are obvious outliers or skew. 13 Taiji was featured in the movie The Cove, and it is a significant source of dolphin and whale meat in Japan. Thousands of dolphins pass through the Taiji area annually, and we will assume these 19 dolphins represent a simple random sample from those dolphins. Data reference: Endo T and Haraguchi K. 2009. High mercury levels in hair samples from residents of Taiji, a Japanese whaling town. Marine Pollution Bulletin 60(5):743-747. n = 19, $\bar {x}$ = 4.4, s = 2.3, minimum = 1.7, maximum = 9.2. Table 5.16: Summary of mercury content in the muscle of 19 Risso's dolphins from the Taiji area. Measurements are in $\mu$g/wet g (micrograms of mercury per wet gram of muscle). Example 5.19 Are the independence and normality conditions satisfied for this data set? The observations are a simple random sample and consist of less than 10% of the population, therefore independence is reasonable. The summary statistics in Table 5.16 do not suggest any skew or outliers; all observations are within 2.5 standard deviations of the mean. Based on this evidence, the normality assumption seems reasonable. In the normal model, we used z* and the standard error to determine the width of a confidence interval. We revise the confidence interval formula slightly when using the t distribution: $\bar {x} \pm t*_{df} SE$ The sample mean and estimated standard error are computed just as before ($\bar {x}$ = 4.4 and $SE = \frac {s}{\sqrt {n}} = 0.528$). 
The value $t*_{df}$ is a cutoff we obtain based on the confidence level and the t distribution with df degrees of freedom. Before determining this cutoff, we will first need the degrees of freedom. Degrees of freedom for a single sample If the sample has n observations and we are examining a single mean, then we use the t distribution with df = n - 1 degrees of freedom. In our current example, we should use the t distribution with df = 19 - 1 = 18 degrees of freedom. Then identifying $t*_{18}$ is similar to how we found z*. • For a 95% confidence interval, we want to find the cutoff $t*_{18}$ such that 95% of the t distribution is between $-t*_{18}$ and $t*_{18}$. • We look in the t table on page 224, find the column with area totaling 0.05 in the two tails (third column), and then the row with 18 degrees of freedom: $t*_{18} = 2.10$. Generally the value of $t*_{df}$ is slightly larger than what we would get under the normal model with z*. Finally, we can substitute all our values into the confidence interval equation to create the 95% confidence interval for the average mercury content in muscles from Risso's dolphins that pass through the Taiji area: $\bar {x} \pm t*_{18}SE \rightarrow 4.4 \pm 2.10 \times 0.528 \rightarrow (3.87, 4.93)$ We are 95% confident the average mercury content of muscles in Risso's dolphins is between 3.87 and 4.93 $\mu$g/wet gram. This is above the Japanese regulation level of 0.4 $\mu$g/wet gram. Finding a t confidence interval for the mean Based on a sample of n independent and nearly normal observations, a confidence interval for the population mean is $\bar {x} \pm t*_{df} SE$ where $\bar {x}$ is the sample mean, $t*_{df}$ corresponds to the confidence level and degrees of freedom, and SE is the standard error as estimated by the sample. Exercise 5.20 The FDA's webpage provides some data on mercury content of fish.14 Based on a sample of 15 croaker white fish (Pacific), a sample mean and standard deviation were computed as 0.287 and 0.069 ppm (parts per million), respectively. The 15 observations ranged from 0.18 to 0.41 ppm. We will assume these observations are independent. Based on the summary statistics of the data, do you have any objections to the normality condition of the individual observations?15 Example 5.21 Estimate the standard error of $\bar {x} = 0.287$ ppm using the data summaries in Exercise 5.20. If we are to use the t distribution to create a 90% confidence interval for the actual mean of the mercury content, identify the degrees of freedom we should use and also find $t*_{df}$. The standard error: $SE = \frac {0.069}{\sqrt {15}} = 0.0178$. Degrees of freedom: df = n - 1 = 14. Looking in the column where two tails is 0.100 (for a 90% confidence interval) and row df = 14, we identify $t*_{14} = 1.76$. Exercise 5.22 Using the results of Exercise 5.20 and Example 5.21, compute a 90% confidence interval for the average mercury content of croaker white fish (Pacific).16 One sample t tests An SAT preparation company claims that its students' scores improve by over 100 points on average after their course. A consumer group would like to evaluate this claim, and they collect data on a random sample of 30 students who took the class. Each of these students took the SAT before and after taking the company's course, and so we have a difference in scores for each student. We will examine these differences $x_1 = 57, x_2 = 133, \dots, x_{30} = 140$ as a sample to evaluate the company's claim. 
(This is paired data, so we analyze the score differences; for a review of the ideas of paired data, see Section 5.2.) The distribution of the differences, shown in Figure 5.17, has mean 135.9 and standard deviation 82.2. Do these data provide convincing evidence to back up the company's claim? Exercise 5.23 Set up hypotheses to evaluate the company's claim. Use $\mu_{diff}$ to represent the true average difference in student scores.17 14 www.fda.gov/Food/FoodSafety/P...bornePathogens contaminants/Methylmercury/ucm115644.htm 15 There are no obvious outliers; all observations are within 2 standard deviations of the mean. If there is skew, it is not evident. There are no red flags for the normal model based on this (limited) information, and we do not have reason to believe the mercury content is not nearly normal in this type of fish. 16 $\bar {x} \pm t*_{14}SE \rightarrow 0.287 \pm 1.76 \times 0.0178 \rightarrow (0.256, 0.318)$. We are 90% confident that the average mercury content of croaker white fish (Pacific) is between 0.256 and 0.318 ppm. 17 This is a one-sided test. H0: student scores do not improve by more than 100 after taking the company's course. $\mu_{diff} = 100$ (we always write the null hypothesis with an equality). HA: student scores improve by more than 100 points on average after taking the company's course. $\mu_{diff} > 100$. Exercise 5.24 Are the conditions to use the t distribution method satisfied?18 Just as we did for the normal case, we standardize the sample mean using the Z score to identify the test statistic. However, we will write T instead of Z, because we have a small sample and are basing our inference on the t distribution: $T = \frac {\bar {x} - \text{null value}}{SE} = \frac {135.9 - 100}{\frac {82.2}{\sqrt {30}}} = 2.39$ If the null hypothesis was true, the test statistic T would follow a t distribution with df = n - 1 = 29 degrees of freedom. We can draw a picture of this distribution and mark the observed T, as in Figure 5.18. The shaded right tail represents the p-value: the probability of observing such strong evidence in favor of the SAT company's claim, if the average student improvement is really only 100. 18 This is a random sample from less than 10% of the company's students (assuming they have more than 300 former students), so the independence condition is reasonable. The normality condition also seems reasonable based on Figure 5.17. We can use the t distribution method. Note that we could use the normal distribution. However, since the sample size (n = 30) just meets the threshold for reasonably estimating the standard error, it is advisable to use the t distribution. Exercise 5.25 Use the t table in Appendix B.2 on page 410 to identify the p-value. What do you conclude?19 Exercise 5.26 Because we rejected the null hypothesis, does this mean that taking the company's class improves student scores by more than 100 points on average?20
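The one-sample t test for the SAT example above can be reproduced numerically from the summary statistics quoted in the text (mean difference 135.9, standard deviation 82.2, n = 30, null value 100). The sketch below is ours, not part of the original text; it simply automates the t-table lookup.

```python
import numpy as np
from scipy import stats

xbar, s, n, null_value = 135.9, 82.2, 30, 100

se = s / np.sqrt(n)             # estimated standard error
T = (xbar - null_value) / se    # test statistic
df = n - 1
p_value = stats.t.sf(T, df)     # one-sided (upper tail) p-value

print(f"T = {T:.2f}, df = {df}, p-value = {p_value:.4f}")
# T = 2.39 with df = 29; the p-value is about 0.012, consistent with the
# "between 0.01 and 0.025" bound read from the t table in Exercise 5.25.
```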
Are textbooks actually cheaper online? Here we compare the price of textbooks at UCLA's bookstore and prices at Amazon.com. Seventy-three UCLA courses were randomly sampled in Spring 2010, representing less than 10% of all UCLA courses (when a class had multiple books, only the most expensive text was considered). A portion of this data set is shown in Table $1$. Table $1$: Six cases of the textbooks data set.
dept course ucla amazon diff
1 Am Ind C170 27.67 27.95 -0.28
2 Anthro 9 40.59 31.14 9.45
3 Anthro 135T 31.68 32.00 -0.32
4 Anthro 191HB 16.00 11.52 4.48
$\vdots$
72 Wom Std M144 23.76 18.72 5.04
73 Wom Std 285 27.70 18.22 9.48
Paired Observations and Samples Each textbook has two corresponding prices in the data set: one for the UCLA bookstore and one for Amazon. Therefore, each textbook price from the UCLA bookstore has a natural correspondence with a textbook price from Amazon. When two sets of observations have this special correspondence, they are said to be paired. Paired data Two sets of observations are paired if each observation in one set has a special correspondence or connection with exactly one observation in the other data set. To analyze paired data, it is often useful to look at the difference in outcomes of each pair of observations. In the textbook data set, we look at the difference in prices, which is represented as the diff variable in the textbooks data. Here the differences are taken as $\text {UCLA price} - \text {Amazon price}$ for each book. It is important that we always subtract using a consistent order; here Amazon prices are always subtracted from UCLA prices. A histogram of these differences is shown in Figure $1$. Using differences between paired observations is a common and useful way to analyze paired data. Exercise $1$ The first difference shown in Table $1$ is computed as 27.67 - 27.95 = -0.28. Verify the differences are calculated correctly for observations 2 and 3. Solution • Observation 2: 40.59 - 31.14 = 9.45. • Observation 3: 31.68 - 32.00 = -0.32. Inference for Paired Data To analyze a paired data set, we use the exact same tools that we developed in Chapter 4. Now we apply them to the differences in the paired observations. Table $2$: Summary statistics for the price differences. There were 73 books, so there are 73 differences. $n_{diff}$ = 73, $\bar {x}_{diff}$ = 12.76, $s_{diff}$ = 14.26. Example $1$: UCLA vs. Amazon Set up and implement a hypothesis test to determine whether, on average, there is a difference between Amazon's price for a book and the UCLA bookstore's price. Solution There are two scenarios: there is no difference or there is some difference in average prices. The no difference scenario is always the null hypothesis: • H0: $\mu_{diff}$ = 0. There is no difference in the average textbook price. • HA: $\mu_{diff} \ne$ 0. There is a difference in average prices. Can the normal model be used to describe the sampling distribution of $\bar {x}_{diff}$? We must check that the differences meet the conditions established in Chapter 4. The observations are based on a simple random sample from less than 10% of all books sold at the bookstore, so independence is reasonable; there are more than 30 differences; and the distribution of differences, shown in Figure $1$, is strongly skewed, but this amount of skew is reasonable for this sized data set (n = 73). 
Because all three conditions are reasonably satisfied, we can conclude the sampling distribution of $\bar {x}_{diff}$ is nearly normal and our estimate of the standard error will be reasonable. We compute the standard error associated with $\bar {x}_{diff}$ using the standard deviation of the differences ($s_{diff}$ = 14.26) and the number of differences ($n_{diff}$ = 73): $SE_{\bar {x}_{diff}} = \dfrac {s_{diff}}{\sqrt {n_{diff}}} = \dfrac {14.26}{\sqrt {73}} = 1.67$ To visualize the p-value, the sampling distribution of $\bar {x}_{diff}$ is drawn as though H0 is true, which is shown in Figure $1$. The p-value is represented by the two (very) small tails. To find the tail areas, we compute the test statistic, which is the Z score of $\bar {x}_{diff}$ under the null condition that the actual mean difference is 0: $Z = \dfrac {\bar {x}_{diff} - 0}{SE_{\bar {x}_{diff}}} = \dfrac {12.76 - 0}{1.67} = 7.59$ This Z score is so large it is not even in the table, which ensures the single tail area will be 0.0002 or smaller. Since the p-value corresponds to both tails in this case and the normal distribution is symmetric, the p-value can be estimated as twice the one-tail area: $\text {p-value} = 2 \times \text {(one tail area)} \approx 2 \times 0.0002 = 0.0004$ Because the p-value is less than 0.05, we reject the null hypothesis. We have found convincing evidence that Amazon is, on average, cheaper than the UCLA bookstore for UCLA course textbooks. Exercise $2$ Create a 95% confidence interval for the average price difference between books at the UCLA bookstore and books on Amazon. Solution Conditions have already been verified and the standard error computed in Example $1$. To find the interval, identify $z^*$ (1.96 for 95% confidence) and plug it, the point estimate, and the standard error into the confidence interval formula: $\text {point estimate} \pm z^*SE \rightarrow 12.76 \pm 1.96 \times 1.67 \rightarrow (9.49, 16.03)$ We are 95% confident that Amazon is, on average, between $9.49 and $16.03 cheaper than the UCLA bookstore for UCLA course books.
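For readers who want to reproduce the paired analysis above, the following sketch uses only the summary statistics quoted in the text (73 differences with mean 12.76 and standard deviation 14.26). It is not part of the original text; with the raw price columns one could instead call scipy.stats.ttest_rel(ucla, amazon).

```python
import numpy as np
from scipy import stats

n_diff, xbar_diff, s_diff = 73, 12.76, 14.26

se = s_diff / np.sqrt(n_diff)              # standard error of the mean difference
Z = (xbar_diff - 0) / se                   # test statistic under H0: mu_diff = 0
p_value = 2 * stats.norm.sf(abs(Z))        # two-sided p-value

z_star = stats.norm.ppf(0.975)             # 1.96 for a 95% confidence interval
ci = xbar_diff + np.array([-1, 1]) * z_star * se

print(f"SE = {se:.2f}, Z = {Z:.2f}, p-value = {p_value:.1e}, 95% CI = {np.round(ci, 2)}")
# Matches the text up to rounding: SE = 1.67, Z about 7.6, CI about (9.49, 16.03).
```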
In this section we consider a difference in two population means, $\mu_1 - \mu_2$, under the condition that the data are not paired. The methods are similar in theory but different in the details. Just as with a single sample, we identify conditions to ensure a point estimate of the difference $\bar {x}_1 - \bar {x}_2$ is nearly normal. Next we introduce a formula for the standard error, which allows us to apply our general tools from Section 4.5. We apply these methods to two examples: participants in the 2012 Cherry Blossom Run and newborn infants. This section is motivated by questions like "Is there convincing evidence that newborns from mothers who smoke have a different average birth weight than newborns from mothers who don't smoke?" Point Estimates and Standard Errors for Differences of Means We would like to estimate the average difference in run times for men and women using the run10Samp data set, which was a simple random sample of 45 men and 55 women from all runners in the 2012 Cherry Blossom Run. Table $2$ presents relevant summary statistics, and box plots of each sample are shown in Figure 5.6. Table $2$: Summary statistics for the run time of 100 participants in the 2012 Cherry Blossom Run.
men: $\bar {x}$ = 87.65, s = 12.5, n = 45
women: $\bar {x}$ = 102.13, s = 15.2, n = 55
The two samples are independent of one another, so the data are not paired. Instead a point estimate of the difference in average 10 mile times for men and women, $\mu_w - \mu_m$, can be found using the two sample means: $\bar {x}_w - \bar {x}_m = 102.13 - 87.65 = 14.48$ Because we are examining two simple random samples from less than 10% of the population, each sample contains at least 30 observations, and neither distribution is strongly skewed, we can safely conclude the sampling distribution of each sample mean is nearly normal. Finally, because each sample is independent of the other (e.g. the data are not paired), we can conclude that the difference in sample means can be modeled using a normal distribution. (Probability theory guarantees that the difference of two independent normal random variables is also normal. Because each sample mean is nearly normal and observations in the samples are independent, we are assured the difference is also nearly normal.) Conditions for normality of $\bar {x}_1 - \bar {x}_2$ If the sample means, $\bar {x}_1$ and $\bar {x}_2$, each meet the criteria for having nearly normal sampling distributions and the observations in the two samples are independent, then the difference in sample means, $\bar {x}_1 - \bar {x}_2$, will have a sampling distribution that is nearly normal. We can quantify the variability in the point estimate, $\bar {x}_w - \bar {x}_m$, using the following formula for its standard error: $SE_{\bar {x}_w - \bar {x}_m} = \sqrt {\dfrac {\sigma^2_w}{n_w} + \dfrac {\sigma^2_m}{n_m}}$ We usually estimate this standard error using standard deviation estimates based on the samples: \begin{align} SE_{\bar {x}_w - \bar {x}_m} &\approx \sqrt {\dfrac {s^2_w}{n_w} + \dfrac {s^2_m}{n_m}} \\ &= \sqrt {\dfrac {15.2^2}{55} + \dfrac {12.5^2}{45}} \\ &= 2.77 \end{align} Because each sample has at least 30 observations ($n_w = 55$ and $n_m = 45$), this substitution using the sample standard deviation tends to be very good. 
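The point estimate and standard error just computed can be verified in a couple of lines. This sketch is based on the summary statistics in the table above and is not part of the original text.

```python
import numpy as np

# Cherry Blossom summary statistics: women (102.13, 15.2, 55) and men (87.65, 12.5, 45).
xbar_w, s_w, n_w = 102.13, 15.2, 55
xbar_m, s_m, n_m = 87.65, 12.5, 45

point_estimate = xbar_w - xbar_m
se = np.sqrt(s_w**2 / n_w + s_m**2 / n_m)

print(f"point estimate = {point_estimate:.2f} minutes, SE = {se:.2f}")  # 14.48 and 2.77
```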
Distribution of a difference of sample means The sample difference of two means, $\bar {x}_1 - \bar {x}_2$, is nearly normal with mean $\mu_1 - \mu_2$ and estimated standard error $SE_{\bar {x}_1-\bar {x}_2} = \sqrt {\dfrac {s^2_1}{n_1} + \dfrac {s^2_2}{n_2}} \label{5.4}$ when each sample mean is nearly normal and all observations are independent. Confidence Interval for the Difference When the data indicate that the point estimate $\bar {x}_1 - \bar {x}_2$ comes from a nearly normal distribution, we can construct a confidence interval for the difference in two means from the framework built in Chapter 4. Here a point estimate, $\bar {x}_w - \bar {x}_m = 14.48$, is associated with a normal model with standard error SE = 2.77. Using this information, the general confidence interval formula may be applied in an attempt to capture the true difference in means, in this case using a 95% confidence level: $\text {point estimate} \pm z^*SE \rightarrow 14.48 \pm 1.96 \times 2.77 = (9.05, 19.91)$ Based on the samples, we are 95% confident that men ran, on average, between 9.05 and 19.91 minutes faster than women in the 2012 Cherry Blossom Run. Exercise $1$ What does 95% confidence mean? Solution If we were to collect many such samples and create 95% confidence intervals for each, then about 95% of these intervals would contain the population difference, $\mu_w - \mu_m$. Exercise $2$ We may be interested in a different confidence level. Construct the 99% confidence interval for the population difference in average run times based on the sample data. Solution The only thing that changes is z*: we use z* = 2.58 for a 99% confidence level. (If the selection of $z^*$ is confusing, see Section 4.2.4 for an explanation.) The 99% confidence interval: $14.48 \pm 2.58 \times 2.77 \rightarrow (7.33, 21.63).$ We are 99% confident that the true difference in the average run times between men and women is between 7.33 and 21.63 minutes. Hypothesis Tests Based on a Difference in Means A data set called baby smoke represents a random sample of 150 cases of mothers and their newborns in North Carolina over a year. Four cases from this data set are represented in Table $2$. We are particularly interested in two variables: weight and smoke. The weight variable represents the weights of the newborns and the smoke variable describes which mothers smoked during pregnancy. We would like to know whether there is convincing evidence that newborns from mothers who smoke have a different average birth weight than newborns from mothers who don't smoke. We will use the North Carolina sample to try to answer this question. The smoking group includes 50 cases and the nonsmoking group contains 100 cases, represented in Figure $2$. Table $2$: Four cases from the baby smoke data set. The value "NA", shown for the first two entries of the first variable, indicates that piece of data is missing.
fAge mAge weeks weight sexBaby smoke
1 NA 13 37 5.00 female nonsmoker
2 NA 14 36 5.88 female nonsmoker
3 19 15 41 8.13 male smoker
$\vdots$
150 45 50 36 9.25 female nonsmoker
Example $1$ Set up appropriate hypotheses to evaluate whether there is a relationship between a mother smoking and average birth weight. Solution The null hypothesis represents the case of no difference between the groups. • H0: There is no difference in average birth weight for newborns from mothers who did and did not smoke. 
In statistical notation: $\mu_n - \mu_s = 0$, where $\mu_n$ represents non-smoking mothers and $\mu_s$ represents mothers who smoked. • HA: There is some difference in average newborn weights from mothers who did and did not smoke ($\mu_n - \mu_s \ne 0$). Summary statistics are shown for each sample in Table $3$. Because the data come from a simple random sample and consist of less than 10% of all such cases, the observations are independent. Additionally, each group's sample size is at least 30 and the skew in each sample distribution is strong (Figure $2$). However, this skew is reasonable for these sample sizes of 50 and 100. Therefore, each sample mean is associated with a nearly normal distribution. Table $3$: Summary statistics for the baby smoke data set.
smoker: mean = 6.78, st. dev. = 1.43, sample size = 50
nonsmoker: mean = 7.18, st. dev. = 1.60, sample size = 100
Exercise $3$ (a) What is the point estimate of the population difference, $\mu_n - \mu_s$? (b) Can we use a normal distribution to model this difference? (c) Compute the standard error of the point estimate from part (a). Solution (a) The difference in sample means is an appropriate point estimate: $\bar {x}_n - \bar {x}_s = 0.40$. (b) Because the samples are independent and each sample mean is nearly normal, their difference is also nearly normal. (c) The standard error of the estimate can be estimated using Equation \ref{5.4}: $SE = \sqrt {\dfrac {\sigma^2_n}{n_n} + \dfrac {\sigma^2_s}{n_s}} \approx \sqrt {\dfrac {s^2_n}{n_n} + \dfrac {s^2_s}{n_s}} = \sqrt {\dfrac {1.60^2}{100} + \dfrac {1.43^2}{50}} = 0.26$ The standard error estimate should be sufficiently accurate since the conditions were reasonably satisfied. Example $2$ If the null hypothesis from Example $1$ was true, what would be the expected value of the point estimate? And the standard deviation associated with this estimate? Draw a picture to represent the p-value. Solution If the null hypothesis was true, then we expect to see a difference near 0. The standard error corresponds to the standard deviation of the point estimate: 0.26. To depict the p-value, we draw the distribution of the point estimate as though H0 was true and shade areas representing at least as much evidence against H0 as what was observed. Both tails are shaded because it is a two-sided test. Example $3$ Compute the p-value of the hypothesis test using the figure in Example $2$, and evaluate the hypotheses using a significance level of $\alpha = 0.05$. Solution Since the point estimate is nearly normal, we can find the upper tail using the Z score and normal probability table: $Z = \dfrac {0.40 - 0}{0.26} = 1.54 \rightarrow \text {upper tail} = 1 - 0.938 = 0.062$ Because this is a two-sided test and we want the area of both tails, we double this single tail to get the p-value: 0.124. This p-value is larger than the significance level, 0.05, so we fail to reject the null hypothesis. There is insufficient evidence to say there is a difference in average birth weight of newborns from North Carolina mothers who did smoke during pregnancy and newborns from North Carolina mothers who did not smoke during pregnancy. Exercise $4$ Does the conclusion to Example $3$ mean that smoking and average birth weight are unrelated? Solution Absolutely not. It is possible that there is some difference but we did not detect it. If this is the case, we made a Type 2 Error. Exercise $5$ If we made a Type 2 Error and there is a difference, what could we have done differently in data collection to be more likely to detect such a difference? 
Solution We could have collected more data. If the sample sizes are larger, we tend to have a better shot at finding a difference if one exists. Summary for inference of the difference of two means When considering the difference of two means, there are two common cases: the two samples are paired or they are independent. (There are instances where the data are neither paired nor independent.) The paired case was treated in Section 5.2, where the one-sample methods were applied to the differences from the paired observations. We examined the second and more complex scenario in this section. When applying the normal model to the point estimate $\bar {x}_1 - \bar {x}_2$ (corresponding to unpaired data), it is important to verify conditions before applying the inference framework using the normal model. First, each sample mean must meet the conditions for normality; these conditions are described in Chapter 4 on page 168. Secondly, the samples must be collected independently (e.g. not paired data). When these conditions are satisfied, the general inference tools of Chapter 4 may be applied. For example, a confidence interval may take the following form: $\text {point estimate} \pm z^*SE$ When we compute the confidence interval for $\mu_1 - \mu_2$, the point estimate is the difference in sample means, the value $z^*$ corresponds to the confidence level, and the standard error is computed from Equation \ref{5.4}. While the point estimate and standard error formulas change a little, the framework for a confidence interval stays the same. This is also true in hypothesis tests for differences of means. In a hypothesis test, we apply the standard framework and use the specific formulas for the point estimate and standard error of a difference in two means. The test statistic represented by the Z score may be computed as $Z = \dfrac {\text {point estimate} - \text {null value}}{SE}$ When assessing the difference in two means, the point estimate takes the form $\bar {x}_1- \bar {x}_2$, and the standard error again takes the form of Equation \ref{5.4}. Finally, the null value is the difference in sample means under the null hypothesis. Just as in Chapter 4, the test statistic Z is used to identify the p-value. Examining the Standard Error Formula The formula for the standard error of the difference in two means is similar to the formula for other standard errors. Recall that the standard error of a single mean, $\bar {x}_1$, can be approximated by $SE_{\bar {x}_1} = \dfrac {s_1}{\sqrt {n_1}}$ where $s_1$ and $n_1$ represent the sample standard deviation and sample size. The standard error of the difference of two sample means can be constructed from the standard errors of the separate sample means: $SE_{\bar {x}_1- \bar {x}_2} = \sqrt {SE^2_{\bar {x}_1} + SE^2_{\bar {x}_2}} = \sqrt {\dfrac {s^2_1}{n_1} + \dfrac {s^2_2}{n_2}} \label {5.13}$ This special relationship follows from probability theory. Exercise $6$ Prerequisite: Section 2.4. We can rewrite Equation \ref{5.13} in a different way: $SE^2_{\bar {x}_1 - \bar {x}_2} = SE^2_{\bar {x}_1} + SE^2_{\bar {x}_2}$ Explain where this formula comes from using the ideas of probability theory.10 10 The standard error squared represents the variance of the estimate. If X and Y are two random variables with variances $\sigma^2_x$ and $\sigma^2_y$, then the variance of X - Y is $\sigma^2_x + \sigma^2_y$. Likewise, the variance corresponding to $\bar {x}_1 - \bar {x}_2$ is $\sigma^2_{\bar {x}_1} + \sigma^2_{\bar {x}_2}$. Because $\sigma^2_{\bar {x}_1}$ and $\sigma^2_{\bar {x}_2}$ are just another way of writing $SE^2_{\bar {x}_1}$ and $SE^2_{\bar {x}_2}$, the variance associated with $\bar {x}_1 - \bar {x}_2$ may be written as $SE^2_{\bar {x}_1} + SE^2_{\bar {x}_2}$.
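The relationship in Exercise $6$ — variances of independent estimates add, so standard errors combine in quadrature — can also be checked empirically. The simulation below is a sketch we added for illustration; the population means, standard deviations, sample sizes, and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(seed=2)
reps, n1, n2 = 200_000, 40, 60
sigma1, sigma2 = 3.0, 5.0

# Draw many pairs of independent samples and record each pair of sample means.
xbar1 = rng.normal(10.0, sigma1, size=(reps, n1)).mean(axis=1)
xbar2 = rng.normal(20.0, sigma2, size=(reps, n2)).mean(axis=1)

print("observed variance of the difference:", np.var(xbar1 - xbar2))
print("sum of the two variances:           ", sigma1**2 / n1 + sigma2**2 / n2)
# The two numbers agree closely, which is why the squared standard error of the
# difference is the sum of the individual squared standard errors.
```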
It is also useful to be able to compare two means for small samples. For instance, a teacher might like to test the notion that two versions of an exam were equally difficult. She could do so by randomly assigning each version to students. If she found that the average scores on the exams were so different that we cannot write it off as chance, then she may want to award extra points to students who took the more difficult exam. In a medical context, we might investigate whether embryonic stem cells can improve heart pumping capacity in individuals who have suffered a heart attack. We could look for evidence of greater heart health in the stem cell group against a control group. In this section we use the t distribution for the difference in sample means. We will again drop the minimum sample size condition and instead impose a strong condition on the distribution of the data. Sampling Distributions for the Difference in Two Means In the example of two exam versions, the teacher would like to evaluate whether there is convincing evidence that the difference in average scores between the two exams is not due to chance. It will be useful to extend the t distribution method from Section 5.1 to apply to a difference of means: $\bar {x}_1 - \bar {x}_2$ as a point estimate for $\mu_1 - \mu_2$. Our procedure for checking conditions mirrors what we did for large samples in Section 5.3. First, we verify the small sample conditions (independence and nearly normal data) for each sample separately, then we verify that the samples are also independent. For instance, if the teacher believes students in her class are independent, the exam scores are nearly normal, and the students taking each version of the exam were independent, then we can use the t distribution for inference on the point estimate $\bar {x}_1 - \bar {x}_2$. The formula for the standard error of $\bar {x}_1 - \bar {x}_2$, introduced in Section 5.3, also applies to small samples: $SE_{\bar {x}_1- \bar {x}_2} = \sqrt {SE^2_{\bar {x}_1} + SE^2_{\bar {x}_2}} = \sqrt { \dfrac {s^2_1}{n_1} + \dfrac {s^2_2}{n_2}} \tag {5.27}$ 19 We use the row with 29 degrees of freedom. The value T = 2.39 falls between the third and fourth columns. Because we are looking for a single tail, this corresponds to a p-value between 0.01 and 0.025. The p-value is guaranteed to be less than 0.05 (the default significance level), so we reject the null hypothesis. The data provide convincing evidence to support the company's claim that student scores improve by more than 100 points following the class. 20 This is an observational study, so we cannot make this causal conclusion. For instance, maybe SAT test takers tend to improve their score over time even if they don't take a special SAT class, or perhaps only the most motivated students take such SAT courses. Because we will use the t distribution, we will need to identify the appropriate degrees of freedom. This can be done using computer software. An alternative technique is to use the smaller of $n_1 - 1$ and $n_2 - 1$, which is the method we will apply in the examples and exercises.21 Using the t distribution for a difference in means The t distribution can be used for inference when working with the standardized difference of two means if (1) each sample meets the conditions for using the t distribution and (2) the samples are independent. We estimate the standard error of the difference of two means using Equation \ref{5.27}. Two Sample t test Summary statistics for each exam version are shown in Table 5.19. 
The teacher would like to evaluate whether this difference is so large that it provides convincing evidence that Version B was more difficult (on average) than Version A.

Table 5.19: Summary statistics of scores for each exam version.
Version A: n = 30, $\bar{x}$ = 79.4, s = 14, min = 45, max = 100
Version B: n = 27, $\bar{x}$ = 74.1, s = 20, min = 32, max = 100

Exercise $1$

Construct a two-sided hypothesis test to evaluate whether the observed difference in sample means, $\bar{x}_A - \bar{x}_B = 5.3$, might be due to chance.

Solution

Because the teacher did not expect one exam to be more difficult prior to examining the test results, she should use a two-sided hypothesis test. H0: the exams are equally difficult, on average. $\mu_A - \mu_B = 0$. HA: one exam was more difficult than the other, on average. $\mu_A - \mu_B \ne 0$.

Exercise $1$

To evaluate the hypotheses in Exercise 5.28 using the t distribution, we must first verify assumptions.
(a) Does it seem reasonable that the scores are independent within each group?
(b) What about the normality condition for each group?
(c) Do you think scores from the two groups would be independent of each other (i.e. the two samples are independent)?

Solution

(a) It is probably reasonable to conclude the scores are independent. (b) The summary statistics suggest the data are roughly symmetric about the mean, and it doesn't seem unreasonable to suggest the data might be normal. Note that since these samples are each nearing 30, moderate skew in the data would be acceptable. (c) It seems reasonable to suppose that the samples are independent since the exams were handed out randomly.

After verifying the conditions for each sample and confirming the samples are independent of each other, we are ready to conduct the test using the t distribution. In this case, we are estimating the true difference in average test scores using the sample data, so the point estimate is $\bar{x}_A - \bar{x}_B = 5.3$. The standard error of the estimate can be calculated using Equation \ref{5.27}:

$SE = \sqrt{\dfrac{s^2_A}{n_A} + \dfrac{s^2_B}{n_B}} = \sqrt{\dfrac{14^2}{30} + \dfrac{20^2}{27}} = 4.62$

21This technique for degrees of freedom is conservative with respect to a Type 1 Error; it is more difficult to reject the null hypothesis using this df method.

Figure 5.20: The t distribution with 26 degrees of freedom. The shaded right tail represents values with T $\ge$ 1.15. Because it is a two-sided test, we also shade the corresponding lower tail.

Finally, we construct the test statistic:

$T = \dfrac{\text{point estimate - null value}}{SE} = \dfrac{(79.4 - 74.1) - 0}{4.62} = 1.15$

If we have a computer handy, we can identify the degrees of freedom as 45.97. Otherwise we use the smaller of $n_1 - 1$ and $n_2 - 1$: df = 26.

Exercise $1$

Identify the p-value, shown in Figure 5.20. Use df = 26.

Solution

We examine row df = 26 in the t table. Because this value is smaller than the value in the left column, the p-value is larger than 0.200 (two tails!). Because the p-value is so large, we do not reject the null hypothesis. That is, the data do not convincingly show that one exam version is more difficult than the other, and the teacher should not be convinced that she should add points to the Version B exam scores. In Exercise 5.30, we could have used df = 45.97. However, this value is not listed in the table. In such cases, we use the next lower degrees of freedom (unless the computer also provides the p-value). For example, we could have used df = 45 but not df = 46.
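For readers who want to check the arithmetic with software, here is a minimal sketch using SciPy (an assumption on our part; the text itself does not prescribe any particular package). It reproduces T of roughly 1.15 and a two-sided p-value well above 0.20 using the conservative df = 26.

```python
from math import sqrt
from scipy import stats

xbar_A, s_A, n_A = 79.4, 14, 30
xbar_B, s_B, n_B = 74.1, 20, 27

se = sqrt(s_A**2 / n_A + s_B**2 / n_B)     # about 4.62
T = (xbar_A - xbar_B - 0) / se             # about 1.15
df = min(n_A, n_B) - 1                     # conservative df = 26

p_value = 2 * stats.t.sf(abs(T), df)       # two-sided p-value
print(round(T, 2), df, round(p_value, 3))  # p is roughly 0.26, larger than 0.200
```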
Exercise $1$

Do embryonic stem cells (ESCs) help improve heart function following a heart attack? Table 5.21 contains summary statistics for an experiment to test ESCs in sheep that had a heart attack. Each of these sheep was randomly assigned to the ESC or control group, and the change in their hearts' pumping capacity was measured. A positive value generally corresponds to increased pumping capacity, which suggests a stronger recovery.
(a) Set up hypotheses that will be used to test whether there is convincing evidence that ESCs actually increase the amount of blood the heart pumps.
(b) Check conditions for using the t distribution for inference with the point estimate $\bar{x}_1 - \bar{x}_2$. To assist in this assessment, the data are presented in Figure 5.22.

Solution

(a) We first set up the hypotheses:
• H0: The stem cells do not improve heart pumping function. $\mu_{esc} - \mu_{control} = 0$.
• HA: The stem cells do improve heart pumping function. $\mu_{esc} - \mu_{control} > 0$.
(b) Because the sheep were randomly assigned their treatment and, presumably, were kept separate from one another, the independence assumption is reasonable for each sample as well as for between samples. The data are very limited, so we can only check for obvious outliers in the raw data in Figure 5.22. Since the distributions are (very) roughly symmetric, we will assume the normality condition is acceptable. Because the conditions are satisfied, we can apply the t distribution.

Table 5.21: Summary statistics for the embryonic stem cell study.
ESCs: n = 9, $\bar{x}$ = 3.50, s = 5.17
control: n = 9, $\bar{x}$ = -4.33, s = 2.76

Figure 5.23: Distribution of the sample difference of the test statistic if the null hypothesis was true. The shaded area, hardly visible in the right tail, represents the p-value.

Example $1$

Use the data from Table 5.21 and df = 8 to evaluate the hypotheses for the ESC experiment described in Exercise 5.31.

Solution

First, we compute the sample difference and the standard error for that point estimate:

$\bar{x}_{esc} - \bar{x}_{control} = 7.83$

$SE = \sqrt{\dfrac{5.17^2}{9} + \dfrac{2.76^2}{9}} = 1.95$

The p-value is depicted as the shaded slim right tail in Figure 5.23, and the test statistic is computed as follows:

$T = \dfrac{7.83 - 0}{1.95} = 4.02$

We use the smaller of $n_1 - 1$ and $n_2 - 1$ (each are the same) for the degrees of freedom: df = 8. Finally, we look for T = 4.02 in the t table; it falls to the right of the last column, so the p-value is smaller than 0.005 (one tail!). Because the p-value is less than 0.005 and therefore also smaller than 0.05, we reject the null hypothesis. The data provide convincing evidence that embryonic stem cells improve the heart's pumping function in sheep that have suffered a heart attack.

Two sample t confidence interval

The results from the previous section provided evidence that ESCs actually help improve the pumping function of the heart. But how large is this improvement? To answer this question, we can use a confidence interval.

Exercise $1$

In Exercise 5.31, you found that the point estimate, $\bar{x}_{esc} - \bar{x}_{control} = 7.83$, has a standard error of 1.95. Using df = 8, create a 99% confidence interval for the improvement due to ESCs.

Solution

We know the point estimate, 7.83, and the standard error, 1.95. We also verified the conditions for using the t distribution in Exercise 5.31. Thus, we only need to identify $t^*_{8}$ to create a 99% confidence interval: $t^*_{8} = 3.36$.
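The same approach covers the one-sided ESC test. The sketch below is illustrative only: it uses the summary statistics from Table 5.21 with the conservative df = 8, and scipy.stats.t.sf for the upper-tail p-value (SciPy is an assumption, not something the text specifies).

```python
from math import sqrt
from scipy import stats

xbar_esc, s_esc, n_esc = 3.50, 5.17, 9
xbar_con, s_con, n_con = -4.33, 2.76, 9

diff = xbar_esc - xbar_con                      # 7.83
se = sqrt(s_esc**2 / n_esc + s_con**2 / n_con)  # about 1.95
T = (diff - 0) / se                             # about 4.02
df = min(n_esc, n_con) - 1                      # 8

p_value = stats.t.sf(T, df)                     # one-sided (upper tail), below 0.005
print(round(diff, 2), round(se, 2), round(T, 2), round(p_value, 4))
```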
The 99% confidence interval for the improvement from ESCs is given by

$\text{point estimate} \pm t^*_{8} \times SE \rightarrow 7.83 \pm 3.36 \times 1.95 \rightarrow (1.28, 14.38)$

That is, we are 99% confident that the true improvement in heart pumping function is somewhere between 1.28% and 14.38%.

Pooled Standard Deviation Estimate (special topic)

Occasionally, two populations will have standard deviations that are so similar that they can be treated as identical. For example, historical data or a well-understood biological mechanism may justify this strong assumption. In such cases, we can make our t distribution approach slightly more precise by using a pooled standard deviation. The pooled standard deviation of two groups is a way to use data from both samples to better estimate the standard deviation and standard error. If $s_1$ and $s_2$ are the standard deviations of groups 1 and 2 and there are good reasons to believe that the population standard deviations are equal, then we can obtain an improved estimate of the group variances by pooling their data:

$s^2_{pooled} = \dfrac{s^2_1 \times (n_1 - 1) + s^2_2 \times (n_2 - 1)}{n_1 + n_2 - 2}$

where $n_1$ and $n_2$ are the sample sizes, as before. To use this new statistic, we substitute $s^2_{pooled}$ in place of $s^2_1$ and $s^2_2$ in the standard error formula, and we use an updated formula for the degrees of freedom:

$df = n_1 + n_2 - 2$

The benefits of pooling the standard deviation are realized through obtaining a better estimate of the standard deviation for each group and using a larger degrees of freedom parameter for the t distribution. Both of these changes may permit a more accurate model of the sampling distribution of $\bar{x}_1 - \bar{x}_2$.

Caution: Pooling standard deviations should be done only after careful research. A pooled standard deviation is only appropriate when background research indicates the population standard deviations are nearly equal. When the sample size is large and the condition may be adequately checked with data, the benefits of pooling the standard deviations greatly diminish.
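A minimal sketch of the pooled estimate follows, using hypothetical sample standard deviations and sizes (these numbers are not from the text): the pooled SD replaces both $s_1$ and $s_2$ in the standard error formula, and the degrees of freedom become $n_1 + n_2 - 2$.

```python
from math import sqrt

def pooled_sd(s1, n1, s2, n2):
    """Pooled standard deviation when the population SDs are believed equal."""
    return sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))

def se_pooled(sp, n1, n2):
    """Standard error of x1bar - x2bar using the pooled SD (df = n1 + n2 - 2)."""
    return sp * sqrt(1 / n1 + 1 / n2)

# Hypothetical inputs, just to show the calls:
sp = pooled_sd(5.2, 12, 4.8, 15)
print(round(sp, 3), round(se_pooled(sp, 12, 15), 3), 12 + 15 - 2)  # pooled df = 25
```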
Comparing Many Means with ANOVA (Special Topic)
Sometimes we want to compare means across many groups. We might initially think to do pairwise comparisons; for example, if there were three groups, we might be tempted to compare the first mean with the second, then with the third, and then finally compare the second and third means for a total of three comparisons. However, this strategy can be treacherous. If we have many groups and do many comparisons, it is likely that we will eventually find a difference just by chance, even if there is no difference in the populations. In this section, we will learn a new method called analysis of variance (ANOVA) and a new test statistic called F. ANOVA uses a single hypothesis test to check whether the means across many groups are equal:

• H0: The mean outcome is the same across all groups. In statistical notation, $\mu_1 = \mu_2 = \dots = \mu_k$ where $\mu_i$ represents the mean of the outcome for observations in category i.
• HA: At least one mean is different.

Generally we must check three conditions on the data before performing ANOVA:
• the observations are independent within and across groups,
• the data within each group are nearly normal, and
• the variability across the groups is about equal.

When these three conditions are met, we may perform an ANOVA to determine whether the data provide strong evidence against the null hypothesis that all the $\mu_i$ are equal.

Example $1$

College departments commonly run multiple lectures of the same introductory course each semester because of high demand. Consider a statistics department that runs three lectures of an introductory statistics course. We might like to determine whether there are statistically significant differences in first exam scores in these three classes (A, B, and C). Describe appropriate hypotheses to determine whether there are any differences between the three classes.

Solution

The hypotheses may be written in the following form:
• H0: The average score is identical in all lectures. Any observed difference is due to chance. Notationally, we write $\mu_A = \mu_B = \mu_C$.
• HA: The average score varies by class.
We would reject the null hypothesis in favor of the alternative hypothesis if there were larger differences among the class averages than what we might expect from chance alone. Strong evidence favoring the alternative hypothesis in ANOVA is described by unusually large differences among the group means. We will soon learn that assessing the variability of the group means relative to the variability among individual observations within each group is key to ANOVA's success.

Example $2$

Examine Figure $1$. Compare groups I, II, and III. Can you visually determine if the differences in the group centers are due to chance or not? Now compare groups IV, V, and VI. Do these differences appear to be due to chance?

Figure $1$: Side-by-side dot plot for the outcomes for six groups.

Solution

Any real difference in the means of groups I, II, and III is difficult to discern, because the data within each group are very volatile relative to any differences in the average outcome. On the other hand, it appears there are differences in the centers of groups IV, V, and VI. For instance, group V appears to have a higher mean than that of the other two groups. Investigating groups IV, V, and VI, we see the differences in the groups' centers are noticeable because those differences are large relative to the variability in the individual observations within each group.

Is Batting Performance Related to Player Position in MLB?
We would like to discern whether there are real differences between the batting performance of baseball players according to their position: outfielder (OF), infielder (IF), designated hitter (DH), and catcher (C). We will use a data set called bat10, which includes batting records of 327 Major League Baseball (MLB) players from the 2010 season. Six of the 327 cases represented in bat10 are shown in Table $1$, and descriptions for each variable are provided in Table $2$. The measure we will use for the player batting performance (the outcome variable) is on-base percentage (OBP). The on-base percentage roughly represents the fraction of the time a player successfully gets on base or hits a home run.

Table $1$: Six cases from the bat10 data matrix.
1: I Suzuki, SEA, OF, AB 680, H 214, HR 6, RBI 43, AVG 0.315, OBP 0.359
2: D Jeter, NYY, IF, AB 663, H 179, HR 10, RBI 67, AVG 0.270, OBP 0.340
3: M Young, TEX, IF, AB 656, H 186, HR 21, RBI 91, AVG 0.284, OBP 0.330
$\vdots$
325: B Molina, SF, C, AB 202, H 52, HR 3, RBI 17, AVG 0.257, OBP 0.312
326: J Thole, NYM, C, AB 202, H 56, HR 3, RBI 17, AVG 0.277, OBP 0.357
327: C Heisey, CIN, OF, AB 201, H 51, HR 8, RBI 21, AVG 0.254, OBP 0.324

Exercise $1$

The null hypothesis under consideration is the following: $\mu_{OF} = \mu_{IF} = \mu_{DH} = \mu_{C}$. Write the null and corresponding alternative hypotheses in plain language.

Solution

• H0: The average on-base percentage is equal across the four positions.
• HA: The average on-base percentage varies across some (or all) groups.

Table $2$: Variables and their descriptions for the bat10 data set.
name: Player name
team: The abbreviated name of the player's team
position: The player's primary field position (OF, IF, DH, C)
AB: Number of opportunities at bat
H: Number of hits
HR: Number of home runs
RBI: Number of runs batted in
AVG: Batting average, which is equal to H/AB
OBP: On-base percentage, which is roughly equal to the fraction of times a player gets on base or hits a home run

Example $3$

The player positions have been divided into four groups: outfield (OF), infield (IF), designated hitter (DH), and catcher (C). What would be an appropriate point estimate of the on-base percentage by outfielders, $\mu_{OF}$?

Solution

A good estimate of the on-base percentage by outfielders would be the sample average of OBP for just those players whose position is outfield: $\bar{x}_{OF} = 0.334$. Table $3$ provides summary statistics for each group. A side-by-side box plot for the on-base percentage is shown in Figure $1$. Notice that the variability appears to be approximately constant across groups; nearly constant variance across groups is an important assumption that must be satisfied before we consider the ANOVA approach.

Table $3$: Summary statistics of on-base percentage, split by player position.
OF: sample size $n_i$ = 120, sample mean $\bar{x}_i$ = 0.334, sample SD $s_i$ = 0.029
IF: sample size = 154, sample mean = 0.332, sample SD = 0.037
DH: sample size = 14, sample mean = 0.348, sample SD = 0.036
C: sample size = 39, sample mean = 0.323, sample SD = 0.045

Example $1$

The largest difference between the sample means is between the designated hitter and the catcher positions. Consider again the original hypotheses:
• H0: $\mu_{OF} = \mu_{IF} = \mu_{DH} = \mu_{C}$
• HA: The average on-base percentage ($\mu_i$) varies across some (or all) groups.
Why might it be inappropriate to run the test by simply estimating whether the difference of $\mu_{DH}$ and $\mu_{C}$ is statistically significant at a 0.05 significance level?

Solution

The primary issue here is that we are inspecting the data before picking the groups that will be compared.
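If the bat10 data were available as a data frame, the per-position summaries in Table $3$ could be produced with a single groupby, as in the sketch below (the file name is hypothetical; the column names mirror the variable descriptions in Table $2$).

```python
import pandas as pd

# Hypothetical file name; the real bat10 data ship with the OpenIntro materials.
bat10 = pd.read_csv("bat10.csv")

# Sample size, mean, and SD of OBP for each primary field position (OF, IF, DH, C).
summary = bat10.groupby("position")["OBP"].agg(["count", "mean", "std"])
print(summary)
```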
It is inappropriate to examine all data by eye (informal testing) and only afterwards decide which parts to formally test. This is called data snooping or data fishing. Naturally we would pick the groups with the large differences for the formal test, leading to an inflation in the Type 1 Error rate. To understand this better, let's consider a slightly different problem. Suppose we are to measure the aptitude for students in 20 classes in a large elementary school at the beginning of the year. In this school, all students are randomly assigned to classrooms, so any differences we observe between the classes at the start of the year are completely due to chance. However, with so many groups, we will probably observe a few groups that look rather different from each other. If we select only these classes that look so different, we will probably make the wrong conclusion that the assignment wasn't random. While we might only formally test differences for a few pairs of classes, we informally evaluated the other classes by eye before choosing the most extreme cases for a comparison. For additional information on the ideas expressed in Example 5.38, we recommend reading about the prosecutor's fallacy (See, for example, www.stat.columbia.edu/~cook/movabletype/archives/2007/05/the prosecutors.html.)

In the next section we will learn how to use the F statistic and ANOVA to test whether observed differences in means could have happened just by chance even if there was no difference in the respective population means.

Analysis of variance (ANOVA) and the F test

The method of analysis of variance in this context focuses on answering one question: is the variability in the sample means so large that it seems unlikely to be from chance alone? This question is different from earlier testing procedures since we will simultaneously consider many groups, and evaluate whether their sample means differ more than we would expect from natural variation. We call this variability the mean square between groups (MSG), and it has an associated degrees of freedom, $df_G = k - 1$ when there are k groups. The MSG can be thought of as a scaled variance formula for means. If the null hypothesis is true, any variation in the sample means is due to chance and shouldn't be too large. Details of MSG calculations are provided in the footnote,29 however, we typically use software for these computations.

The mean square between the groups is, on its own, quite useless in a hypothesis test. We need a benchmark value for how much variability should be expected among the sample means if the null hypothesis is true. To this end, we compute a pooled variance estimate, often abbreviated as the mean square error (MSE), which has an associated degrees of freedom value $df_E = n - k$. It is helpful to think of MSE as a measure of the variability within the groups. Details of the computations of the MSE are provided in the footnote30 for interested readers.

When the null hypothesis is true, any differences among the sample means are only due to chance, and the MSG and MSE should be about equal. As a test statistic for ANOVA, we examine the ratio of MSG to MSE:

$F = \frac{MSG}{MSE} \label{5.39}$

The MSG represents a measure of the between-group variability, and MSE measures the variability within each of the groups.

Exercise $1$

For the baseball data, MSG = 0.00252 and MSE = 0.00127. Identify the degrees of freedom associated with MSG and MSE and verify the F statistic is approximately 1.994.
Solution

There are k = 4 groups, so $df_G = k - 1 = 3$. There are $n = n_1 + n_2 + n_3 + n_4 = 327$ total observations, so $df_E = n - k = 323$. Then the F statistic is computed as the ratio of MSG and MSE:

$F = \frac{MSG}{MSE} = \frac{0.00252}{0.00127} = 1.984 \approx 1.994$

(F = 1.994 was computed by using values for MSG and MSE that were not rounded.)

We can use the F statistic to evaluate the hypotheses in what is called an F test. A p-value can be computed from the F statistic using an F distribution, which has two associated parameters: $df_1$ and $df_2$. For the F statistic in ANOVA, $df_1 = df_G$ and $df_2 = df_E$. An F distribution with 3 and 323 degrees of freedom, corresponding to the F statistic for the baseball hypothesis test, is shown in Figure 5.29.

29Let $\bar{x}$ represent the mean of outcomes across all groups. Then the mean square between groups is computed as $MSG = \frac{1}{df_G} SSG = \frac{1}{k - 1} \sum\limits^k_{i=1} n_i {(\bar{x}_i - \bar{x})}^2$ where SSG is called the sum of squares between groups and $n_i$ is the sample size of group i.

30Let $\bar{x}$ represent the mean of outcomes across all groups. Then the sum of squares total (SST) is computed as $SST = \sum\limits_{i=1}^n {(y_i - \bar{x})}^2$ where the sum is over all observations in the data set. Then we compute the sum of squared errors (SSE) in one of two equivalent ways: $SSE = SST - SSG = (n_1 - 1)s^2_1 + (n_2 - 1)s^2_2 + \dots + (n_k - 1)s^2_k$ where $s^2_i$ is the sample variance (square of the standard deviation) of the residuals in group i. Then the MSE is the standardized form of SSE: $MSE = \frac{1}{df_E} SSE$.

The larger the observed variability in the sample means (MSG) relative to the within-group observations (MSE), the larger F will be and the stronger the evidence against the null hypothesis. Because larger values of F represent stronger evidence against the null hypothesis, we use the upper tail of the distribution to compute a p-value.

The F statistic and the F test: Analysis of variance (ANOVA) is used to test whether the mean outcome differs across 2 or more groups. ANOVA uses a test statistic F, which represents a standardized ratio of variability in the sample means relative to the variability within the groups. If H0 is true and the model assumptions are satisfied, the statistic F follows an F distribution with parameters $df_1 = k - 1$ and $df_2 = n - k$. The upper tail of the F distribution is used to represent the p-value.

Exercise $1$

The test statistic for the baseball example is F = 1.994. Shade the area corresponding to the p-value in Figure 5.29.

Example $1$

The p-value corresponding to the shaded area in the solution of Exercise 5.41 is equal to about 0.115. Does this provide strong evidence against the null hypothesis?

Solution

The p-value is larger than 0.05, indicating the evidence is not strong enough to reject the null hypothesis at a significance level of 0.05. That is, the data do not provide strong evidence that the average on-base percentage varies by player's primary field position.

Reading an ANOVA table from software

The calculations required to perform an ANOVA by hand are tedious and prone to human error. For these reasons, it is common to use statistical software to calculate the F statistic and p-value. An ANOVA can be summarized in a table very similar to that of a regression summary, which we will see in Chapters 7 and 8.
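Because the F statistic in Equation \ref{5.39} is just the ratio MSG/MSE and its p-value is an upper-tail area of the F distribution, the calculation for the baseball data can be reproduced in a few lines. The sketch below uses SciPy (an assumption; the text does not name a package) together with the MSG and MSE values reported above.

```python
from scipy import stats

k, n = 4, 327
df_G, df_E = k - 1, n - k            # 3 and 323

MSG, MSE = 0.00252, 0.00127
F = MSG / MSE                        # about 1.98 (1.994 with unrounded MSG and MSE)

p_value = stats.f.sf(F, df_G, df_E)  # upper-tail area of F(3, 323)
print(round(F, 3), round(p_value, 3))  # p-value is roughly 0.115
```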
Table 5.30 shows an ANOVA summary to test whether the mean of on-base percentage varies by player positions in the MLB. Many of these values should look familiar; in particular, the F test statistic and p-value can be retrieved from the last columns.

Table 5.30: ANOVA summary for testing whether the average on-base percentage differs across player positions.
position: Df 3, Sum Sq 0.0076, Mean Sq 0.0025, F value 1.9943, Pr(>F) 0.1147
Residuals: Df 323, Sum Sq 0.4080, Mean Sq 0.0013

Graphical Diagnostics for an ANOVA Analysis

There are three conditions we must check for an ANOVA analysis: all observations must be independent, the data in each group must be nearly normal, and the variance within each group must be approximately equal.

• Independence. If the data are a simple random sample from less than 10% of the population, this condition is satisfied. For processes and experiments, carefully consider whether the data may be independent (e.g. no pairing). For example, in the MLB data, the data were not sampled. However, there are not obvious reasons why independence would not hold for most or all observations.

• Approximately normal. As with one- and two-sample testing for means, the normality assumption is especially important when the sample size is quite small. The normal probability plots for each group of the MLB data are shown in Figure 5.31; there is some deviation from normality for infielders, but this isn't a substantial concern since there are about 150 observations in that group and the outliers are not extreme. Sometimes in ANOVA there are so many groups or so few observations per group that checking normality for each group isn't reasonable. See the footnote33 for guidance on how to handle such instances.

• Constant variance. The last assumption is that the variance in the groups is about equal from one group to the next. This assumption can be checked by examining a side-by-side box plot of the outcomes across the groups, as in Figure 5.28. In this case, the variability is similar in the four groups but not identical. We see in Table 5.27 that the standard deviation varies a bit from one group to the next. Whether these differences are from natural variation is unclear, so we should report this uncertainty with the final results.

33First calculate the residuals of the baseball data, which are calculated by taking the observed values and subtracting the corresponding group means. For example, an outfielder with OBP of 0.405 would have a residual of $0.405 - \bar{x}_{OF} = 0.071$. Then to check the normality condition, create a normal probability plot using all the residuals simultaneously.

Caution: Diagnostics for an ANOVA analysis. Independence is always important to an ANOVA analysis. The normality condition is very important when the sample sizes for each group are relatively small. The constant variance condition is especially important when the sample sizes differ between groups.

Multiple comparisons and controlling Type 1 Error rate

When we reject the null hypothesis in an ANOVA analysis, we might wonder, which of these groups have different means? To answer this question, we compare the means of each possible pair of groups. For instance, if there are three groups and there is strong evidence that there are some differences in the group means, there are three comparisons to make: group 1 to group 2, group 1 to group 3, and group 2 to group 3.
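Statistical software produces tables like Table 5.30 directly from the raw data. As one possibility (an assumption; any ANOVA routine would do), scipy.stats.f_oneway computes the F statistic and p-value from one array of outcomes per group. The sketch below uses short placeholder lists rather than the actual 327-player data set.

```python
from scipy import stats

# Placeholder lists standing in for the OBP values of each position group;
# with the full bat10 data these would each hold 14 to 154 observations.
obp_of = [0.359, 0.324, 0.330, 0.345]
obp_if = [0.340, 0.330, 0.312, 0.301]
obp_dh = [0.348, 0.351, 0.345, 0.360]
obp_c  = [0.312, 0.357, 0.323, 0.298]

F, p = stats.f_oneway(obp_of, obp_if, obp_dh, obp_c)
print(F, p)   # on the real data this reproduces F of about 1.99, p of about 0.115
```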
These comparisons can be accomplished using a two-sample t test, but we use a modified significance level and a pooled estimate of the standard deviation across groups. Usually this pooled standard deviation can be found in the ANOVA table, e.g. along the bottom of Table 5.30.

Table 5.32: Summary statistics for the first midterm scores in three different lectures of the same course.
Class A: $n_i$ = 58, $\bar{x}_i$ = 75.1, $s_i$ = 13.9
Class B: $n_i$ = 55, $\bar{x}_i$ = 72.0, $s_i$ = 13.8
Class C: $n_i$ = 51, $\bar{x}_i$ = 78.9, $s_i$ = 13.1

Example $1$

Example 5.34 discussed three statistics lectures, all taught during the same semester. Table 5.32 shows summary statistics for these three courses, and a side-by-side box plot of the data is shown in Figure 5.33. We would like to conduct an ANOVA for these data. Do you see any deviations from the three conditions for ANOVA?

Solution

In this case (like many others) it is difficult to check independence in a rigorous way. Instead, the best we can do is use common sense to consider reasons the assumption of independence may not hold. For instance, the independence assumption may not be reasonable if there is a star teaching assistant that only half of the students may access; such a scenario would divide a class into two subgroups. No such situations were evident for these particular data, and we believe that independence is acceptable. The distributions in the side-by-side box plot appear to be roughly symmetric and show no noticeable outliers. The box plots show approximately equal variability, which can be verified in Table 5.32, supporting the constant variance assumption.

Exercise $1$

An ANOVA was conducted for the midterm data, and summary results are shown in Table 5.34. What should we conclude?

Solution

The p-value of the test is 0.0330, less than the default significance level of 0.05. Therefore, we reject the null hypothesis and conclude that the difference in the average midterm scores is not due to chance.

Table 5.34: ANOVA summary table for the midterm data.
lecture: Df 2, Sum Sq 1290.11, Mean Sq 645.06, F value 3.48, Pr(>F) 0.0330
Residuals: Df 161, Sum Sq 29810.13, Mean Sq 185.16

There is strong evidence that the different means in each of the three classes is not simply due to chance. We might wonder, which of the classes are actually different? As discussed in earlier chapters, a two-sample t test could be used to test for differences in each possible pair of groups. However, one pitfall was discussed in Example 5.38: when we run so many tests, the Type 1 Error rate increases. This issue is resolved by using a modified significance level.

Multiple comparisons and the Bonferroni correction for $\alpha$

The scenario of testing many pairs of groups is called multiple comparisons. The Bonferroni correction suggests that a more stringent significance level is more appropriate for these tests:

$\alpha^* = \frac{\alpha}{K}$

where K is the number of comparisons being considered (formally or informally). If there are k groups, then usually all possible pairs are compared and $K = \frac{k(k - 1)}{2}$.

Example 5.45

In Exercise 5.44, you found strong evidence of differences in the average midterm grades between the three lectures. Complete the three possible pairwise comparisons using the Bonferroni correction and report any differences. We use a modified significance level of $\alpha^* = \frac{0.05}{3} = 0.0167$.
Additionally, we use the pooled estimate of the standard deviation: $s_{pooled} = 13.61$ on df = 161, which is provided in the ANOVA summary table.

Lecture A versus Lecture B: The estimated difference and standard error are, respectively,

$\bar{x}_A - \bar{x}_B = 75.1 - 72.0 = 3.1$

$SE = \sqrt{\frac{13.61^2}{58} + \frac{13.61^2}{55}} = 2.56$

(See Section 5.4.4 for additional details.) This results in a T score of 1.21 on df = 161 (we use the df associated with $s_{pooled}$). Statistical software was used to precisely identify the two-tailed p-value since the modified significance level of 0.0167 is not found in the t table. The p-value (0.228) is larger than $\alpha^* = 0.0167$, so there is not strong evidence of a difference in the means of lectures A and B.

Lecture A versus Lecture C: The estimated difference and standard error are 3.8 and 2.61, respectively. This results in a T score of 1.46 on df = 161 and a two-tailed p-value of 0.1462. This p-value is larger than $\alpha^*$, so there is not strong evidence of a difference in the means of lectures A and C.

Lecture B versus Lecture C: The estimated difference and standard error are 6.9 and 2.65, respectively. This results in a T score of 2.60 on df = 161 and a two-tailed p-value of 0.0102. This p-value is smaller than $\alpha^*$. Here we find strong evidence of a difference in the means of lectures B and C.

We might summarize the findings of the analysis from Example 5.45 using the following notation:

$\mu_A \overset{?}{=} \mu_B, \quad \mu_A \overset{?}{=} \mu_C, \quad \mu_B \ne \mu_C$

The midterm mean in lecture A is not statistically distinguishable from those of lectures B or C. However, there is strong evidence that lectures B and C are different. In the first two pairwise comparisons, we did not have sufficient evidence to reject the null hypothesis. Recall that failing to reject H0 does not imply H0 is true.

Caution: Sometimes an ANOVA will reject the null but no groups will have statistically significant differences. It is possible to reject the null hypothesis using ANOVA and then to not subsequently identify differences in the pairwise comparisons. However, this does not invalidate the ANOVA conclusion. It only means we have not been able to successfully identify which groups differ in their means. The ANOVA procedure examines the big picture: it considers all groups simultaneously to decipher whether there is evidence that some difference exists. Even if the test indicates that there is strong evidence of differences in group means, identifying with high confidence a specific difference as statistically significant is more difficult.

Consider the following analogy: we observe a Wall Street firm that makes large quantities of money based on predicting mergers. Mergers are generally difficult to predict, and if the prediction success rate is extremely high, that may be considered sufficiently strong evidence to warrant investigation by the Securities and Exchange Commission (SEC). While the SEC may be quite certain that there is insider trading taking place at the firm, the evidence against any single trader may not be very strong. It is only when the SEC considers all the data that they identify the pattern. This is effectively the strategy of ANOVA: stand back and consider all the groups simultaneously.
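The three pairwise comparisons in Example 5.45 follow a single recipe: a two-sample t statistic built from the pooled SD, compared against the Bonferroni-adjusted level $\alpha^* = 0.0167$. Below is a hedged sketch of that loop, using the summary statistics in Table 5.32 and SciPy for the p-values (SciPy is an assumption, not something the text requires).

```python
from math import sqrt
from itertools import combinations
from scipy import stats

lectures = {"A": (58, 75.1), "B": (55, 72.0), "C": (51, 78.9)}  # n and mean, Table 5.32
s_pooled, df = 13.61, 161
alpha_star = 0.05 / 3                                            # about 0.0167

for (g1, (n1, m1)), (g2, (n2, m2)) in combinations(lectures.items(), 2):
    se = s_pooled * sqrt(1 / n1 + 1 / n2)     # SE built from the pooled SD
    T = (m1 - m2) / se
    p = 2 * stats.t.sf(abs(T), df)            # two-sided p-value on df = 161
    verdict = "reject" if p < alpha_star else "fail to reject"
    print(g1, "vs", g2, round(T, 2), round(p, 4), verdict)
```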
Exercises
Paired data

5.1 Global warming, Part I. Is there strong evidence of global warming? Let's consider a small scale example, comparing how temperatures have changed in the US from 1968 to 2008. The daily high temperature reading on January 1 was collected in 1968 and 2008 for 51 randomly selected locations in the continental US. Then the difference between the two readings (temperature in 2008 - temperature in 1968) was calculated for each of the 51 different locations. The average of these 51 values was 1.1 degrees with a standard deviation of 4.9 degrees. We are interested in determining whether these data provide strong evidence of temperature warming in the continental US.
(a) Is there a relationship between the observations collected in 1968 and 2008? Or are the observations in the two groups independent? Explain.
(b) Write hypotheses for this research in symbols and in words.
(c) Check the conditions required to complete this test.
(d) Calculate the test statistic and find the p-value.
(e) What do you conclude? Interpret your conclusion in context.
(f) What type of error might we have made? Explain in context what the error means.
(g) Based on the results of this hypothesis test, would you expect a confidence interval for the average difference between the temperature measurements from 1968 and 2008 to include 0? Explain your reasoning.

5.2 High School and Beyond, Part I. The National Center for Education Statistics conducted a survey of high school seniors, collecting test data on reading, writing, and several other subjects. Here we examine a simple random sample of 200 students from this survey. Side-by-side box plots of reading and writing scores as well as a histogram of the differences in scores are shown below.
(a) Is there a clear difference in the average reading and writing scores?
(b) Are the reading and writing scores of each student independent of each other?
(c) Create hypotheses appropriate for the following research question: is there an evident difference in the average scores of students in the reading and writing exam?
(d) Check the conditions required to complete this test.
(e) The average observed difference in scores is $\bar{x}_{\text{read-write}} = -0.545$, and the standard deviation of the differences is 8.887 points. Do these data provide convincing evidence of a difference between the average scores on the two exams?
(f) What type of error might we have made? Explain what the error means in the context of the application.
(g) Based on the results of this hypothesis test, would you expect a confidence interval for the average difference between the reading and writing scores to include 0? Explain your reasoning.

5.3 Global warming, Part II. We considered the differences between the temperature readings in January 1 of 1968 and 2008 at 51 locations in the continental US in Exercise 5.1. The mean and standard deviation of the reported differences are 1.1 degrees and 4.9 degrees.
(a) Calculate a 90% confidence interval for the average difference between the temperature measurements between 1968 and 2008.
(b) Interpret this interval in context.
(c) Does the confidence interval provide convincing evidence that the temperature was higher in 2008 than in 1968 in the continental US? Explain.

5.4 High school and beyond, Part II. We considered the differences between the reading and writing scores of a random sample of 200 students who took the High School and Beyond Survey in Exercise 5.2.
The mean and standard deviation of the differences are $\bar{x}_{\text{read-write}} = -0.545$ and 8.887 points.
(a) Calculate a 95% confidence interval for the average difference between the reading and writing scores of all students.
(b) Interpret this interval in context.
(c) Does the confidence interval provide convincing evidence that there is a real difference in the average scores? Explain.

5.5 Gifted children. Researchers collected a simple random sample of 36 children who had been identified as gifted in a large city. The following histograms show the distributions of the IQ scores of mothers and fathers of these children. Also provided are some sample statistics.35
(a) Are the IQs of mothers and the IQs of fathers in this data set related? Explain.
(b) Conduct a hypothesis test to evaluate if the scores are equal on average. Make sure to clearly state your hypotheses, check the relevant conditions, and state your conclusion in the context of the data.

5.6 Paired or not? In each of the following scenarios, determine if the data are paired.
(a) We would like to know if Intel's stock and Southwest Airlines' stock have similar rates of return. To find out, we take a random sample of 50 days for Intel's stock and another random sample of 50 days for Southwest's stock.
(b) We randomly sample 50 items from Target stores and note the price for each. Then we visit Walmart and collect the price for each of those same 50 items.
(c) A school board would like to determine whether there is a difference in average SAT scores for students at one high school versus another high school in the district. To check, they take a simple random sample of 100 students from each high school.

35F.A. Graybill and H.K. Iyer. Regression Analysis: Concepts and Applications. Duxbury Press, 1994, pp. 511-516.

Difference of two means

5.7 Math scores of 13 year olds, Part I. The National Assessment of Educational Progress tested a simple random sample of 1,000 thirteen year old students in both 2004 and 2008 (two separate simple random samples). The average and standard deviation in 2004 were 257 and 39, respectively. In 2008, the average and standard deviation were 260 and 38, respectively. Calculate a 90% confidence interval for the change in average scores from 2004 to 2008, and interpret this interval in the context of the application. (Reminder: check conditions.)36

5.8 Work hours and education, Part I. The General Social Survey collects data on demographics, education, and work, among many other characteristics of US residents. The histograms below display the distributions of hours worked per week for two education groups: those with and without a college degree.37 Suppose we want to estimate the average difference between the number of hours worked per week by all Americans with a college degree and those without a college degree. Summary information for each group is shown in the tables.
(a) What is the parameter of interest, and what is the point estimate?
(b) Are conditions satisfied for estimating this difference using a confidence interval?
(c) Create a 95% confidence interval for the difference in number of hours worked between the two groups, and interpret the interval in context.
(d) Can you think of any real world justification for your results? (Note: There isn't a single correct answer to this question.)

5.9 Math scores of 13 year olds, Part II.
Exercise 5.7 provides data on the average math scores from tests conducted by the National Assessment of Educational Progress in 2004 and 2008. Two separate simple random samples were taken in each of these years. The average and standard deviation in 2004 were 257 and 39, respectively. In 2008, the average and standard deviation were 260 and 38, respectively.
(a) Do these data provide strong evidence that the average math score for 13 year old students has changed from 2004 to 2008? Use a 10% significance level.
(b) It is possible that your conclusion in part (a) is incorrect. What type of error is possible for this conclusion? Explain.
(c) Based on your hypothesis test, would you expect a 90% confidence interval to contain the null value? Explain.

36National Center for Education Statistics, NAEP Data Explorer.
37National Opinion Research Center, General Social Survey, 2010.

5.10 Work hours and education, Part II. The General Social Survey described in Exercise 5.8 included random samples from two groups: US residents with a college degree and US residents without a college degree. For the 505 sampled US residents with a college degree, the average number of hours worked each week was 41.8 hours with a standard deviation of 15.1 hours. For those 667 without a degree, the mean was 39.4 hours with a standard deviation of 15.1 hours. Conduct a hypothesis test to check for a difference in the average number of hours worked for the two groups.

5.11 Does the Paleo diet work? The Paleo diet allows only for foods that humans typically consumed over the last 2.5 million years, excluding those agriculture-type foods that arose during the last 10,000 years or so. Researchers randomly divided 500 volunteers into two equal-sized groups. One group spent 6 months on the Paleo diet. The other group received a pamphlet about controlling portion sizes. Randomized treatment assignment was performed, and at the beginning of the study, the average difference in weights between the two groups was about 0. After the study, the Paleo group had lost on average 7 pounds with a standard deviation of 20 pounds while the control group had lost on average 5 pounds with a standard deviation of 12 pounds.
(a) The 95% confidence interval for the difference between the two population parameters (Paleo - control) is given as (-0.891, 4.891). Interpret this interval in the context of the data.
(b) Based on this confidence interval, do the data provide convincing evidence that the Paleo diet is more effective for weight loss than the pamphlet (control)? Explain your reasoning.
(c) Without explicitly performing the hypothesis test, do you think that if the Paleo group had lost 8 instead of 7 pounds on average, and everything else was the same, the results would then indicate a significant difference between the treatment and control groups? Explain your reasoning.

5.12 Weight gain during pregnancy. In 2004, the state of North Carolina released to the public a large data set containing information on births recorded in this state. This data set has been of interest to medical researchers who are studying the relationship between habits and practices of expectant mothers and the birth of their children. The following histograms show the distributions of weight gain during pregnancy by 867 younger moms (less than 35 years old) and 133 mature moms (35 years old and over) who have been randomly sampled from this large data set.
The average weight gain of younger moms is 30.56 pounds, with a standard deviation of 14.35 pounds, and the average weight gain of mature moms is 28.79 pounds, with a standard deviation of 13.48 pounds. Calculate a 95% confidence interval for the difference between the average weight gain of younger and mature moms. Also comment on whether or not this interval provides strong evidence that there is a significant difference between the two population means.

5.13 Body fat in women and men. The third National Health and Nutrition Examination Survey collected body fat percentage (BF) data from 13,601 subjects whose ages are 20 to 80. A summary table for these data is given below. Note that BF is given as mean $\pm$ standard error. Construct a 95% confidence interval for the difference in average body fat percentages between men and women, and explain the meaning of this interval.38

Men: n = 6,580, BF (%) = 23.9 $\pm$ 0.07
Women: n = 7,021, BF (%) = 35.0 $\pm$ 0.09

5.14 Child care hours, Part I. The China Health and Nutrition Survey aims to examine the effects of the health, nutrition, and family planning policies and programs implemented by national and local governments. One of the variables collected on the survey is the number of hours parents spend taking care of children in their household under age 6 (feeding, bathing, dressing, holding, or watching them). In 2006, 487 females and 312 males were surveyed for this question. On average, females reported spending 31 hours with a standard deviation of 31 hours, and males reported spending 16 hours with a standard deviation of 21 hours. Calculate a 95% confidence interval for the difference between the average number of hours Chinese males and females spend taking care of their children under age 6. Also comment on whether this interval suggests a significant difference between the two population parameters. You may assume that conditions for inference are satisfied.39

One-sample means with the t distribution

5.15 Identify the critical t. An independent random sample is selected from an approximately normal population with unknown standard deviation. Find the degrees of freedom and the critical t value ($t^*$) for the given sample size and confidence level.
(a) n = 6, CL = 90%
(b) n = 21, CL = 98%
(c) n = 29, CL = 95%
(d) n = 12, CL = 99%

5.16 Working backwards, Part I. A 90% confidence interval for a population mean is (65, 77). The population distribution is approximately normal and the population standard deviation is unknown. This confidence interval is based on a simple random sample of 25 observations. Calculate the sample mean, the margin of error, and the sample standard deviation.

5.17 Working backwards, Part II. A 95% confidence interval for a population mean, $\mu$, is given as (18.985, 21.015). This confidence interval is based on a simple random sample of 36 observations. Calculate the sample mean and standard deviation. Assume that all conditions necessary for inference are satisfied. Use the t distribution in any calculations.

5.18 Find the p-value. An independent random sample is selected from an approximately normal population with an unknown standard deviation. Find the p-value for the given set of hypotheses and T test statistic. Also determine if the null hypothesis would be rejected at $\alpha = 0.05$.
(a) $H_A: \mu > \mu_0, n = 11, T = 1.91$
(b) $H_A: \mu < \mu_0, n = 17, T = -3.45$
(c) $H_A: \mu \ne \mu_0, n = 7, T = 0.83$
(d) $H_A: \mu > \mu_0, n = 28, T = 2.13$

38A Romero-Corral et al.
"Accuracy of body mass index in diagnosing obesity in the adult general population". In: International Journal of Obesity 32.6 (2008), pp. 959-966. 39UNC Carolina Population Center, China Health and Nutrition Survey, 2006. 5.19 Sleep habits of New Yorkers. New York is known as "the city that never sleeps". A random sample of 25 New Yorkers were asked how much sleep they get per night. Statistical summaries of these data are shown below. Do these data provide strong evidence that New Yorkers sleep less than 8 hours a night on average? n $\bar {x}$ s min max 25 7.73 0.77 6.17 9.78 1. (a) Write the hypotheses in symbols and in words. 2. (b) Check conditions, then calculate the test statistic, T, and the associated degrees of freedom. 3. (c) Find and interpret the p-value in this context. Drawing a picture may be helpful. 4. (d) What is the conclusion of the hypothesis test? 5. (e) If you were to construct a 90% confidence interval that corresponded to this hypothesis test, would you expect 8 hours to be in the interval? 5.20 Fuel efficiency of Prius. Fueleconomy.gov, the official US government source for fuel economy information, allows users to share gas mileage information on their vehicles. The histogram below shows the distribution of gas mileage in miles per gallon (MPG) from 14 users who drive a 2012 Toyota Prius. The sample mean is 53.3 MPG and the standard deviation is 5.2 MPG. Note that these data are user estimates and since the source data cannot be veri ed, the accuracy of these estimates are not guaranteed.40 1. (a) We would like to use these data to evaluate the average gas mileage of all 2012 Prius drivers. Do you think this is reasonable? Why or why not? 2. (b) The EPA claims that a 2012 Prius gets 50 MPG (city and highway mileage combined). Do these data provide strong evidence against this estimate for drivers who participate on fueleconomy.gov? Note any assumptions you must make as you proceed with the test. 3. (c) Calculate a 95% confidence interval for the average gas mileage of a 2012 Prius by drivers who participate on fueleconomy.gov. 5.21 Find the mean. You are given the following hypotheses: • H0 : $\mu$ = 60 • HA : $\mu$ < 60 We know that the sample standard deviation is 8 and the sample size is 20. For what sample mean would the p-value be equal to 0.05? Assume that all conditions necessary for inference are satisfied. 5.22 t* vs. z*. For a given confidence level, t* df is larger than z*. Explain how $t^*_{df} being slightly larger than z* affects the width of the confidence interval. 40Fuelecomy.gov, Shared MPG Estimates: Toyota Prius 2012. The t distribution for the difference of two means 5.23 Cleveland vs. Sacramento. Average income varies from one region of the country to another, and it often reects both lifestyles and regional living expenses. Suppose a new graduate is considering a job in two locations, Cleveland, OH and Sacramento, CA, and he wants to see whether the average income in one of these cities is higher than the other. He would like to conduct a t test based on two small samples from the 2000 Census, but he first must consider whether the conditions are met to implement the test. Below are histograms for each city. Should he move forward with the t test? Explain your reasoning. 5.24 Oscar winners. The rst Oscar awards for best actor and best actress were given out in 1929. The histograms below show the age distribution for all of the best actor and best actress winners from 1929 to 2012. 
Summary statistics for these distributions are also provided. Is a t test appropriate for evaluating whether the difference in the average ages of best actors and actresses might be due to chance? Explain your reasoning.41

41Oscar winners from 1929 - 2012, data up to 2009 from the Journal of Statistics Education data archive and more current data from Wikipedia.org.

5.25 Friday the 13th, Part I. In the early 1990's, researchers in the UK collected data on traffic flow, number of shoppers, and traffic accident related emergency room admissions on Friday the 13th and the previous Friday, Friday the 6th. The histograms below show the distribution of number of cars passing by a specific intersection on Friday the 6th and Friday the 13th for many such date pairs. Also given are some sample statistics, where the difference is the number of cars on the 6th minus the number of cars on the 13th.42

6th: $\bar{x}$ = 128,385, s = 7,259, n = 10
13th: $\bar{x}$ = 126,550, s = 7,664, n = 10
Diff.: $\bar{x}$ = 1,835, s = 1,176, n = 10

(a) Are there any underlying structures in these data that should be considered in an analysis? Explain.
(b) What are the hypotheses for evaluating whether the number of people out on Friday the 6th is different than the number out on Friday the 13th?
(c) Check conditions to carry out the hypothesis test from part (b).
(d) Calculate the test statistic and the p-value.
(e) What is the conclusion of the hypothesis test?
(f) Interpret the p-value in this context.
(g) What type of error might have been made in the conclusion of your test? Explain.

5.26 Diamonds, Part I. Prices of diamonds are determined by what is known as the 4 Cs: cut, clarity, color, and carat weight. The prices of diamonds go up as the carat weight increases, but the increase is not smooth. For example, the difference between the size of a 0.99 carat diamond and a 1 carat diamond is undetectable to the naked human eye, but the price of a 1 carat diamond tends to be much higher than the price of a 0.99 carat diamond. In this question we use two random samples of diamonds, 0.99 carats and 1 carat, each sample of size 23, and compare the average prices of the diamonds. In order to be able to compare equivalent units, we first divide the price for each diamond by 100 times its weight in carats. That is, for a 0.99 carat diamond, we divide the price by 99. For a 1 carat diamond, we divide the price by 100. The distributions and some sample statistics are shown below.43

0.99 carats: mean = $44.51, SD = $13.32, n = 23
1 carat: mean = $56.81, SD = $16.13, n = 23

Conduct a hypothesis test to evaluate if there is a difference between the average standardized prices of 0.99 and 1 carat diamonds. Make sure to state your hypotheses clearly, check relevant conditions, and interpret your results in context of the data.

42T.J. Scanlon et al. "Is Friday the 13th Bad For Your Health?" In: BMJ 307 (1993), pp. 1584-1586.
43H. Wickham. ggplot2: elegant graphics for data analysis. Springer New York, 2009.

5.27 Friday the 13th, Part II. The Friday the 13th study reported in Exercise 5.25 also provides data on traffic accident related emergency room admissions. The distributions of these counts from Friday the 6th and Friday the 13th are shown below for six such paired dates along with summary statistics. You may assume that conditions for inference are met.
(a) Conduct a hypothesis test to evaluate if there is a difference between the average numbers of traffic accident related emergency room admissions between Friday the 6th and Friday the 13th.
(b) Calculate a 95% confidence interval for the difference between the average numbers of traffic accident related emergency room admissions between Friday the 6th and Friday the 13th.
(c) The conclusion of the original study states, "Friday 13th is unlucky for some. The risk of hospital admission as a result of a transport accident may be increased by as much as 52%. Staying at home is recommended." Do you agree with this statement? Explain your reasoning.

5.28 Diamonds, Part II. In Exercise 5.26, we discussed diamond prices (standardized by weight) for diamonds with weights 0.99 carats and 1 carat. See the table for summary statistics, and then construct a 95% confidence interval for the average difference between the standardized prices of 0.99 and 1 carat diamonds. You may assume the conditions for inference are met.

0.99 carats: mean = $44.51, SD = $13.32, n = 23
1 carat: mean = $56.81, SD = $16.13, n = 23

5.29 Chicken diet and weight, Part I. Chicken farming is a multi-billion dollar industry, and any methods that increase the growth rate of young chicks can reduce consumer costs while increasing company profits, possibly by millions of dollars. An experiment was conducted to measure and compare the effectiveness of various feed supplements on the growth rate of chickens. Newly hatched chicks were randomly allocated into six groups, and each group was given a different feed supplement. Below are some summary statistics from this data set along with box plots showing the distribution of weights by feed type.44
(a) Describe the distributions of weights of chickens that were fed linseed and horsebean.
(b) Do these data provide strong evidence that the average weights of chickens that were fed linseed and horsebean are different? Use a 5% significance level.
(c) What type of error might we have committed? Explain.
(d) Would your conclusion change if we used $\alpha$ = 0.01?

5.30 Fuel efficiency of manual and automatic cars, Part I. Each year the US Environmental Protection Agency (EPA) releases fuel economy data on cars manufactured in that year. Below are summary statistics on fuel efficiency (in miles/gallon) from random samples of cars with manual and automatic transmissions manufactured in 2012. Do these data provide strong evidence of a difference between the average fuel efficiency of cars with manual and automatic transmissions in terms of their average city mileage? Assume that conditions for inference are satisfied.45

5.31 Chicken diet and weight, Part II. Casein is a common weight gain supplement for humans. Does it have an effect on chickens? Using data provided in Exercise 5.29, test the hypothesis that the average weight of chickens that were fed casein is different than the average weight of chickens that were fed soybean. If your hypothesis test yields a statistically significant result, discuss whether or not the higher average weight of chickens can be attributed to the casein diet. Assume that conditions for inference are satisfied.

44Chicken Weights by Feed Type, from the datasets package in R.
45U.S. Department of Energy, Fuel Economy Data, 2012 Data file.

5.32 Fuel efficiency of manual and automatic cars, Part II. The table provides summary statistics on highway fuel economy of cars manufactured in 2012 (from Exercise 5.30). Use these statistics to calculate a 98% confidence interval for the difference between average highway mileage of manual and automatic cars, and interpret this interval in the context of the data.46

5.33 Gaming and distracted eating, Part I.
A group of researchers are interested in the possible effects of distracting stimuli during eating, such as an increase or decrease in the amount of food consumption. To test this hypothesis, they monitored food intake for a group of 44 patients who were randomized into two equal groups. The treatment group ate lunch while playing solitaire, and the control group ate lunch without any added distractions. Patients in the treatment group ate 52.1 grams of biscuits, with a standard deviation of 45.1 grams, and patients in the control group ate 27.1 grams of biscuits, with a standard deviation of 26.4 grams. Do these data provide convincing evidence that the average food intake (measured in amount of biscuits consumed) is different for the patients in the treatment group? Assume that conditions for inference are satisfied.47 5.34 Gaming and distracted eating, Part II. The researchers from Exercise 5.33 also investigated the effects of being distracted by a game on how much people eat. The 22 patients in the treatment group who ate their lunch while playing solitaire were asked to do a serial-order recall of the food lunch items they ate. The average number of items recalled by the patients in this group was 4.9, with a standard deviation of 1.8. The average number of items recalled by the patients in the control group (no distraction) was 6.1, with a standard deviation of 1.8. Do these data provide strong evidence that the average number of food items recalled by the patients in the treatment and control groups are different? 5.35 Prison isolation experiment, Part I. Subjects from Central Prison in Raleigh, NC, volunteered for an experiment involving an \isolation" experience. The goal of the experiment was to nd a treatment that reduces subjects' psychopathic deviant T scores. This score measures a person's need for control or their rebellion against control, and it is part of a commonly used mental health test called the Minnesota Multiphasic Personality Inventory (MMPI) test. The experiment had three treatment groups: 1. (1) Four hours of sensory restriction plus a 15 minute "therapeutic" tape advising that professional help is available. 2. (2) Four hours of sensory restriction plus a 15 minute "emotionally neutral" tape on training hunting dogs. 3. (3) Four hours of sensory restriction but no taped message. Forty-two subjects were randomly assigned to these treatment groups, and an MMPI test was administered before and after the treatment. Distributions of the differences between pre and 46U.S. Department of Energy, Fuel Economy Data, 2012 Data file. 47R.E. Oldham-Cooper et al. "Playing a computer game during lunch affects fullness, memory for lunch, and later snack intake". In: The American Journal of Clinical Nutrition 93.2 (2011), p. 308. post treatment scores (pre - post) are shown below, along with some sample statistics. Use this information to independently test the effectiveness of each treatment. Make sure to clearly state your hypotheses, check conditions, and interpret results in the context of the data.48 5.36 True or false, Part I. Determine if the following statements are true or false, and explain your reasoning for statements you identify as false. 1. (a) When comparing means of two samples where $n_1 = 20$ and $n_2 = 40$, we can use the normal model for the difference in means since $n_2 \ge 30$. 2. (b) As the degrees of freedom increases, the T distribution approaches normality. 3. 
(c) We use a pooled standard error for calculating the standard error of the difference between means when sample sizes of groups are equal to each other. Comparing many means with ANOVA 5.37 Chicken diet and weight, Part III. In Exercises 5.29 and 5.31 we compared the effects of two types of feed at a time. A better analysis would rst consider all feed types at once: casein, horsebean, linseed, meat meal, soybean, and sunower. The ANOVA output below can be used to test for differences between the average weights of chicks on different diets. Df Sum Sq Mean Sq F value Pr (>F) feed Residuals 5 65 231,129.16 195,556.02 46,225.83 3,008.55 15.36 0.0000 Conduct a hypothesis test to determine if these data provide convincing evidence that the average weight of chicks varies across some (or all) groups. Make sure to check relevant conditions. Figures and summary statistics are shown below. 5.38 Student performance across discussion sections. A professor who teaches a large introductory statistics class (197 students) with eight discussion sections would like to test if student performance differs by discussion section, where each discussion section has a different teaching assistant. The summary table below shows the average nal exam score for each discussion section as well as the standard deviation of scores and the number of students in each section. Sec 1 Sec 2 Sec 3 Sec 4 Sec 5 Sec 6 Sec 7 Sec 8 $n_i$ $\bar {x}_i$ $s_i$ 33 92.94 4.21 19 91.11 5.58 10 91.80 3.43 29 92.45 5.92 33 89.30 9.32 10 88.30 7.27 32 90.12 6.93 31 93.35 4.57 The ANOVA output below can be used to test for differences between the average scores from the different discussion sections. Df Sum Sq Mean Sq F value Pr (>F) Section Residuals 7 189 525.01 7584.11 75.00 40.13 1.87 0.0767 Conduct a hypothesis test to determine if these data provide convincing evidence that the average score varies across some (or all) groups. Check conditions and describe any assumptions you must make to proceed with the test. 5.39 Coffee, depression, and physical activity. Caffeine is the world's most widely used stimulant, with approximately 80% consumed in the form of coffee. Participants in a study investigating the relationship between coffee consumption and exercise were asked to report the number of hours they spent per week on moderate (e.g., brisk walking) and vigorous (e.g., strenuous sports and jogging) exercise. Based on these data the researchers estimated the total hours of metabolic equivalent tasks (MET) per week, a value always greater than 0. The table below gives summary statistics of MET for women in this study based on the amount of coffee consumed.49 Caffeinated coffee consumption $\le 1 cup/week$ 2-6 cus/week 1 cup/day 2-3 cups/day $\ge 4 cups/day$ Total Mean SD n 18.7 21.1 12,215 19.6 25.5 6,617 19.3 22.5 17,234 18.9 22.0 12,290 17.5 22.0 2,838 50,739 1. (a) Write the hypotheses for evaluating if the average physical activity level varies among the different levels of coffee consumption. 2. (b) Check conditions and describe any assumptions you must make to proceed with the test. 3. (c) Below is part of the output associated with this test. Fill in the empty cells. Df Sum Sq Mean Sq F value Pr (>F) Section Residuals Total --------------- --------------- --------------- --------------- 25,564,819 25,575,327 ---------------- ---------------- ---------------- 0.0003 (d) What is the conclusion of the test? 49M. Lucas et al. "Coffee, caffeine, and risk of depression among women". In: Archives of internal medicine 171.17 (2011), p. 
1571. 5.40 Work hours and education, Part III. In Exercises 5.8 and 5.10 you worked with data from the General Social Survey in order to compare the average number of hours worked per week by US residents with and without a college degree. However, this analysis didn't take advantage of the original data which contained more accurate information on educational attainment (less than high school, high school, junior college, Bachelor's, and graduate school). Using ANOVA, we can consider educational attainment levels for all 1,172 respondents at once instead of re-categorizing them into two groups. Below are the distributions of hours worked by educational attainment and relevant summary statistics that will be helpful in carrying out this analysis. Educational attainment Less than HS HS Jr Coll BAchelor's Graduate Total Mean SD n 38.67 15.81 121 39.6 14.97 546 41.39 18.1 97 42.55 13.62 253 40.85 15.51 155 40.45 15.17 1,172 1. (a) Write hypotheses for evaluating whether the average number of hours worked varies across the ve groups. 2. (b) Check conditions and describe any assumptions you must make to proceed with the test. 3. (c) Below is part of the output associated with this test. Fill in the empty cells. Df Sum Sq Mean Sq F value Pr (>F) degree Residuals Total --------------- --------------- --------------- --------------- 267,382 --------------- 501.54 ---------------- ---------------- 0.0682 (d) What is the conclusion of the test? 5.41 GPA and major. Undergraduate students taking an introductory statistics course at Duke University conducted a survey about GPA and major. The side-by-side box plots show the distribution of GPA among three groups of majors. Also provided is the ANOVA output. Df Sum Sq Mean Sq F value Pr (>F) major Residuals 2 195 0.03 15.77 0.02 0.08 0.21 0.8068 1. (a) Write the hypotheses for testing for a difference between average GPA across majors. 2. (b) What is the conclusion of the hypothesis test? 3. (c) How many students answered these questions on the survey, i.e. what is the sample size? 5.42 Child care hours, Part II. Exercise 5.14 introduces the China Health and Nutrition Survey which, among other things, collects information on number of hours Chinese parents spend taking care of their children under age 6. The side by side box plots below show the distribution of this variable by educational attainment of the parent. Also provided below is the ANOVA output for comparing average hours across educational attainment categories. Df Sum Sq Mean Sq F value Pr (>F) education Residuals 4 794 4142.09 653047.83 1035.52 822.48 1.26 0.2846 1. (a) Write the hypotheses for testing for a difference between the average number of hours spent on child care across educational attainment levels. 2. (b) What is the conclusion of the hypothesis test? 5.43 True or false, Part II. Determine if the following statements are true or false in ANOVA, and explain your reasoning for statements you identify as false. 1. (a) As the number of groups increases, the modi ed signi cance level for pairwise tests increases as well. 2. (b) As the total sample size increases, the degrees of freedom for the residuals increases as well. 3. (c) The constant variance condition can be somewhat relaxed when the sample sizes are relatively consistent across groups. 4. (d) The independence assumption can be relaxed when the total sample size is large. 5.44 True or false, Part III. Determine if the following statements are true or false, and explain your reasoning for statements you identify as false. 
If the null hypothesis that the means of four groups are all the same is rejected using ANOVA at a 5% signi cance level, then ... 1. (a) we can then conclude that all the means are different from one another. 2. (b) the standardized variability between groups is higher than the standardized variability within groups. 3. (c) the pairwise analysis will identify at least one pair of means that are signi cantly different. 4. (d) the appropriate to be used in pairwise comparisons is $\frac {0.05}{4} = 0.0125$ since there are four groups. 5.45 Prison isolation experiment, Part II. Exercise 5.35 introduced an experiment that was conducted with the goal of identifying a treatment that reduces subjects' psychopathic deviant T scores, where this score measures a person's need for control or his rebellion against control. In Exercise 5.35 you evaluated the success of each treatment individually. An alternative analysis involves comparing the success of treatments. The relevant ANOVA output is given below. Df Sum Sq Mean Sq F value Pr (>F) treatment Residuals 2 39 639.48 3740.43 319.74 95.91 3.33 0.0461 1. (a) What are the hypotheses? 2. (b) What is the conclusion of the test? Use a 5% significance level. 3. (c) If in part (b) you determined that the test is signi cant, conduct pairwise tests to determine which groups are different from each other. If you did not reject the null hypothesis in part (b), recheck your solution. Contributors David M Diez (Google/YouTube), Christopher D Barr (Harvard School of Public Health), Mine Çetinkaya-Rundel (Duke University)
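Several of the exercises above compare two group means using only summary statistics. As a rough numerical check for that kind of problem, the short Python sketch below (an illustration, not part of the original exercise set; it assumes SciPy is installed) runs a Welch two-sample t test on the 0.99 carat versus 1 carat diamond summaries from Exercise 5.26.

from scipy import stats

# Summary statistics from Exercise 5.26 (standardized diamond prices).
t_stat, p_value = stats.ttest_ind_from_stats(
    mean1=44.51, std1=13.32, nobs1=23,   # 0.99 carat diamonds
    mean2=56.81, std2=16.13, nobs2=23,   # 1 carat diamonds
    equal_var=False)                     # Welch's t test (no pooled variance)
print(t_stat, p_value)                   # t is roughly -2.8; compare the p-value to alpha = 0.05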
Chapter 6 introduces inference in the setting of categorical data. We use these methods to answer questions like the following: • What proportion of the American public approves of the job the Supreme Court is doing? • The Pew Research Center conducted a poll about support for the 2010 health care law, and they used two forms of the survey question. Each respondent was randomly given one of the two questions. What is the difference in the support for respondents under the two question orderings? We will find that the methods we learned in previous chapters are very useful in these settings. For example, sample proportions are well characterized by a nearly normal distribution when certain conditions are satisfied, making it possible to employ the usual confidence interval and hypothesis testing tools. In other instances, such as those with contingency tables or when sample size conditions are not met, we will use a different distribution, though the core ideas remain the same. 06: Inference for Categorical Data According to a New York Times / CBS News poll in June 2012, only about 44% of the American public approves of the job the Supreme Court is doing.1 This poll included responses of 976 adults. Identifying when the Sample Proportion is Nearly Normal A sample proportion can be described as a sample mean. If we represent each "success" as a 1 and each "failure" as a 0, then the sample proportion is the mean of these numerical outcomes: $\hat {p} = \dfrac {0 + 1 + 1 + \dots + 0}{976} = 0.44$ The distribution of $\hat {p}$ is nearly normal when the distribution of 0's and 1's is not too strongly skewed for the sample size. The most common guideline for sample size and skew when working with proportions is to ensure that we expect to observe a minimum number of successes and failures, typically at least 10 of each. 1nytimes.com/2012/06/08/us/politics/44-percent-of-americans-approve-of-supreme-court-in-new-poll.html Conditions for the sampling distribution of $\hat {p}$ being nearly normal The sampling distribution for $\hat {p}$, taken from a sample of size n from a population with a true proportion p, is nearly normal when 1. the sample observations are independent and 2. we expected to see at least 10 successes and 10 failures in our sample, i.e. $np \ge 10$ and $n(1 - p) \ge 10$. This is called the success-failure condition. If these conditions are met, then the sampling distribution of $\hat {p}$ is nearly normal with mean p and standard error $SE_{\hat {p}} = \sqrt {\dfrac {p(1 - p)}{n}} \label{6.1}$ Typically we do not know the true proportion, $p$, so must substitute some value to check conditions and to estimate the standard error. For confidence intervals, usually $\hat {p}$ is used to check the success-failure condition and compute the standard error. For hypothesis tests, typically the null value - that is, the proportion claimed in the null hypothesis - is used in place of p. Examples are presented for each of these cases in Sections 6.1.2 and 6.1.3. TIP: Reminder on checking independence of observations If data come from a simple random sample and consist of less than 10% of the population, then the independence assumption is reasonable. Alternatively, if the data come from a random process, we must evaluate the independence condition more carefully. Confidence Intervals for a Proportion We may want a confidence interval for the proportion of Americans who approve of the job the Supreme Court is doing. 
Our point estimate, based on a sample of size n = 976 from the NYTimes/CBS poll, is $\hat {p} = 0.44$. To use the general confidence interval formula from Section 4.5, we must check the conditions to ensure that the sampling distribution of $\hat {p}$ is nearly normal. We also must determine the standard error of the estimate. The data are based on a simple random sample and consist of far fewer than 10% of the U.S. population, so independence is confirmed. The sample size must also be sufficiently large, which is checked via the success-failure condition: there were approximately $976 \times \hat {p} = 429$ "successes" and $976 \times (1- \hat {p}) = 547$ "failures" in the sample, both easily greater than 10. With the conditions met, we are assured that the sampling distribution of $\hat {p}$ is nearly normal. Next, a standard error for $\hat {p}$ is needed, and then we can employ the usual method to construct a confidence interval.

Exercise $1$ Estimate the standard error of $\hat {p} = 0.44$ using Equation \ref{6.1}. Because $p$ is unknown and the standard error is for a confidence interval, use $\hat {p}$ in place of $p$.

Answer $SE = \sqrt {\dfrac {p(1- p)}{n}} \approx \sqrt {\dfrac {0.44(1-0.44)}{976}} = 0.016 \nonumber$

Example $1$ Construct a 95% confidence interval for $p$, the proportion of Americans who approve of the job the Supreme Court is doing.

Solution Using the standard error estimate from Exercise $1$, the point estimate 0.44, and z* = 1.96 for a 95% confidence interval, the confidence interval may be computed as $\text {point estimate} \pm z^*SE \rightarrow 0.44 \pm 1.96 \times 0.016 \rightarrow (0.409, 0.471)$ We are 95% confident that the true proportion of Americans who approve of the job of the Supreme Court (in June 2012) is between 0.409 and 0.471. If the proportion has not changed since this poll, then we can say with high confidence that the job approval of the Supreme Court is below 50%.

Constructing a confidence interval for a proportion • Verify the observations are independent and also verify the success-failure condition using $\hat {p}$ and n. • If the conditions are met, the sampling distribution of $\hat {p}$ may be well-approximated by the normal model. • Construct the standard error using $\hat {p}$ in place of p and apply the general confidence interval formula.

Hypothesis Testing for a Proportion To apply the normal distribution framework in the context of a hypothesis test for a proportion, the independence and success-failure conditions must be satisfied. In a hypothesis test, the success-failure condition is checked using the null proportion: we verify $np_0$ and $n(1 - p_0)$ are at least 10, where $p_0$ is the null value.

Exercise $2$ Deborah Toohey is running for Congress, and her campaign manager claims she has more than 50% support from the district's electorate. Set up a one-sided hypothesis test to evaluate this claim.

Answer We want to know whether there is convincing evidence that the campaign manager is correct, so the hypotheses are • H0 : p = 0.50, • HA : p > 0.50.

Example $2$ A newspaper collects a simple random sample of 500 likely voters in the district and estimates Toohey's support to be 52%. Does this provide convincing evidence for the claim of Toohey's manager at the 5% significance level?

Solution Because this is a simple random sample that includes fewer than 10% of the population, the observations are independent.
In a one-proportion hypothesis test, the success-failure condition is checked using the null proportion, $p_0 = 0.5$: $np_0 = n(1 - p_0) = 500 \times 0.5 = 250 > 10$. With these conditions verified, the normal model may be applied to $\hat {p}$.

Next, the standard error can be computed. The null value is used again here, because this is a hypothesis test for a single proportion. \begin{align*} SE &= \sqrt {\dfrac {p_0 \times (1 - p_0)}{n}} \\[5pt] &= \sqrt {\dfrac {0.5 (1 - 0.5)}{500}} = 0.022 \end{align*}

A picture of the normal model is shown in Figure $1$ with the p-value represented by the shaded region. Based on the normal model, the test statistic can be computed as the Z score of the point estimate: \begin{align*} Z &= \dfrac {\text {point estimate - null value}}{SE} \\[5pt] &= \dfrac {0.52 - 0.50}{0.022} = 0.89 \end{align*}

The upper tail area, representing the p-value, is 0.1867. Because the p-value is larger than 0.05, we do not reject the null hypothesis, and we do not find convincing evidence to support the campaign manager's claim.

Hypothesis test for a proportion Set up hypotheses and verify the conditions using the null value, $p_0$, to ensure $\hat {p}$ is nearly normal under H0. If the conditions hold, construct the standard error, again using $p_0$, and show the p-value in a drawing. Lastly, compute the p-value and evaluate the hypotheses.

Choosing a sample size when estimating a proportion We first encountered sample size computations in Section 4.6, which considered the case of estimating a single mean. We found that these computations were helpful in planning a study to control the size of the standard error of a point estimate. The task was to find a sample size n so that the sample mean would be within some margin of error m of the actual mean with a certain level of confidence. For example, the margin of error for a point estimate using 95% confidence can be written as $1.96 \times SE$. We set up a general equation to represent the problem: $ME = z^*SE \le m$ where ME represented the actual margin of error and $z^*$ was chosen to correspond to the confidence level. The standard error formula is specified to correspond to the particular setting. For instance, in the case of means, the standard error was given as $\dfrac {\sigma}{\sqrt {n}}$. In the case of a single proportion, we use $\sqrt {\dfrac {p(1 - p)}{n}}$ for the standard error.

Planning a sample size before collecting data is equally important when estimating a proportion. For instance, if we are conducting a university survey to determine whether students support a \$200 per year increase in fees to pay for a new football stadium, how big of a sample is needed to be sure the margin of error is less than 0.04 using a 95% confidence level?

Example $3$ Find the smallest sample size n so that the margin of error of the point estimate $\hat {p}$ will be no larger than $m = 0.04$ when using a 95% confidence interval.

Solution For a 95% confidence level, the value z* corresponds to 1.96, and we can write the margin of error expression as follows: $ME = z^*SE = 1.96 \times \sqrt {\dfrac {p(1 - p)}{n}} \le 0.04$ There are two unknowns in the equation: p and n. If we have an estimate of p, perhaps from a similar survey, we could use that value. If we have no such estimate, we must use some other value for p.
It turns out that the margin of error is largest when p is 0.5, so we typically use this worst case estimate if no other estimate is available: $1.96 \times \sqrt {\dfrac {0.5(1 - 0.5)}{n}} \le 0.04$ $1.96^2 \times \dfrac {0.5(1 - 0.5)}{n} \le 0.04^2$ $1.96^2 \times \dfrac {0.5(1 - 0.5)}{0.04^2} \le n$ $600.25 \le n$ We would need at least 600.25 participants, which means we need 601 participants or more, to ensure the sample proportion is within 0.04 of the true proportion with 95% confidence. No estimate of the true proportion is required in sample size computations for a proportion, whereas an estimate of the standard deviation is always needed when computing a sample size for a margin of error for the sample mean. However, if we have an estimate of the proportion, we should use it in place of the worst case estimate of the proportion, 0.5. Example $4$ A manager is about to oversee the mass production of a new tire model in her factory, and she would like to estimate what proportion of these tires will be rejected through quality control. The quality control team has monitored the last three tire models produced by the factory, failing 1.7% of tires in the first model, 6.2% of the second model, and 1.3% of the third model. The manager would like to examine enough tires to estimate the failure rate of the new tire model to within about 2% with a 90% confidence level. 1. There are three different failure rates to choose from. Perform the sample size computation for each separately, and identify three sample sizes to consider. 2. The sample sizes in (b) vary widely. Which of the three would you suggest using? What would influence your choice? Solution (a) For the 1.7% estimate of p, we estimate the appropriate sample size as follows: $1.65 \times \sqrt {\dfrac {p(1 - p)}{n}} \approx 1.65 \times \sqrt {\dfrac {0.017(1 - 0.017)}{n}} \le 0.02 \rightarrow n \ge 113.7 \nonumber$ Using the estimate from the first model, we would suggest examining 114 tires (round up!). A similar computation can be accomplished using 0.062 and 0.013 for p: 396 and 88. (b) We could examine which of the old models is most like the new model, then choose the corresponding sample size. Or if two of the previous estimates are based on small samples while the other is based on a larger sample, we should consider the value corresponding to the larger sample. (Answers will vary.) Exercise $4$ A recent estimate of Congress' approval rating was 17%.5 What sample size does this estimate suggest we should use for a margin of error of 0.04 with 95% confidence? Answer We complete the same computations as before, except now we use 0.17 instead of 0.5 for p: $1.96 \times \sqrt {\dfrac {p(1 - p)}{n}} \approx 1.96 \times \sqrt {\dfrac {0.17(1 - 0.17)}{n}} \le 0.04 \rightarrow n \ge 338.8 \nonumber$ A sample size of 339 or more would be reasonable. Contributors • David M Diez (Google/YouTube), Christopher D Barr (Harvard School of Public Health), Mine Çetinkaya-Rundel (Duke University)
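As a practical aside, the calculations in this section are easy to reproduce with a few lines of Python. The sketch below (an illustration only; it assumes SciPy is available) reruns the Supreme Court confidence interval, the Toohey hypothesis test, and the margin-of-error sample size computation.

from math import sqrt, ceil
from scipy.stats import norm

# 95% confidence interval for the Supreme Court approval proportion.
p_hat, n = 0.44, 976
se = sqrt(p_hat * (1 - p_hat) / n)                 # about 0.016
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)        # about (0.409, 0.471)

# One-sided test of H0: p = 0.50 vs HA: p > 0.50 for the Toohey poll.
p0, p_hat2, n2 = 0.50, 0.52, 500
se0 = sqrt(p0 * (1 - p0) / n2)                     # about 0.022 (uses the null value)
z = (p_hat2 - p0) / se0                            # about 0.89
p_value = norm.sf(z)                               # upper tail area, about 0.19

# Smallest n so the 95% margin of error is at most 0.04, using the worst case p = 0.5.
m, z_star, p_plan = 0.04, 1.96, 0.5
n_needed = ceil(z_star**2 * p_plan * (1 - p_plan) / m**2)   # 601

print(ci, z, p_value, n_needed)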
We would like to make conclusions about the difference in two population proportions: $p_1 - p_2$. We consider three examples. In the first, we compare the approval of the 2010 healthcare law under two different question phrasings. In the second application, a company weighs whether they should switch to a higher quality parts manufacturer. In the last example, we examine the cancer risk to dogs from the use of yard herbicides. In our investigations, we first identify a reasonable point estimate of $p_1 - p_2$ based on the sample. You may have already guessed its form: $\hat {p}_1 - \hat {p}_2$. Next, in each example we verify that the point estimate follows the normal model by checking certain conditions. Finally, we compute the estimate's standard error and apply our inferential framework. Sample Distribution of the Difference of Two Proportions We must check two conditions before applying the normal model to $\hat {p}_1 - \hat {p}_2$. First, the sampling distribution for each sample proportion must be nearly normal, and secondly, the samples must be independent. Under these two conditions, the sampling distribution of $\hat {p}_1 - \hat {p}_2$ may be well approximated using the normal model. Conditions for the sampling distribution of $\hat {p}_1 - \hat {p}_2$ to be normal The difference $\hat {p}_1 - \hat {p}_2$ tends to follow a normal model when each proportion separately follows a normal model, and the samples are independent. The standard error of the difference in sample proportions is \begin{align} SE_{\hat {p}_1 - \hat {p}_2} &= \sqrt {SE^2_{\hat {p}_1} + SE^2_{\hat {p}_2}} \[5pt] &= \sqrt {\dfrac {p_1(1 - p_1)}{n_1} + \dfrac {p_2(1 - p_2)}{n_2}} \label {6.9} \end{align} where $p_1$ and $p_2$ represent the population proportions, and n1 and n2 represent the sample sizes. For the difference in two means, the standard error formula took the following form: $SE_{\hat {x}_1 - \hat {x}_2} = \sqrt {SE^2_{\hat {x}_1} + SE^2_{\hat {x}_2}}$ The standard error for the difference in two proportions takes a similar form. The reasons behind this similarity are rooted in the probability theory of Section 2.4, which is described for this context in Exercise 5.14. 5www.gallup.com/poll/155144/Congress-Approval-June.aspx Table $1$: Results for a Pew Research Center poll where the ordering of two statements in a question regarding healthcare were randomized. Sample size (ni) Approve law (%) Disapprove law (%) Other "people who cannot afford it will receive financial help from the government" is given second 771 47 49 3 "people who do not buy it will pay a penalty" is given second 732 34 63 3 Intervals and tests for $p_1 - p_2$ In the setting of confidence intervals, the sample proportions are used to verify the success/failure condition and also compute standard error, just as was the case with a single proportion. Example $1$ The way a question is phrased can influence a person's response. For example, Pew Research Center conducted a survey with the following question:7 As you may know, by 2014 nearly all Americans will be required to have health insurance. [People who do not buy insurance will pay a penalty] while [People who cannot afford it will receive financial help from the government]. Do you approve or disapprove of this policy? For each randomly sampled respondent, the statements in brackets were randomized: either they were kept in the order given above, or the two statements were reversed. Table 6.2 shows the results of this experiment. 
Create and interpret a 90% confidence interval of the difference in approval.

Solution First the conditions must be verified. Because each group is a simple random sample from less than 10% of the population, the observations are independent, both within the samples and between the samples. The success-failure condition also holds for each sample. Because all conditions are met, the normal model can be used for the point estimate of the difference in support, where $p_1$ corresponds to the original ordering and $p_2$ to the reversed ordering: $\hat {p}_1 - \hat {p}_2 = 0.47 - 0.34 = 0.13$ The standard error may be computed from Equation \ref{6.9} using the sample proportions: $SE \approx \sqrt {\dfrac {0.47(1 - 0.47)}{771} + \dfrac {0.34(1 - 0.34)}{732}} = 0.025$ For a 90% confidence interval, we use z* = 1.65: $\text {point estimate} \pm z^*SE \approx 0.13 \pm 1.65 \times 0.025 \rightarrow (0.09, 0.17)$ We are 90% confident that the approval rating for the 2010 healthcare law changes between 9% and 17% due to the ordering of the two statements in the survey question. The Pew Research Center reported that this modestly large difference suggests that the opinions of much of the public are still fluid on the health insurance mandate.

7www.people-press.org/2012/03/26/public-remains-split-on-health-care-bill-opposed-to-mandate/. Sample sizes for each polling group are approximate.

Exercise $1$ A remote control car company is considering a new manufacturer for wheel gears. The new manufacturer would be more expensive but their higher quality gears are more reliable, resulting in happier customers and fewer warranty claims. However, management must be convinced that the more expensive gears are worth the conversion before they approve the switch. If there is strong evidence of a more than 3% improvement in the percent of gears that pass inspection, management says they will switch suppliers, otherwise they will maintain the current supplier. Set up appropriate hypotheses for the test.

Answer H0: The higher quality gears will pass inspection no more than 3% more often than the standard quality gears, $p_{highQ} - p_{standard} = 0.03$. HA: The higher quality gears will pass inspection more than 3% more often than the standard quality gears, $p_{highQ} - p_{standard} > 0.03$.

Example $2$ The quality control engineer from the exercise above collects a sample of gears, examining 1000 gears from each company and finds that 899 gears pass inspection from the current supplier and 958 pass inspection from the prospective supplier. Using these data, evaluate the hypotheses set up above using a significance level of 5%.

Solution First, we check the conditions. The sample is not necessarily random, so to proceed we must assume the gears are all independent; for this sample we will suppose this assumption is reasonable, but the engineer would be more knowledgeable as to whether this assumption is appropriate. The success-failure condition also holds for each sample. Thus, the difference in sample proportions, 0.958 - 0.899 = 0.059, can be said to come from a nearly normal distribution. The standard error can be found using Equation \ref{6.9}: $SE = \sqrt { \dfrac {0.958(1 - 0.958)}{1000} + \dfrac {0.899(1 - 0.899)}{1000}} = 0.0114$ In this hypothesis test, the sample proportions were used. We will discuss this choice more in Section 6.2.3. Next, we compute the test statistic and use it to find the p-value, which is depicted in Figure $1$. $Z = \dfrac {\text {point estimate - null value}}{SE} = \dfrac {0.059 - 0.03}{0.0114} = 2.54$ Using the normal model for this test statistic, we identify the right tail area as 0.006.
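The arithmetic in this solution can be checked with a short Python sketch (an aside, not part of the original text; it assumes SciPy is available), using the two sample proportions and the null difference of 0.03.

from math import sqrt
from scipy.stats import norm

p1, p2, n1, n2 = 0.958, 0.899, 1000, 1000            # prospective and current suppliers
se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)   # about 0.0114
z = ((p1 - p2) - 0.03) / se                          # about 2.5
p_value = norm.sf(z)                                 # upper tail area, about 0.006
print(se, z, p_value)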
Since this is a one-sided test, this single tail area is also the p-value, and we reject the null hypothesis because 0.006 is less than 0.05. That is, we have statistically significant evidence that the higher quality gears actually do pass inspection more than 3% more often than the currently used gears. Based on these results, management will approve the switch to the new supplier.

Figure $1$: Distribution of the test statistic if the null hypothesis was true. The p-value is represented by the shaded area.

Hypothesis testing when H0: $p_1 = p_2$ Here we use a new example to examine a special estimate of standard error when H0: $p_1 = p_2$. We investigate whether there is an increased risk of cancer in dogs that are exposed to the herbicide 2,4-dichlorophenoxyacetic acid (2,4-D). A study in 1994 examined 491 dogs that had developed cancer and 945 dogs as a control group.9 Of these two groups, researchers identified which dogs had been exposed to 2,4-D in their owner's yard. The results are shown in Table $2$.

Table $2$: Summary results for cancer in dogs and the use of 2,4-D by the dog's owner. 2,4-D: 191 cancer, 304 no cancer. No 2,4-D: 300 cancer, 641 no cancer.

Exercise $2$ Is this study an experiment or an observational study?

Answer The owners were not instructed to apply or not apply the herbicide, so this is an observational study. This question was especially tricky because one group was called the control group, which is a term usually seen in experiments.

Exercise $3$ Set up hypotheses to test whether 2,4-D and the occurrence of cancer in dogs are related. Use a one-sided test and compare across the cancer and no cancer groups.

9Hayes HM, Tarone RE, Cantor KP, Jessen CR, McCurnin DM, and Richardson RC. 1991. Case-Control Study of Canine Malignant Lymphoma: Positive Association With Dog Owner's Use of 2,4-Dichlorophenoxyacetic Acid Herbicides. Journal of the National Cancer Institute 83(17):1226-1231.

Answer Using the proportions within the cancer and no cancer groups may seem odd. We intuitively may desire to compare the fraction of dogs with cancer in the 2,4-D and no 2,4-D groups, since the herbicide is an explanatory variable. However, the cancer rates in each group do not necessarily reflect the cancer rates in reality due to the way the data were collected. For this reason, computing cancer rates may greatly alarm dog owners. • H0: the proportion of dogs with exposure to 2,4-D is the same in "cancer" and "no cancer" dogs, $p_c-p_n = 0$. • HA: dogs with cancer are more likely to have been exposed to 2,4-D than dogs without cancer, $p_c-p_n > 0$.

Example $3$ Are the conditions met to use the normal model and make inference on the results? (1) It is unclear whether this is a random sample. However, if we believe the dogs in both the cancer and no cancer groups are representative of each respective population and that the dogs in the study do not interact in any way, then we may find it reasonable to assume independence between observations. (2) The success-failure condition holds for each sample. Under the assumption of independence, we can use the normal model and make statements regarding the canine population based on the data.
In your hypotheses for Exercise $3$, the null is that the proportion of dogs with exposure to 2,4-D is the same in each group. The point estimate of the difference in sample proportions is $\hat {p}_c - \hat {p}_n = 0.067$. To identify the p-value for this test, we first check conditions (see the previous example) and compute the standard error of the difference: $SE = \sqrt {\dfrac {p_c(1 - p_c)}{n_c} + \dfrac {p_n(1 - p_n)}{n_n}}$

In a hypothesis test, the distribution of the test statistic is always examined as though the null hypothesis is true, i.e. in this case, $p_c = p_n$. The standard error formula should reflect this equality in the null hypothesis. We will use p to represent the common rate of dogs that are exposed to 2,4-D in the two groups: $SE = \sqrt {\dfrac {p(1 - p)}{n_c} + \dfrac {p(1 - p)}{n_n}}$

We don't know the exposure rate, p, but we can obtain a good estimate of it by pooling the results of both samples: $\hat {p} = \dfrac {\text {# of "successes"}}{\text {# of cases}} = \dfrac {191 + 304}{191 + 300 + 304 + 641} = 0.345$ This is called the pooled estimate of the sample proportion, and we use it to compute the standard error when the null hypothesis is that $p_1 = p_2$ (e.g. $p_c = p_n$ or $p_c - p_n = 0$). We also typically use it to verify the success-failure condition.

Pooled estimate of a proportion When the null hypothesis is $p_1 = p_2$, it is useful to find the pooled estimate of the shared proportion: $\hat {p} = \dfrac {\text {number of "successes"}}{\text {number of cases}} = \dfrac {\hat {p}_1n_1 + \hat {p}_2n_2}{n_1 + n_2}$ Here $\hat {p}_1n_1$ represents the number of successes in sample 1 since $\hat {p}_1 = \dfrac {\text {number of successes in sample 1}}{n_1}$ Similarly, $\hat {p}_2n_2$ represents the number of successes in sample 2.

Use the pooled proportion estimate when H0 is $p_1 = p_2$ When the null hypothesis suggests the proportions are equal, we use the pooled proportion estimate ($\hat {p}$) to verify the success-failure condition and also to estimate the standard error: $SE = \sqrt {\dfrac {\hat {p}(1 - \hat {p})}{n_c} + \dfrac {\hat {p}(1 - \hat {p})}{n_n}} \label {6.16}$

Exercise $4$ Using Equation \ref{6.16}, $\hat {p} = 0.345$, $n_1 = 491$, and $n_2 = 945$, verify the estimate for the standard error is SE = 0.026. Next, complete the hypothesis test using a significance level of 0.05. Be certain to draw a picture, compute the p-value, and state your conclusion in both statistical language and plain language.

Answer Compute the test statistic: $Z = \dfrac {\text {point estimate - null value}}{SE} = \dfrac {0.067 - 0}{0.026} = 2.58 \nonumber$ We leave the picture to you. Looking up Z = 2.58 in the normal probability table gives 0.9951. However, this is the lower tail, and the upper tail represents the p-value: 1 - 0.9951 = 0.0049. We reject the null hypothesis and conclude that dogs getting cancer and owners using 2,4-D are associated.

Contributors • David M Diez (Google/YouTube), Christopher D Barr (Harvard School of Public Health), Mine Çetinkaya-Rundel (Duke University)
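The pooled-proportion test for the dog data can also be reproduced numerically. The sketch below (illustrative only; it assumes SciPy is available) recomputes the pooled proportion, the standard error from Equation 6.16, and the one-sided p-value.

from math import sqrt
from scipy.stats import norm

# Counts from Table 2: exposure to 2,4-D among cancer and no-cancer dogs.
exposed_cancer, n_cancer = 191, 491
exposed_none, n_none = 304, 945

p_c = exposed_cancer / n_cancer                                  # about 0.389
p_n = exposed_none / n_none                                      # about 0.322
p_pool = (exposed_cancer + exposed_none) / (n_cancer + n_none)   # about 0.345

se = sqrt(p_pool * (1 - p_pool) / n_cancer + p_pool * (1 - p_pool) / n_none)  # about 0.026
z = (p_c - p_n) / se             # about 2.55 (the rounded values in the text give 2.58)
p_value = norm.sf(z)             # one-sided p-value, about 0.005
print(p_pool, se, z, p_value)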
In this section, we develop a method for assessing a null model when the data are binned. This technique is commonly used in two circumstances: • Given a sample of cases that can be classified into several groups, determine if the sample is representative of the general population. • Evaluate whether data resemble a particular distribution, such as a normal distribution or a geometric distribution. Each of these scenarios can be addressed using the same statistical test: a chi-square test.

In the first case, we consider data from a random sample of 275 jurors in a small county. Jurors identified their racial group, as shown in Table 6.5, and we would like to determine if these jurors are racially representative of the population. If the jury is representative of the population, then the proportions in the sample should roughly reflect the population of eligible jurors, i.e. registered voters.

Table 6.5: Representation by race in a city's juries and population. Race: White, Black, Hispanic, Other, Total. Representation in juries: 205, 26, 25, 19, 275. Registered voters: 0.72, 0.07, 0.12, 0.09, 1.00.

While the proportions in the juries do not precisely represent the population proportions, it is unclear whether these data provide convincing evidence that the sample is not representative. If the jurors really were randomly sampled from the registered voters, we might expect small differences due to chance. However, unusually large differences may provide convincing evidence that the juries were not representative.

A second application, assessing the fit of a distribution, is presented at the end of this section. Daily stock returns from the S&P500 for the years 1990-2011 are used to assess whether stock activity each day is independent of the stock's behavior on previous days. In these problems, we would like to examine all bins simultaneously, not simply compare one or two bins at a time, which will require us to develop a new test statistic.

Creating a test statistic for one-way tables

Example $1$: Of the people in the city, 275 served on a jury. If the individuals are randomly selected to serve on a jury, about how many of the 275 people would we expect to be white? How many would we expect to be black?

Solution About 72% of the population is white, so we would expect about 72% of the jurors to be white: $0.72 \times 275 = 198$. Similarly, we would expect about 7% of the jurors to be black, which would correspond to about $0.07 \times 275 = 19.25$ black jurors.

Exercise $1$ Twelve percent of the population is Hispanic and 9% represent other races. How many of the 275 jurors would we expect to be Hispanic or from another race?

Answer Answers can be found in Table 6.6.

Table 6.6: Actual and expected make-up of the jurors. Race: White, Black, Hispanic, Other, Total. Observed data: 205, 26, 25, 19, 275. Expected count: 198, 19.25, 33, 24.75, 275.

The sample proportion represented from each race among the 275 jurors was not a precise match for any ethnic group. While some sampling variation is expected, we would expect the sample proportions to be fairly similar to the population proportions if there is no bias on juries.
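The expected counts in Table 6.6 come from multiplying each registered-voter proportion by the total number of jurors. A tiny Python sketch (illustrative, not from the original text) makes that calculation explicit.

observed = [205, 26, 25, 19]                     # White, Black, Hispanic, Other jurors
population_props = [0.72, 0.07, 0.12, 0.09]      # registered-voter proportions
expected = [p * 275 for p in population_props]   # [198.0, 19.25, 33.0, 24.75]
print(expected)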
We need to test whether the differences are strong enough to provide convincing evidence that the jurors are not a random sample. These ideas can be organized into hypotheses: • H0: The jurors are a random sample, i.e. there is no racial bias in who serves on a jury, and the observed counts reflect natural sampling fluctuation. • HA: The jurors are not randomly sampled, i.e. there is racial bias in juror selection. To evaluate these hypotheses, we quantify how different the observed counts are from the expected counts. Strong evidence for the alternative hypothesis would come in the form of unusually large deviations in the groups from what would be expected based on sampling variation alone.

The chi-square test statistic In previous hypothesis tests, we constructed a test statistic of the following form: $\dfrac {\text {point estimate - null value}}{\text {SE of point estimate}}$ This construction was based on (1) identifying the difference between a point estimate and an expected value if the null hypothesis was true, and (2) standardizing that difference using the standard error of the point estimate. These two ideas will help in the construction of an appropriate test statistic for count data.

Our strategy will be to first compute the difference between the observed counts and the counts we would expect if the null hypothesis was true, then we will standardize the difference: $Z_1 = \dfrac {\text {observed white count - null white count}}{\text {SE of observed white count}}$ The standard error for the point estimate of the count in binned data is the square root of the count under the null.13 Therefore: $Z_1 = \dfrac {205 - 198}{\sqrt {198}} = 0.50$ The fraction is very similar to previous test statistics: first compute a difference, then standardize it. These computations should also be completed for the black, Hispanic, and other groups: $\begin {matrix} \text{Black} & \text{Hispanic} & \text{Other} \\ Z_2 = \dfrac {26 - 19.25}{\sqrt {19.25}} = 1.54 & Z_3 = \dfrac {25 - 33}{\sqrt {33}} = -1.39 & Z_4 = \dfrac {19 - 24.75}{\sqrt {24.75}} = -1.16 \end {matrix}$

We would like to use a single test statistic to determine if these four standardized differences are irregularly far from zero. That is, $Z_1, Z_2, Z_3$, and $Z_4$ must be combined somehow to help determine if they - as a group - tend to be unusually far from zero. A first thought might be to take the absolute value of these four standardized differences and add them up: $|Z_1| + |Z_2| + |Z_3| + |Z_4| = 4.58$ Indeed, this does give one number summarizing how far the actual counts are from what was expected. However, it is more common to add the squared values: $Z^2_1 + Z^2_2 + Z^2_3 + Z^2_4 = 5.89$ Squaring each standardized difference before adding them together does two things: • Any standardized difference that is squared will now be positive. • Differences that already look unusual - e.g. a standardized difference of 2.5 - will become much larger after being squared.

The test statistic $X^2$, which is the sum of the $Z^2$ values, is generally used for these reasons. We can also write an equation for $X^2$ using the observed counts and null counts: $X^2 = \dfrac {(\text{observed count}_1 - \text{null count}_1)^2}{\text{null count}_1} + \dots + \dfrac {(\text{observed count}_4 - \text{null count}_4)^2}{\text{null count}_4}$

13Using some of the rules learned in earlier chapters, we might think that the standard error would be $\sqrt{np(1 - p)}$, where n is the sample size and p is the proportion in the population.
This would be correct if we were looking only at one count. However, we are computing many standardized differences and adding them together. It can be shown - though not here - that the square root of the count is a better way to standardize the count differences.

The final number $X^2$ summarizes how strongly the observed counts tend to deviate from the null counts. In Section 6.3.4, we will see that if the null hypothesis is true, then $X^2$ follows a new distribution called a chi-square distribution. Using this distribution, we will be able to obtain a p-value to evaluate the hypotheses.

The chi-square distribution and finding areas The chi-square distribution is sometimes used to characterize data sets and statistics that are always positive and typically right skewed. Recall the normal distribution had two parameters - mean and standard deviation - that could be used to describe its exact characteristics. The chi-square distribution has just one parameter called degrees of freedom (df), which influences the shape, center, and spread of the distribution.

Exercise $1$ Figure 6.7 shows three chi-square distributions. (a) How does the center of the distribution change when the degrees of freedom is larger? (b) What about the variability (spread)? (c) How does the shape change?

Answer (a) The center becomes larger. If we look carefully, we can see that the center of each distribution is equal to the distribution's degrees of freedom. (b) The variability increases as the degrees of freedom increases. (c) The distribution is very strongly skewed for df = 2, and then the distributions become more symmetric for the larger degrees of freedom df = 4 and df = 9. We would see this trend continue if we examined distributions with even larger degrees of freedom.

Figure 6.7 and the exercise above demonstrate three general properties of chi-square distributions as the degrees of freedom increases: the distribution becomes more symmetric, the center moves to the right, and the variability inflates. Our principal interest in the chi-square distribution is the calculation of p-values, which (as we have seen before) is related to finding the relevant area in the tail of a distribution. To do so, a new table is needed: the chi-square table, partially shown in Table 6.8. A more complete table is presented in Appendix B.3 on page 412. This table is very similar to the t table from Sections 5.3 and 5.4: we identify a range for the area, and we examine a particular row for distributions with different degrees of freedom. One important difference from the t table is that the chi-square table only provides upper tail values.

Table 6.8: A section of the chi-square table. A complete table is in Appendix B.3 on page 412.
Upper tail   0.3     0.2     0.1     0.05    0.02    0.01    0.005   0.001
df 1         1.07    1.64    2.71    3.84    5.41    6.63    7.88    10.83
df 2         2.41    3.22    4.61    5.99    7.82    9.21    10.60   13.82
df 3         3.66    4.64    6.25    7.81    9.84    11.34   12.84   16.27
df 4         4.88    5.99    7.78    9.49    11.67   13.28   14.86   18.47
df 5         6.06    7.29    9.24    11.07   13.39   15.09   16.75   20.52
df 6         7.23    8.56    10.64   12.59   15.03   16.81   18.55   22.46
df 7         8.38    9.80    12.02   14.07   16.62   18.48   20.28   24.32

Example 6.21 Figure 6.9(a) shows a chi-square distribution with 3 degrees of freedom and an upper shaded tail starting at 6.25. Use Table 6.8 to estimate the shaded area. This distribution has three degrees of freedom, so only the row with 3 degrees of freedom (df) is relevant. Next, we see that the value 6.25 falls in the column with upper tail area 0.1. That is, the shaded upper tail of Figure 6.9(a) has area 0.1.
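Software can substitute for the table lookup and return exact upper tail areas. The sketch below (an aside; it assumes SciPy is available) reproduces the area from the example above using scipy.stats.chi2.

from scipy.stats import chi2

# Upper tail area beyond 6.25 for a chi-square distribution with 3 degrees of freedom.
area = chi2.sf(6.25, df=3)      # about 0.100, matching the 0.1 column of Table 6.8
print(area)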
Example 6.22 We rarely observe the exact value in the table. For instance, Figure 6.9(b) shows the upper tail of a chi-square distribution with 2 degrees of freedom. The bound for this upper tail is at 4.3, which does not fall in Table 6.8. Find the approximate tail area. The cutoff 4.3 falls between the second and third columns in the 2 degrees of freedom row. Because these columns correspond to tail areas of 0.2 and 0.1, we can be certain that the area shaded in Figure 6.9(b) is between 0.1 and 0.2. Example 6.23 Figure 6.9(c) shows an upper tail for a chi-square distribution with 5 degrees of freedom and a cutoff of 5.1. Find the tail area. Looking in the row with 5 df, 5.1 falls below the smallest cutoff for this row (6.06). That means we can only say that the area is greater than 0.3. Exercise 6.24 Figure 6.9(d) shows a cutoff of 11.7 on a chi-square distribution with 7 degrees of freedom. Find the area of the upper tail.15 Exercise 6.25 Figure 6.9(e) shows a cutoff of 10 on a chi-square distribution with 4 degrees of freedom. Find the area of the upper tail.16 Exercise 6.26 Figure 6.9(f) shows a cutoff of 9.21 with a chi-square distribution with 3 df. Find the area of the upper tail.17 15The value 11.7 falls between 9.80 and 12.02 in the 7 df row. Thus, the area is between 0.1 and 0.2. 16The area is between 0.02 and 0.05. 17Between 0.02 and 0.05. Finding a p-value for a chi-square distribution In Section 6.3.2, we identified a new test statistic ($X^2$) within the context of assessing whether there was evidence of racial bias in how jurors were sampled. The null hypothesis represented the claim that jurors were randomly sampled and there was no racial bias. The alternative hypothesis was that there was racial bias in how the jurors were sampled. We determined that a large $X^2$ value would suggest strong evidence favoring the alternative hypothesis: that there was racial bias. However, we could not quantify what the chance was of observing such a large test statistic ($X^2 = 5.89$) if the null hypothesis actually was true. This is where the chi-square distribution becomes useful. If the null hypothesis was true and there was no racial bias, then $X^2$ would follow a chi-square distribution, with three degrees of freedom in this case. Under certain conditions, the statistic $X^2$ follows a chi-square distribution with k - 1 degrees of freedom, where k is the number of bins. Example $1$: How many categories were there in the juror example? How many degrees of freedom should be associated with the chi-square distribution used for $X^2$? Solution In the jurors example, there were k = 4 categories: white, black, Hispanic, and other. According to the rule above, the test statistic $X^2$ should then follow a chi-square distribution with k - 1 = 3 degrees of freedom if H0 is true. Just like we checked sample size conditions to use the normal model in earlier sections, we must also check a sample size condition to safely apply the chi-square distribution for $X^2$. Each expected count must be at least 5. In the juror example, the expected counts were 198, 19.25, 33, and 24.75, all easily above 5, so we can apply the chi-square model to the test statistic, $X^2 = 5.89$. Example $1$: If the null hypothesis is true, the test statistic $X^2 = 5.89$ would be closely associated with a chi-square distribution with three degrees of freedom. Using this distribution and test statistic, identify the p-value. The chi-square distribution and p-value are shown in Figure 6.10. 
Because larger chi-square values correspond to stronger evidence against the null hypothesis, we shade the upper tail to represent the p-value. Using the chi-square table in Appendix B.3 or the short table on page 277, we can determine that the area is between 0.1 and 0.2. That is, the p-value is larger than 0.1 but smaller than 0.2. Generally we do not reject the null hypothesis with such a large p-value. In other words, the data do not provide convincing evidence of racial bias in the juror selection. Chi-square test for one-way table Suppose we are to evaluate whether there is convincing evidence that a set of observed counts $O_1, O_2, \dots, O_k$ in k categories are unusually different from what might be expected under a null hypothesis. Call the expected counts that are based on the null hypothesis $E_1, E_2, \dots, E_k$. If each expected count is at least 5 and the null hypothesis is true, then the test statistic below follows a chi-square distribution with k - 1 degrees of freedom: $X^2 = \dfrac {(O_1 - E_1)^2}{E_1} + \dfrac {(O_2 - E_2)^2}{E_2} + \dots + \dfrac {(O_k - E_k)^2}{E_k}$ The p-value for this test statistic is found by looking at the upper tail of this chi-square distribution. We consider the upper tail because larger values of $X^2$ would provide greater evidence against the null hypothesis. Conditions for the chi-square test There are three conditions that must be checked before performing a chi-square test: • Independence. Each case that contributes a count to the table must be independent of all the other cases in the table. • Sample size / distribution. Each particular scenario (i.e. cell count) must have at least 5 expected cases. • Degrees of freedom We only apply the chi-square technique when the table is associated with a chi-square distribution with 2 or more degrees of freedom. Failing to check conditions may affect the test's error rates. When examining a table with just two bins, pick a single bin and use the one proportion methods introduced in Section 6.1. Evaluating goodness of fit for a distribution Section 3.3 would be useful background reading for this example, but it is not a prerequisite. We can apply our new chi-square testing framework to the second problem in this section: evaluating whether a certain statistical model ts a data set. Daily stock returns from the S&P500 for 1990-2011 can be used to assess whether stock activity each day is independent of the stock's behavior on previous days. This sounds like a very complex question, and it is, but a chi-square test can be used to study the problem. We will label each day as Up or Down (D) depending on whether the market was up or down that day. For example, consider the following changes in price, their new labels of up and down, and then the number of days that must be observed before each Up day: $\begin {matrix} \text {Change in price} & 2.52 & -1.46 & 0.51 & -4.07 & 3.36 & 1.10 & -5.46 & -1.03 & -2.99 & 1.71\ \text {Outcome} & Up & D & Up & D & Up & Up & D & D & D & Up\ \text {Days to Up} &1 & -& 2 & -& 2 & 1& -& - & - & 4 \end {matrix}$ If the days really are independent, then the number of days until a positive trading day should follow a geometric distribution. The geometric distribution describes the probability of waiting for the kth trial to observe the rst success. Here each up day (Up) represents a success, and down (D) days represent failures. In the data above, it took only one day until the market was up, so the first wait time was 1 day. 
It took two more days before we observed our next Up trading day, and two more for the third Up day. We would like to determine if these counts (1, 2, 2, 1, 4, and so on) follow the geometric distribution. Table 6.11 shows the number of waiting days for a positive trading day during 1990-2011for the S&P500. Table 6.11: Observed distribution of the waiting time until a positive trading day for the S&P500, 1990-2011. Days 1 2 3 4 5 6 7+ Total Observed 1532 760 338 194 74 33 17 2948 We consider how many days one must wait until observing an Up day on the S&P500 stock exchange. If the stock activity was independent from one day to the next and the probability of a positive trading day was constant, then we would expect this waiting time to follow a geometric distribution. We can organize this into a hypothesis framework: H0: The stock market being up or down on a given day is independent from all other days. We will consider the number of days that pass until an Up day is observed. Under this hypothesis, the number of days until an Up day should follow a geometric distribution. HA: The stock market being up or down on a given day is not independent from all other days. Since we know the number of days until an Up day would follow a geometric distribution under the null, we look for deviations from the geometric distribution, which would support the alternative hypothesis. There are important implications in our result for stock traders: if information from past trading days is useful in telling what will happen today, that information may provide an advantage over other traders. We consider data for the S&P500 from 1990 to 2011 and summarize the waiting times in Table 6.12 and Figure 6.13. The S&P500 was positive on 53.2% of those days. Because applying the chi-square framework requires expected counts to be at least 5, we have binned together all the cases where the waiting time was at least 7 days to ensure each expected count is well above this minimum. The actual data, shown in the Observed row in Table 6.12, can be compared to the expected counts from the Geometric Model row. The method for computing expected counts is discussed in Table 6.12. In general, the expected counts are determined by (1) identifying the null proportion associated with each Table 6.12: Distribution of the waiting time until a positive trading day. The expected counts based on the geometric model are shown in the last row. To find each expected count, we identify the probability of waiting D days based on the geometric model $(P(D) = (1 - 0.532)^{D-1}(0.532))$ and multiply by the total number of streaks, 2948. For example, waiting for three days occurs under the geometric model about $0.468^2 \times 0.532$ = 11.65% of the time, which corresponds to $0.1165 \times 2948 = 343$ streaks. Days 1 2 3 4 5 6 7+ Total Observed 1532 760 338 194 74 33 17 2948 Geometric Model 1569 734 343 161 75 35 31 2948 bin, then (2) multiplying each null proportion by the total count to obtain the expected counts. That is, this strategy identifies what proportion of the total count we would expect to be in each bin. Example 6.29 Do you notice any unusually large deviations in the graph? Can you tell if these deviations are due to chance just by looking? It is not obvious whether differences in the observed counts and the expected counts from the geometric distribution are significantly different. 
That is, it is not clear whether these deviations might be due to chance or whether they are so strong that the data provide convincing evidence against the null hypothesis. However, we can perform a chi-square test using the counts in Table 6.12.

Exercise 6.30 Table 6.12 provides a set of count data for waiting times ($O_1 = 1532, O_2 = 760,\dots$) and expected counts under the geometric distribution ($E_1 = 1569, E_2 = 734, \dots$). Compute the chi-square test statistic, $X^2$.18

Exercise 6.31 Because the expected counts are all at least 5, we can safely apply the chi-square distribution to $X^2$. However, how many degrees of freedom should we use?19

Example 6.32 If the observed counts follow the geometric model, then the chi-square test statistic $X^2 = 15.08$ would closely follow a chi-square distribution with df = 6. Using this information, compute a p-value.
Figure 6.14 shows the chi-square distribution, cutoff, and the shaded p-value. If we look up the statistic $X^2 = 15.08$ in Appendix B.3, we find that the p-value is between 0.01 and 0.02. In other words, we have sufficient evidence to reject the notion that the wait times follow a geometric distribution, i.e. trading days are not independent and past days may help predict what the stock market will do today.

Example 6.33 In Example 6.32, we rejected the null hypothesis that the trading days are independent. Why is this so important?
Because the data provided strong evidence that the geometric distribution is not appropriate, we reject the claim that trading days are independent. While it is not obvious how to exploit this information, it suggests there are some hidden patterns in the data that could be interesting and possibly useful to a stock trader.

18$X^2 = \dfrac {{(1532-1569)}^2}{1569} + \dfrac {{(760 - 734)}^2}{734} + \dots+ \dfrac {{(17 - 31)}^2}{31} = 15.08$
19There are k = 7 groups, so we use df = k - 1 = 6.
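To make the arithmetic above concrete, here is a minimal sketch (an addition, not part of the original text) of how the goodness-of-fit calculation for Table 6.12 could be reproduced; the counts are copied from the table, and the snippet assumes numpy and scipy are available.

```python
# Minimal sketch (not from the text): goodness-of-fit calculation for Table 6.12.
import numpy as np
from scipy import stats

observed = np.array([1532, 760, 338, 194, 74, 33, 17])   # waiting times 1, 2, ..., 6, 7+
expected = np.array([1569, 734, 343, 161, 75, 35, 31])   # geometric-model counts

x2 = np.sum((observed - expected) ** 2 / expected)       # chi-square statistic, about 15.08
df = len(observed) - 1                                    # k - 1 = 6
p_value = stats.chi2.sf(x2, df)                           # upper-tail area, just under 0.02

print(round(x2, 2), round(p_value, 3))
```

Under these inputs the statistic works out to roughly 15.08 with an upper-tail area between 0.01 and 0.02, consistent with the table lookup in Example 6.32.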
Google is constantly running experiments to test new search algorithms. For example, Google might test three algorithms using a sample of 10,000 google.com search queries. Table 6.15 shows an example of 10,000 queries split into three algorithm groups.20 The group sizes were specified before the start of the experiment to be 5000 for the current algorithm and 2500 for each test algorithm.

Table 6.15: Google experiment breakdown of test subjects into three search groups.
Search algorithm   Counts
current            5000
test 1             2500
test 2             2500
Total              10000

20Google regularly runs experiments in this manner to help improve their search engine. It is entirely possible that if you perform a search and so does your friend, that you will have different search results. While the data presented in this section resemble what might be encountered in a real experiment, these data are simulated.

Example 6.34 What is the ultimate goal of the Google experiment? What are the null and alternative hypotheses, in regular words?
The ultimate goal is to see whether there is a difference in the performance of the algorithms. The hypotheses can be described as the following:
• H0: The algorithms each perform equally well.
• HA: The algorithms do not perform equally well.

In this experiment, the explanatory variable is the search algorithm. However, an outcome variable is also needed. This outcome variable should somehow reflect whether the search results align with the user's interests. One possible way to quantify this is to determine whether (1) the user clicked one of the links provided and did not try a new search, or (2) the user performed a related search. Under scenario (1), we might think that the user was satisfied with the search results. Under scenario (2), the search results probably were not relevant, so the user tried a second search.
Table 6.16 provides the results from the experiment. These data are very similar to the count data in Section 6.3. However, now the different combinations of two variables are binned in a two-way table. In examining these data, we want to evaluate whether there is strong evidence that at least one algorithm is performing better than the others. To do so, we apply a chi-square test to this two-way table. The ideas of this test are similar to those ideas in the one-way table case. However, degrees of freedom and expected counts are computed a little differently than before.

Table 6.16: Results of the Google search algorithm experiment.
Search algorithm   current   test 1   test 2   Total
No new search      3511      1749     1818     7078
New search         1489      751      682      2922
Total              5000      2500     2500     10000

What is so different about one-way tables and two-way tables? A one-way table describes counts for each outcome in a single variable. A two-way table describes counts for combinations of outcomes for two variables. When we consider a two-way table, we often would like to know, are these variables related in any way? That is, are they dependent (versus independent)?
The hypothesis test for this Google experiment is really about assessing whether there is statistically significant evidence that the choice of the algorithm affects whether a user performs a second search. In other words, the goal is to check whether the search variable is independent of the algorithm variable.

Expected Counts in Two-way Tables
Example 6.35 From the experiment, we estimate the proportion of users who were satisfied with their initial search (no new search) as $\frac {7078}{10000} = 0.7078$.
If there really is no difference among the algorithms and 70.78% of people are satisfied with the search results, how many of the 5000 people in the "current algorithm" group would be expected to not perform a new search? About 70.78% of the 5000 would be satisfied with the initial search:
$0.7078 \times 5000 = 3539 \text{ users}$
That is, if there was no difference between the three groups, then we would expect 3539 of the current algorithm users not to perform a new search.

Exercise 6.36 Using the same rationale described in Example 6.35, about how many users in each test group would not perform a new search if the algorithms were equally helpful?21

21We would expect $0.7078 \times 2500 = 1769.5$. It is okay that this is a fraction.

We can compute the expected number of users who would perform a new search for each group using the same strategy employed in Example 6.35 and Exercise 6.36. These expected counts were used to construct Table 6.17, which is the same as Table 6.16, except now the expected counts have been added in parentheses.

Table 6.17: The observed counts and the (expected counts).
Search algorithm   current       test 1          test 2          Total
No new search      3511 (3539)   1749 (1769.5)   1818 (1769.5)   7078
New search         1489 (1461)   751 (730.5)     682 (730.5)     2922
Total              5000          2500            2500            10000

The examples and exercises above provided some help in computing expected counts. In general, expected counts for a two-way table may be computed using the row totals, column totals, and the table total. For instance, if there was no difference between the groups, then about 70.78% of each column should be in the first row:
$0.7078 \times \text {(column 1 total)} = 3539$
$0.7078 \times \text {(column 2 total)} = 1769.5$
$0.7078 \times \text {(column 3 total)} = 1769.5$
Looking back to how the fraction 0.7078 was computed - as the fraction of users who did not perform a new search ($\frac {7078}{10000}$) - these three expected counts could have been computed as
$\frac {\text {row 1 total}}{\text {table total}} \times \text {(column 1 total)} = 3539$
$\frac {\text {row 1 total}}{\text {table total}} \times \text {(column 2 total)} = 1769.5$
$\frac {\text {row 1 total}}{\text {table total}} \times \text {(column 3 total)} = 1769.5$
This leads us to a general formula for computing expected counts in a two-way table when we would like to test whether there is strong evidence of an association between the column variable and row variable.

Computing expected counts in a two-way table
To identify the expected count for the ith row and jth column, compute
$\text {Expected Count}_{\text{row i, col j}} = \frac {\text {(row i total)} \times \text {(column j total)}}{\text {table total}}$

The chi-square Test for Two-way Tables
The chi-square test statistic for a two-way table is found the same way it is found for a one-way table. For each table count, compute
General formula: $\frac {\text {(observed count - expected count)}^2}{\text {expected count}}$
Row 1, Col 1: $\frac {(3511 - 3539)^2}{3539} = 0.222$
Row 1, Col 2: $\frac {(1749 - 1769.5)^2}{1769.5} = 0.237$
$\vdots$
Row 2, Col 3: $\frac {(682 - 730.5)^2}{730.5} = 3.220$
Adding the computed value for each cell gives the chi-square test statistic $X^2$:
$X^2 = 0.222 + 0.237 + \dots + 3.220 = 6.120$
Just like before, this test statistic follows a chi-square distribution.
However, the degrees of freedom are computed a little differently for a two-way table.22 For two-way tables, the degrees of freedom is equal to
$df = \text {(number of rows minus 1)} \times \text {(number of columns minus 1)}$
In our example, the degrees of freedom parameter is
$df = (2 - 1) \times (3 - 1) = 2$
If the null hypothesis is true (i.e. the algorithms are equally useful), then the test statistic $X^2 = 6.12$ closely follows a chi-square distribution with 2 degrees of freedom. Using this information, we can compute the p-value for the test, which is depicted in Figure 6.18.

Definition: degrees of freedom for a two-way table
When applying the chi-square test to a two-way table, we use
$df = (R - 1) \times (C - 1)$
where R is the number of rows in the table and C is the number of columns.

22Recall: in the one-way table, the degrees of freedom was the number of cells minus 1.

TIP: Use two-proportion methods for 2-by-2 contingency tables
When analyzing 2-by-2 contingency tables, use the two-proportion methods introduced in Section 6.2.

Example 6.37 Compute the p-value and draw a conclusion about whether the search algorithms have different performances.
Solution
Looking in Appendix B.3 on page 412, we examine the row corresponding to 2 degrees of freedom. The test statistic, $X^2 = 6.120$, falls between the fourth and fifth columns, which means the p-value is between 0.02 and 0.05. Because we typically test at a significance level of $\alpha$ = 0.05 and the p-value is less than 0.05, the null hypothesis is rejected. That is, the data provide convincing evidence that there is some difference in performance among the algorithms.

Example 6.38 Table 6.19 summarizes the results of a Pew Research poll.23 We would like to determine if there are actually differences in the approval ratings of Barack Obama, Democrats in Congress, and Republicans in Congress. What are appropriate hypotheses for such a test?

Table 6.19: Pew Research poll results of a March 2012 poll.
             Obama   Congress Democrats   Congress Republicans   Total
Approve      842     736                  541                    2119
Disapprove   616     646                  842                    2104
Total        1458    1382                 1383                   4223

Solution
• H0: There is no difference in approval ratings between the three groups.
• HA: There is some difference in approval ratings between the three groups, e.g. perhaps Obama's approval differs from Democrats in Congress.

23See the Pew Research website: www.people-press.org/2012/03/14/romney-leads-gop-contest-trails-in-matchup-with-obama. The counts in Table 6.19 are approximate.

Exercise 6.39 A chi-square test for a two-way table may be used to test the hypotheses in Example 6.38. As a first step, compute the expected values for each of the six table cells.24

24The expected count for row one / column one is found by multiplying the row one total (2119) and column one total (1458), then dividing by the table total (4223): $\frac {2119 \times 1458}{4223} = 731.6$. Similarly for the first column and the second row: $\frac {2104 \times 1458}{4223} = 726.4$. Column 2: 693.5 and 688.5. Column 3: 694.0 and 689.0.

Exercise 6.40 Compute the chi-square test statistic.25

25For each cell, compute $\frac {\text {(obs - exp)}^2}{exp}$. For instance, the first row and first column: $\frac {(842-731.6)^2}{731.6} = 16.7$. Adding the results of each cell gives the chi-square test statistic: $X^2 = 16.7 + \dots + 34.0 = 106.4$.

Exercise 6.41 Because there are 2 rows and 3 columns, the degrees of freedom for the test is df = (2 - 1) × (3 - 1) = 2.
Use $X^2 = 106.4$, df = 2, and the chi-square table on page 412 to evaluate whether to reject the null hypothesis.26

26The test statistic is larger than the right-most column of the df = 2 row of the chi-square table, meaning the p-value is less than 0.001. That is, we reject the null hypothesis because the p-value is less than 0.05, and we conclude that Americans' approval differs across Democrats in Congress, Republicans in Congress, and the president.
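As a hedged illustration of the two-way table machinery in this section (an addition, not part of the original text), the sketch below applies scipy's chi2_contingency function to the Google counts in Table 6.16; the function and its return values are standard scipy, while the specific layout of the counts is copied from the table.

```python
# Minimal sketch (not from the text): two-way chi-square test for the Google
# experiment in Table 6.16, using scipy's chi2_contingency.
import numpy as np
from scipy.stats import chi2_contingency

# rows: no new search / new search; columns: current, test 1, test 2
counts = np.array([[3511, 1749, 1818],
                   [1489,  751,  682]])

x2, p_value, df, expected = chi2_contingency(counts)

print(round(x2, 2), df, round(p_value, 3))   # about 6.12, df = 2, p-value near 0.05
print(expected)                              # matches the expected counts in Table 6.17
```

The returned expected counts are exactly the (row total × column total) / table total values derived above, so the function is doing the same bookkeeping as the hand calculation.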
In this section we develop inferential methods for a single proportion that are appropriate when the sample size is too small to apply the normal model to $\hat {p}$. Just like the methods related to the t distribution, these methods can also be applied to large samples.

When the Success-Failure Condition is Not Met
People providing an organ for donation sometimes seek the help of a special "medical consultant". These consultants assist the patient in all aspects of the surgery, with the goal of reducing the possibility of complications during the medical procedure and recovery. Patients might choose a consultant based in part on the historical complication rate of the consultant's clients. One consultant tried to attract patients by noting the average complication rate for liver donor surgeries in the US is about 10%, but her clients have only had 3 complications in the 62 liver donor surgeries she has facilitated. She claims this is strong evidence that her work meaningfully contributes to reducing complications (and therefore she should be hired!).

Exercise 6.42 We will let p represent the true complication rate for liver donors working with this consultant. Estimate p using the data, and label this value $\hat {p}$.
Solution
The sample proportion: $\hat {p} = \frac {3}{62} = 0.048$

Example 6.43 Is it possible to assess the consultant's claim using the data provided?
Solution
No. The claim is that there is a causal connection, but the data are observational. Patients who hire this medical consultant may have lower complication rates for other reasons. While it is not possible to assess this causal claim, it is still possible to test for an association using these data. For this question we ask, could the low complication rate of $\hat {p} = 0.048$ be due to chance?

Exercise 6.44 Write out hypotheses in both plain and statistical language to test for the association between the consultant's work and the true complication rate, p, for this consultant's clients.
Solution
• H0: There is no association between the consultant's contributions and the clients' complication rate. In statistical language, p = 0.10.
• HA: Patients who work with the consultant tend to have a complication rate lower than 10%, i.e. p < 0.10.

Example 6.45 In the examples based on large sample theory, we modeled $\hat {p}$ using the normal distribution. Why is this not appropriate here?
Solution
The independence assumption may be reasonable if each of the surgeries is from a different surgical team. However, the success-failure condition is not satisfied. Under the null hypothesis, we would anticipate seeing $62 \times 0.10 = 6.2$ complications, not the 10 required for the normal approximation. The uncertainty associated with the sample proportion should not be modeled using the normal distribution.

However, we would still like to assess the hypotheses from Exercise 6.44 in the absence of the normal framework. To do so, we need to evaluate the possibility of a sample value ($\hat {p}$) this far below the null value, $p_0 = 0.10$. This possibility is usually measured with a p-value. The p-value is computed based on the null distribution, which is the distribution of the test statistic if the null hypothesis is true. Supposing the null hypothesis is true, we can compute the p-value by identifying the chance of observing a test statistic that favors the alternative hypothesis at least as strongly as the observed test statistic. This can be done using simulation.
Generating the null distribution and p-value by simulation
We want to identify the sampling distribution of the test statistic ($\hat {p}$) if the null hypothesis was true. In other words, we want to see how the sample proportion changes due to chance alone. Then we plan to use this information to decide whether there is enough evidence to reject the null hypothesis.
Under the null hypothesis, 10% of liver donors have complications during or after surgery. Suppose this rate was really no different for the consultant's clients. If this was the case, we could simulate 62 clients to get a sample proportion for the complication rate from the null distribution. Each client can be simulated using a deck of cards. Take one red card and nine black cards, and mix them up. Then drawing a card is one way of simulating the chance a patient has a complication if the true complication rate is 10%. If we do this 62 times and compute the proportion of patients with complications in the simulation, $\hat {p}_{sim}$, then this sample proportion is exactly a sample from the null distribution.
An undergraduate student was paid \$2 to complete this simulation. There were 5 simulated cases with a complication and 57 simulated cases without a complication, i.e. $\hat {p}_{sim} = \frac {5}{62} = 0.081$.

Example 6.46 Is this one simulation enough to determine whether or not we should reject the null hypothesis from Exercise 6.44? Explain.
Solution
No. To assess the hypotheses, we need to see a distribution of many $\hat {p}_{sim}$, not just a single draw from this sampling distribution.

One simulation isn't enough to get a sense of the null distribution; many simulation studies are needed. Roughly 10,000 seems sufficient. However, paying someone to simulate 10,000 studies by hand is a waste of time and money. Instead, simulations are typically programmed into a computer, which is much more efficient.
Figure 6.20 shows the results of 10,000 simulated studies. The proportions that are equal to or less than $\hat {p} = 0.048$ are shaded. The shaded areas represent sample proportions under the null distribution that provide at least as much evidence as $\hat {p}$ favoring the alternative hypothesis. There were 1222 simulated sample proportions with $\hat {p}_{sim} \le 0.048$. We use these to construct the null distribution's left-tail area and find the p-value:
$\text {left tail} = \frac {\text {Number of observed simulations with } \hat {p}_{sim} \le 0.048}{10000} \tag {6.47}$
Of the 10,000 simulated $\hat {p}_{sim}$, 1222 were equal to or smaller than $\hat {p}$. Since the hypothesis test is one-sided, the estimated p-value is equal to this tail area: 0.1222.

Exercise 6.48 Because the estimated p-value is 0.1222, which is larger than the significance level 0.05, we do not reject the null hypothesis. Explain what this means in plain language in the context of the problem.
Solution
There isn't sufficiently strong evidence to support an association between the consultant's work and fewer surgery complications.

Exercise 6.49 Does the conclusion in Exercise 6.48 imply there is no real association between the surgical consultant's work and the risk of complications? Explain.
Solution
No. It might be that the consultant's work is associated with a reduction but that there isn't enough data to convincingly show this connection.

One-sided hypothesis test for p with a small sample
The p-value is always derived by analyzing the null distribution of the test statistic.
The normal model poorly approximates the null distribution for $\hat {p}$ when the success-failure condition is not satisfied. As a substitute, we can generate the null distribution using simulated sample proportions ($\hat {p}_{sim}$) and use this distribution to compute the tail area, i.e. the p-value.
We continue to use the same rule as before when computing the p-value for a two-sided test: double the single tail area, which remains a reasonable approach even when the sampling distribution is asymmetric. However, this can result in p-values larger than 1 when the point estimate is very near the mean in the null distribution; in such cases, we write that the p-value is 1. Also, very large p-values computed in this way (e.g. 0.85) may also be slightly inflated.
Exercise 6.48 said the p-value is estimated. It is not exact because the simulated null distribution itself is not exact, only a close approximation. However, we can generate an exact null distribution and p-value using the binomial model from Section 3.4.

Generating the exact null distribution and p-value
The number of successes in n independent cases can be described using the binomial model, which was introduced in Section 3.4. Recall that the probability of observing exactly k successes is given by
$P(\text {k successes}) = \binom {n}{k} p^k(1 - p)^{n-k} = \frac {n!}{k!(n - k)!} p^k (1 - p)^{n-k} \tag {6.50}$
where p is the true probability of success. The expression $\binom {n}{k}$ is read as n choose k, and the exclamation points represent factorials. For instance, 3! is equal to $3 \times 2 \times 1 = 6$, 4! is equal to $4 \times 3 \times 2 \times 1 = 24$, and so on (see Section 3.4).
The tail area of the null distribution is computed by adding up the probability in Equation (6.50) for each k that provides at least as strong of evidence favoring the alternative hypothesis as the data. If the hypothesis test is one-sided, then the p-value is represented by a single tail area. If the test is two-sided, compute the single tail area and double it to get the p-value, just as we have done in the past.

Example 6.51 Compute the exact p-value to check the consultant's claim that her clients' complication rate is below 10%.
Solution
Exactly k = 3 complications were observed in the n = 62 cases cited by the consultant. Since we are testing against the 10% national average, our null hypothesis is p = 0.10. We can compute the p-value by adding up the cases where there are 3 or fewer complications:
$\text {p-value} = \sum \limits ^3_{j=0} \binom {n}{j} p^j(1 - p)^{n-j}$
$= \sum \limits ^3_{j=0} \binom {62}{j} 0.1^j(1 - 0.1)^{62-j}$
$= \binom {62}{0} 0.1^0(1 - 0.1)^{62-0} + \binom {62}{1} 0.1^1(1 - 0.1)^{62-1} + \binom {62}{2}0.1^2(1 - 0.1)^{62-2} + \binom {62}{3} 0.1^3(1 - 0.1)^{62-3}$
$= 0.0015 + 0.0100 + 0.0340 + 0.0755$
$= 0.1210$
This exact p-value is very close to the p-value based on the simulations (0.1222), and we come to the same conclusion. We do not reject the null hypothesis, and there is not statistically significant evidence to support the association. If it were plotted, the exact null distribution would look almost identical to the simulated null distribution shown in Figure 6.20 on page 290.

Using simulation for goodness of fit tests
Simulation methods may also be used to test goodness of fit. In short, we simulate a new sample based on the purported bin probabilities, then compute a chi-square test statistic $X^2_{sim}$. We do this many times (e.g.
10,000 times), and then examine the distribution of these simulated chi-square test statistics. This distribution will be a very precise null distribution for the test statistic $X^2$ if the probabilities are accurate, and we can find the upper tail of this null distribution, using a cutoff of the observed test statistic, to calculate the p-value.

Example 6.52 Section 6.3 introduced an example where we considered whether jurors were racially representative of the population. Would our findings differ if we used a simulation technique?
Solution
Since the minimum bin count condition was satisfied, the chi-square distribution is an excellent approximation of the null distribution, meaning the results should be very similar. Figure 6.21 shows the simulated null distribution using 100,000 simulated $X^2_{sim}$ values with an overlaid curve of the chi-square distribution. The distributions are almost identical, and the p-values are essentially indistinguishable: 0.115 for the simulated null distribution and 0.117 for the theoretical null distribution.
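The sketch below (an addition, not from the text) illustrates the small-sample ideas of this section for the consultant example: it simulates the null distribution of $\hat{p}$ under p = 0.10 and also computes the exact binomial tail probability. The random seed and the choice of 10,000 simulations are arbitrary assumptions.

```python
# Minimal sketch (not from the text): simulated and exact p-values for the
# consultant example (n = 62 clients, null complication rate p0 = 0.10).
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(seed=1)          # arbitrary seed
n, p0, p_hat = 62, 0.10, 3 / 62

# Simulated null distribution of the sample proportion (10,000 studies)
p_sim = rng.binomial(n, p0, size=10_000) / n
sim_p_value = np.mean(p_sim <= p_hat)        # left-tail area, roughly 0.12

# Exact p-value from the binomial model: P(3 or fewer complications)
exact_p_value = binom.cdf(3, n, p0)          # about 0.121

print(round(sim_p_value, 3), round(exact_p_value, 4))
```

The simulated left-tail area hovers around 0.12 and the exact binomial value is about 0.121, in line with the 0.1222 and 0.1210 figures quoted above.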
Cardiopulmonary resuscitation (CPR) is a procedure commonly used on individuals suffering a heart attack when other emergency resources are not available. This procedure is helpful in maintaining some blood circulation, but the chest compressions involved can also cause internal injuries. Internal bleeding and other injuries complicate additional treatment efforts following arrival at a hospital. For instance, blood thinners may be used to help release a clot that is causing the heart attack. However, the blood thinner would negatively affect an internal injury. Here we consider an experiment for patients who underwent CPR for a heart attack and were subsequently admitted to a hospital. (Efficacy and safety of thrombolytic therapy after initially unsuccessful cardiopulmonary resuscitation: a prospective clinical trial, by Bottiger et al., The Lancet, 2001.) These patients were randomly divided into a treatment group where they received a blood thinner or the control group where they did not receive the blood thinner. The outcome variable of interest was whether the patients survived for at least 24 hours.

Example 6.53 Form hypotheses for this study in plain and statistical language. Let $p_c$ represent the true survival proportion in the control group and $p_t$ represent the survival proportion for the treatment group.
Solution
We are interested in whether the blood thinners are helpful or harmful, so this should be a two-sided test.
• H0: Blood thinners do not have an overall survival effect, i.e. the survival proportions are the same in each group. $p_t - p_c = 0.$
• HA: Blood thinners do have an impact on survival. $p_t - p_c \ne 0.$

Large Sample Framework for a Difference in Two Proportions
There were 50 patients in the experiment who did not receive the blood thinner and 40 patients who did. The study results are shown in Table 6.22.

Table 6.22: Results for the CPR study. Patients in the treatment group were given a blood thinner, and patients in the control group were not.
            Survived   Died   Total
Control     11         39     50
Treatment   14         26     40
Total       25         65     90

Exercise 6.54 What is the observed survival rate in the control group? And in the treatment group? Also, provide a point estimate of the difference in survival proportions of the two groups: $\hat {p}_t - \hat {p}_c$.
Solution
Observed control survival rate: $\hat{p}_c = \dfrac {11}{50} = 0.22.$ Treatment survival rate: $\hat{p}_t = \dfrac {14}{40} = 0.35.$ Observed difference: $\hat {p}_t - \hat {p}_c = 0.35 - 0.22 = 0.13.$

According to the point estimate, there is a 13% increase in the survival proportion when patients who have undergone CPR outside of the hospital are treated with blood thinners. However, we wonder if this difference could be due to chance. We'd like to investigate this using a large sample framework, but we first need to check the conditions for such an approach.

Example 6.55 Can the point estimate of the difference in survival proportions be adequately modeled using a normal distribution?
Solution
We will assume the patients are independent, which is probably reasonable. The success-failure condition is also satisfied. Since the proportions are equal under the null, we can compute the pooled proportion, $\hat {p} = \dfrac {(11 + 14)}{(50 + 40)} = 0.278,$ for checking conditions. We find the expected number of successes (13.9, 11.1) and failures (36.1, 28.9) are above 10. The normal model is reasonable.
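As a quick illustration (not part of the original text), the condition check in Example 6.55 can be reproduced directly from the counts in Table 6.22:

```python
# Quick sketch (not from the text): success-failure check with the pooled
# proportion, using the counts in Table 6.22.
n_c, n_t = 50, 40
pooled = (11 + 14) / (n_c + n_t)                               # 0.278

expected_counts = [n_c * pooled, n_t * pooled,                 # successes: 13.9, 11.1
                   n_c * (1 - pooled), n_t * (1 - pooled)]     # failures: 36.1, 28.9

print(round(pooled, 3), [round(e, 1) for e in expected_counts],
      all(e >= 10 for e in expected_counts))
```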
While we can apply a normal framework as an approximation to find a p-value, we might keep in mind that the expected number of successes is only 13.9 in one group and 11.1 in the other. Below we conduct an analysis relying on the large sample normal theory. We will follow up with a small sample analysis and compare the results.

Example 6.56 Assess the hypotheses presented in Example 6.53 using a large sample framework. Use a significance level of $\alpha = 0.05$.
Solution
We suppose the null distribution of the sample difference follows a normal distribution with mean 0 (the null value) and a standard deviation equal to the standard error of the estimate. The null hypothesis in this case would be that the two proportions are the same, so we compute the standard error using the pooled standard error formula from Equation (6.16) on page 273:
$SE = \sqrt {\dfrac {p(1 - p)}{n_t} + \dfrac {p(1 - p)}{n_c}} \approx \sqrt {\dfrac {0.278(1 - 0.278)}{40} + \dfrac {0.278(1 - 0.278)}{50}} = 0.095$
where we have used the pooled estimate $\left( \hat {p} = \dfrac {11+14}{50+40} = 0.278 \right)$ in place of the true proportion, p. The null distribution with mean zero and standard deviation 0.095 is shown in Figure 6.23. We compute the tail areas to identify the p-value. To do so, we use the Z score of the point estimate:
$Z = \dfrac {(\hat {p}_t - \hat {p}_c) - \text {null value}}{SE} = \dfrac {0.13 - 0}{0.095} = 1.37$
If we look this Z score up in Appendix B.1, we see that the right tail has area 0.0853. The p-value is twice the single tail area: about 0.17. This p-value does not provide convincing evidence that the blood thinner helps. Thus, there is insufficient evidence to conclude whether or not the blood thinner helps or hurts. (Remember, we never "accept" the null hypothesis - we can only reject or fail to reject.)

The p-value of about 0.17 relies on the normal approximation. We know that when the sample sizes are large, this approximation is quite good. However, when the sample sizes are relatively small as in this example, the approximation may only be adequate. Next we develop a simulation technique, apply it to these data, and compare our results. In general, the small sample method we develop may be used for any size sample, small or large, and should be considered as more accurate than the corresponding large sample technique.

Simulating a Difference under the Null Distribution
The ideas in this section were first introduced in the optional Section 1.8. Suppose the null hypothesis is true. Then the blood thinner has no impact on survival and the 13% difference was due to chance. In this case, we can simulate null differences that are due to chance using a randomization technique. (The test procedure we employ in this section is formally called a permutation test.) By randomly assigning "fake treatment" and "fake control" stickers to the patients' files, we could get a new grouping - one that is completely due to chance. The expected difference between the two proportions under this simulation is zero.
We run this simulation by taking 40 "treatment (fake)" and 50 "control (fake)" labels and randomly assigning them to the patients. The label counts of 40 and 50 correspond to the number of treatment and control assignments in the actual study. We use a computer program to randomly assign these labels to the patients, and we organize the simulation results into Table 6.24.

Table 6.24: Simulated results for the CPR study under the null hypothesis. The labels were randomly assigned and are independent of the outcome of the patient.
                   Survived   Died   Total
Control (fake)     15         35     50
Treatment (fake)   10         30     40
Total              25         65     90

Exercise 6.57 What is the difference in survival rates between the two fake groups in Table 6.24? How does this compare to the observed 13% in the real groups?
Solution
The difference is $\hat {p}_{\text {t,fake}} - \hat {p}_{\text {c,fake}} = \dfrac {10}{40} - \dfrac {15}{50} = -0.05$, which is closer to the null value $p_0 = 0$ than what we observed.

The difference computed in Exercise 6.57 represents a draw from the null distribution of the sample differences. Next we generate many more simulated experiments to build up the null distribution, much like we did in Section 6.5.2 to build a null distribution for a one sample proportion.

Caution: Simulation in the two proportion case requires that the null difference is zero
The technique described here to simulate a difference from the null distribution relies on an important condition in the null hypothesis: there is no connection between the two variables considered. In some special cases, the null difference might not be zero, and more advanced methods (or a large sample approximation, if appropriate) would be necessary.

Null distribution for the difference in two proportions
We build up an approximation to the null distribution by repeatedly creating tables like the one shown in Table 6.24 and computing the sample differences. The null distribution from 10,000 simulations is shown in Figure 6.25.

Example 6.58 Compare Figures 6.23 and 6.25. How are they similar? How are they different?
Solution
The shapes are similar, but the simulated results show that the continuous approximation of the normal distribution is not very good. We might wonder, how close are the p-values?

Exercise 6.59 The right tail area is about 0.13. (It is only a coincidence that we also have $\hat {p}_t - \hat {p}_c = 0.13$.) The p-value is computed by doubling the right tail area: 0.26. How does this value compare with the large sample approximation for the p-value?
Solution
The approximation in this case is fairly poor (p-values: about 0.17 vs. 0.26), though we come to the same conclusion. The data do not provide convincing evidence showing the blood thinner helps or hurts patients.

In general, small sample methods produce more accurate results since they rely on fewer assumptions. However, they often require some extra work or simulations. For this reason, many statisticians use small sample methods only when conditions for large sample methods are not satisfied.

Randomization for two-way tables and chi-square
Randomization methods may also be used for contingency tables. In short, we create a randomized contingency table, then compute a chi-square test statistic $X^2_{sim}$. We repeat this many times using a computer, and then we examine the distribution of these simulated test statistics. This randomization approach is valid for any sized sample, and it will be more accurate for cases where one or more expected bin counts do not meet the minimum threshold of 5. When the minimum threshold is met, the simulated null distribution will very closely resemble the chi-square distribution. As before, we use the upper tail of the null distribution to calculate the p-value.
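To illustrate the randomization procedure described above, here is a minimal sketch (an addition, not from the text) that shuffles the 25 survival outcomes among the 90 patients many times and computes a two-sided p-value from the simulated differences; the seed and the number of shuffles are arbitrary choices.

```python
# Minimal sketch (not from the text): randomization (permutation) test for the
# CPR study. 25 "survived" labels are shuffled among 90 patients split 40/50.
import numpy as np

rng = np.random.default_rng(seed=2)          # arbitrary seed
outcomes = np.array([1] * 25 + [0] * 65)     # 25 survived, 65 died
observed_diff = 14 / 40 - 11 / 50            # 0.13

diffs = np.empty(10_000)
for i in range(10_000):
    rng.shuffle(outcomes)                    # random relabeling under H0
    treatment, control = outcomes[:40], outcomes[40:]
    diffs[i] = treatment.mean() - control.mean()

p_value = 2 * np.mean(diffs >= observed_diff)   # double the right-tail area
print(round(p_value, 2))                        # roughly 0.26, as in Exercise 6.59
```

With enough shuffles the right-tail area settles near 0.13, matching Figure 6.25 and the doubled value of about 0.26 quoted above.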
Inference for a single proportion

6.1 Vegetarian college students. Suppose that 8% of college students are vegetarians. Determine if the following statements are true or false, and explain your reasoning.
(a) The distribution of the sample proportions of vegetarians in random samples of size 60 is approximately normal since $n \ge 30$.
(b) The distribution of the sample proportions of vegetarian college students in random samples of size 50 is right skewed.
(c) A random sample of 125 college students where 12% are vegetarians would be considered unusual.
(d) A random sample of 250 college students where 12% are vegetarians would be considered unusual.
(e) The standard error would be reduced by one-half if we increased the sample size from 125 to 250.

6.2 Young Americans, Part I. About 77% of young adults think they can achieve the American dream. Determine if the following statements are true or false, and explain your reasoning.36
(a) The distribution of sample proportions of young Americans who think they can achieve the American dream in samples of size 20 is left skewed.
(b) The distribution of sample proportions of young Americans who think they can achieve the American dream in random samples of size 40 is approximately normal since $n \ge 30$.
(c) A random sample of 60 young Americans where 85% think they can achieve the American dream would be considered unusual.
(d) A random sample of 120 young Americans where 85% think they can achieve the American dream would be considered unusual.

6.3 Orange tabbies. Suppose that 90% of orange tabby cats are male. Determine if the following statements are true or false, and explain your reasoning.
(a) The distribution of sample proportions of random samples of size 30 is left skewed.
(b) Using a sample size that is 4 times as large will reduce the standard error of the sample proportion by one-half.
(c) The distribution of sample proportions of random samples of size 140 is approximately normal.
(d) The distribution of sample proportions of random samples of size 280 is approximately normal.

6.4 Young Americans, Part II. About 25% of young Americans have delayed starting a family due to the continued economic slump. Determine if the following statements are true or false, and explain your reasoning.37
(a) The distribution of sample proportions of young Americans who have delayed starting a family due to the continued economic slump in random samples of size 12 is right skewed.
(b) In order for the distribution of sample proportions of young Americans who have delayed starting a family due to the continued economic slump to be approximately normal, we need random samples where the sample size is at least 40.
(c) A random sample of 50 young Americans where 20% have delayed starting a family due to the continued economic slump would be considered unusual.
(d) A random sample of 150 young Americans where 20% have delayed starting a family due to the continued economic slump would be considered unusual.
(e) Tripling the sample size will reduce the standard error of the sample proportion by one-third.

36A. Vaughn. "Poll finds young adults optimistic, but not about money". In: Los Angeles Times (2011).
37Demos.org. "The State of Young America: The Poll". In: (2011).

6.5 Prop 19 in California. In a 2010 Survey USA poll, 70% of the 119 respondents between the ages of 18 and 34 said they would vote in the 2010 general election for Prop 19, which would change California law to legalize marijuana and allow it to be regulated and taxed.
At a 95% confidence level, this sample has an 8% margin of error. Based on this information, determine if the following statements are true or false, and explain your reasoning.38
(a) We are 95% confident that between 62% and 78% of the California voters in this sample support Prop 19.
(b) We are 95% confident that between 62% and 78% of all California voters between the ages of 18 and 34 support Prop 19.
(c) If we considered many random samples of 119 California voters between the ages of 18 and 34, and we calculated 95% confidence intervals for each, 95% of them will include the true population proportion of Californians who support Prop 19.
(d) In order to decrease the margin of error to 4%, we would need to quadruple (multiply by 4) the sample size.
(e) Based on this confidence interval, there is sufficient evidence to conclude that a majority of California voters between the ages of 18 and 34 support Prop 19.

6.6 2010 Healthcare Law. On June 28, 2012 the U.S. Supreme Court upheld the much debated 2010 healthcare law, declaring it constitutional. A Gallup poll released the day after this decision indicates that 46% of 1,012 Americans agree with this decision. At a 95% confidence level, this sample has a 3% margin of error. Based on this information, determine if the following statements are true or false, and explain your reasoning.39
(a) We are 95% confident that between 43% and 49% of Americans in this sample support the decision of the U.S. Supreme Court on the 2010 healthcare law.
(b) We are 95% confident that between 43% and 49% of Americans support the decision of the U.S. Supreme Court on the 2010 healthcare law.
(c) If we considered many random samples of 1,012 Americans, and we calculated the sample proportions of those who support the decision of the U.S. Supreme Court, 95% of those sample proportions will be between 43% and 49%.
(d) The margin of error at a 90% confidence level would be higher than 3%.

6.7 Fireworks on July 4th. In late June 2012, Survey USA published results of a survey stating that 56% of the 600 randomly sampled Kansas residents planned to set off fireworks on July 4th. Determine the margin of error for the 56% point estimate using a 95% confidence level.40

6.8 Elderly drivers. In January 2011, The Marist Poll published a report stating that 66% of adults nationally think licensed drivers should be required to retake their road test once they reach 65 years of age. It was also reported that interviews were conducted on 1,018 American adults, and that the margin of error was 3% using a 95% confidence level.41
(a) Verify the margin of error reported by The Marist Poll.
(b) Based on a 95% confidence interval, does the poll provide convincing evidence that more than 70% of the population think that licensed drivers should be required to retake their road test once they turn 65?

38Survey USA, Election Poll #16804, data collected July 8-11, 2010.
39Gallup, Americans Issue Split Decision on Healthcare Ruling, data collected June 28, 2012.
40Survey USA, News Poll #19333, data collected on June 27, 2012.
41Marist Poll, Road Rules: Re-Testing Drivers at Age 65?, March 4, 2011.

6.9 Life after college. We are interested in estimating the proportion of graduates at a mid-sized university who found a job within one year of completing their undergraduate degree. Suppose we conduct a survey and find out that 348 of the 400 randomly sampled graduates found jobs. The graduating class under consideration included over 4500 students.
(a) Describe the population parameter of interest.
What is the value of the point estimate of this parameter?
(b) Check if the conditions for constructing a confidence interval based on these data are met.
(c) Calculate a 95% confidence interval for the proportion of graduates who found a job within one year of completing their undergraduate degree at this university, and interpret it in the context of the data.
(d) What does "95% confidence" mean?
(e) Now calculate a 99% confidence interval for the same parameter and interpret it in the context of the data.
(f) Compare the widths of the 95% and 99% confidence intervals. Which one is wider? Explain.

6.10 Life rating in Greece. Greece has faced a severe economic crisis since the end of 2009. A Gallup poll surveyed 1,000 randomly sampled Greeks in 2011 and found that 25% of them said they would rate their lives poorly enough to be considered "suffering".42
(a) Describe the population parameter of interest. What is the value of the point estimate of this parameter?
(b) Check if the conditions required for constructing a confidence interval based on these data are met.
(c) Construct a 95% confidence interval for the proportion of Greeks who are "suffering".
(d) Without doing any calculations, describe what would happen to the confidence interval if we decided to use a higher confidence level.
(e) Without doing any calculations, describe what would happen to the confidence interval if we used a larger sample.

6.11 Study abroad. A survey on 1,509 high school seniors who took the SAT and who completed an optional web survey between April 25 and April 30, 2007 shows that 55% of high school seniors are fairly certain that they will participate in a study abroad program in college.43
(a) Is this sample a representative sample from the population of all high school seniors in the US? Explain your reasoning.
(b) Let's suppose the conditions for inference are met. Even if your answer to part (a) indicated that this approach would not be reliable, this analysis may still be interesting to carry out (though not report). Construct a 90% confidence interval for the proportion of high school seniors (of those who took the SAT) who are fairly certain they will participate in a study abroad program in college, and interpret this interval in context.
(c) What does "90% confidence" mean?
(d) Based on this interval, would it be appropriate to claim that the majority of high school seniors are fairly certain that they will participate in a study abroad program in college?

42Gallup World, More Than One in 10 "Suffering" Worldwide, data collected throughout 2011.
43studentPOLL, College-Bound Students' Interests in Study Abroad and Other International Learning Activities, January 2008.

6.12 Legalization of marijuana, Part I. The 2010 General Social Survey asked 1,259 US residents: "Do you think the use of marijuana should be made legal, or not?" 48% of the respondents said it should be made legal.44
(a) Is 48% a sample statistic or a population parameter? Explain.
(b) Construct a 95% confidence interval for the proportion of US residents who think marijuana should be made legal, and interpret it in the context of the data.
(c) A critic points out that this 95% confidence interval is only accurate if the statistic follows a normal distribution, or if the normal model is a good approximation. Is this true for these data? Explain.
(d) A news piece on this survey's findings states, "Majority of Americans think marijuana should be legalized." Based on your confidence interval, is this news piece's statement justified?
6.13 Public option, Part I. A Washington Post article from 2009 reported that "support for a government-run health-care plan to compete with private insurers has rebounded from its summertime lows and wins clear majority support from the public." More specifically, the article says "seven in 10 Democrats back the plan, while almost nine in 10 Republicans oppose it. Independents divide 52 percent against, 42 percent in favor of the legislation." There were 819 Democrats, 566 Republicans and 783 Independents surveyed.45
(a) A political pundit on TV claims that a majority of Independents oppose the health care public option plan. Do these data provide strong evidence to support this statement?
(b) Would you expect a confidence interval for the proportion of Independents who oppose the public option plan to include 0.5? Explain.

6.14 The Civil War. A national survey conducted in 2011 among a simple random sample of 1,507 adults shows that 56% of Americans think the Civil War is still relevant to American politics and political life.46
(a) Conduct a hypothesis test to determine if these data provide strong evidence that the majority of the Americans think the Civil War is still relevant.
(b) Interpret the p-value in this context.
(c) Calculate a 90% confidence interval for the proportion of Americans who think the Civil War is still relevant. Interpret the interval in this context, and comment on whether or not the confidence interval agrees with the conclusion of the hypothesis test.

6.15 Browsing on the mobile device. A 2012 survey of 2,254 American adults indicates that 17% of cell phone owners do their browsing on their phone rather than a computer or other device.47
(a) According to an online article, a report from a mobile research company indicates that 38 percent of Chinese mobile web users only access the internet through their cell phones.48 Conduct a hypothesis test to determine if these data provide strong evidence that the proportion of Americans who only use their cell phones to access the internet is different than the Chinese proportion of 38%.
(b) Interpret the p-value in this context.
(c) Calculate a 95% confidence interval for the proportion of Americans who access the internet on their cell phones, and interpret the interval in this context.

44National Opinion Research Center, General Social Survey, 2010.
45D. Balz and J. Cohen. "Most support public option for health insurance, poll finds". In: The Washington Post (2009).
46Pew Research Center Publications, Civil War at 150: Still Relevant, Still Divisive, data collected between March 30 - April 3, 2011.
47Pew Internet, Cell Internet Use 2012, data collected between March 15 - April 13, 2012.
48S. Chang. "The Chinese Love to Use Feature Phone to Access the Internet". In: M.I.C Gadget (2012).

6.16 Is college worth it? Part I. Among a simple random sample of 331 American adults who do not have a four-year college degree and are not currently enrolled in school, 48% said they decided not to go to college because they could not afford school.49
(a) A newspaper article states that only a minority of the Americans who decide not to go to college do so because they cannot afford it and uses the point estimate from this survey as evidence. Conduct a hypothesis test to determine if these data provide strong evidence supporting this statement.
(b) Would you expect a confidence interval for the proportion of American adults who decide not to go to college because they cannot afford it to include 0.5? Explain.

6.17 Taste test.
Some people claim that they can tell the difference between a diet soda and a regular soda in the first sip. A researcher wanting to test this claim randomly sampled 80 such people. He then filled 80 plain white cups with soda, half diet and half regular through random assignment, and asked each person to take one sip from their cup and identify the soda as diet or regular. 53 participants correctly identified the soda.
(a) Do these data provide strong evidence that these people are able to detect the difference between diet and regular soda, in other words, are the results significantly better than just random guessing?
(b) Interpret the p-value in this context.

6.18 Is college worth it? Part II. Exercise 6.16 presents the results of a poll where 48% of 331 Americans who decide to not go to college do so because they cannot afford it.
(a) Calculate a 90% confidence interval for the proportion of Americans who decide to not go to college because they cannot afford it, and interpret the interval in context.
(b) Suppose we wanted the margin of error for the 90% confidence level to be about 1.5%. How large of a survey would you recommend?

6.19 College smokers. We are interested in estimating the proportion of students at a university who smoke. Out of a random sample of 200 students from this university, 40 students smoke.
(a) Calculate a 95% confidence interval for the proportion of students at this university who smoke, and interpret this interval in context. (Reminder: check conditions)
(b) If we wanted the margin of error to be no larger than 2% at a 95% confidence level for the proportion of students who smoke, how big of a sample would we need?

6.20 Legalize Marijuana, Part II. As discussed in Exercise 6.12, the 2010 General Social Survey reported a sample where about 48% of US residents thought marijuana should be made legal. If we wanted to limit the margin of error of a 95% confidence interval to 2%, about how many Americans would we need to survey?

6.21 Public option, Part II. Exercise 6.13 presents the results of a poll evaluating support for the health care public option in 2009, reporting that 52% of Independents in the sample opposed the public option. If we wanted to estimate this number to within 1% with 90% confidence, what would be an appropriate sample size?

6.22 Acetaminophen and liver damage. It is believed that large doses of acetaminophen (the active ingredient in over the counter pain relievers like Tylenol) may cause damage to the liver. A researcher wants to conduct a study to estimate the proportion of acetaminophen users who have liver damage. For participating in this study, he will pay each subject \$20 and provide a free medical consultation if the patient has liver damage.
(a) If he wants to limit the margin of error of his 98% confidence interval to 2%, what is the minimum amount of money he needs to set aside to pay his subjects?
(b) The amount you calculated in part (a) is substantially over his budget so he decides to use fewer subjects. How will this affect the width of his confidence interval?

49Pew Research Center Publications, Is College Worth It?, data collected between March 15-29, 2011.

Difference of two proportions

6.23 Social experiment, Part I. A "social experiment" conducted by a TV program questioned what people do when they see a very obviously bruised woman getting picked on by her boyfriend. On two different occasions at the same restaurant, the same couple was depicted.
In one scenario the woman was dressed "provocatively" and in the other scenario the woman was dressed "conservatively". The table below shows how many restaurant diners were present under each scenario, and whether or not they intervened.

             Scenario
Intervene?   Provocative   Conservative   Total
Yes          5             15             20
No           15            10             25
Total        20            25             45

Explain why the sampling distribution of the difference between the proportions of interventions under provocative and conservative scenarios does not follow an approximately normal distribution.

6.24 Heart transplant success. The Stanford University Heart Transplant Study was conducted to determine whether an experimental heart transplant program increased lifespan. Each patient entering the program was officially designated a heart transplant candidate, meaning that he was gravely ill and might benefit from a new heart. Patients were randomly assigned into treatment and control groups. Patients in the treatment group received a transplant, and those in the control group did not. The table below displays how many patients survived and died in each group.50

        control   treatment
alive   4         24
dead    30        45

A hypothesis test would reject the conclusion that the survival rate is the same in each group, and so we might like to calculate a confidence interval. Explain why we cannot construct such an interval using the normal approximation. What might go wrong if we constructed the confidence interval despite this problem?

6.25 Gender and color preference. A 2001 study asked 1,924 male and 3,666 female undergraduate college students their favorite color. A 95% confidence interval for the difference between the proportions of males and females whose favorite color is black ($p_{male}-p_{female}$) was calculated to be (0.02, 0.06). Based on this information, determine if the following statements are true or false, and explain your reasoning for each statement you identify as false.51
(a) We are 95% confident that the true proportion of males whose favorite color is black is 2% lower to 6% higher than the true proportion of females whose favorite color is black.
(b) We are 95% confident that the true proportion of males whose favorite color is black is 2% to 6% higher than the true proportion of females whose favorite color is black.
(c) 95% of random samples will produce 95% confidence intervals that include the true difference between the population proportions of males and females whose favorite color is black.
(d) We can conclude that there is a significant difference between the proportions of males and females whose favorite color is black and that the difference between the two sample proportions is too large to plausibly be due to chance.
(e) The 95% confidence interval for ($p_{female} - p_{male}$) cannot be calculated with only the information given in this exercise.

50B. Turnbull et al. "Survivorship of Heart Transplant Data". In: Journal of the American Statistical Association 69 (1974), pp. 74 - 80.
51L Ellis and C Ficek. "Color preferences according to gender and sexual orientation". In: Personality and Individual Differences 31.8 (2001), pp. 1375-1379.

6.26 The Daily Show. A 2010 Pew Research foundation poll indicates that among 1,099 college graduates, 33% watch The Daily Show. Meanwhile, 22% of the 1,110 people with a high school degree but no college degree in the poll watch The Daily Show. A 95% confidence interval for ($p_{college grad} - p_{HS or less}$), where p is the proportion of those who watch The Daily Show, is (0.07, 0.15).
Based on this information, determine if the following statements are true or false, and explain your reasoning if you identify the statement as false.52
(a) At the 5% significance level, the data provide convincing evidence of a difference between the proportions of college graduates and those with a high school degree or less who watch The Daily Show.
(b) We are 95% confident that 7% less to 15% more college graduates watch The Daily Show than those with a high school degree or less.
(c) 95% of random samples of 1,099 college graduates and 1,110 people with a high school degree or less will yield differences in sample proportions between 7% and 15%.
(d) A 90% confidence interval for ($p_{college grad} - p_{HS or less}$) would be wider.
(e) A 95% confidence interval for ($p_{HS or less} - p_{college grad}$) is (-0.15,-0.07).

6.27 Public Option, Part III. Exercise 6.13 presents the results of a poll evaluating support for the health care public option plan in 2009. 70% of 819 Democrats and 42% of 783 Independents support the public option.
(a) Calculate a 95% confidence interval for the difference ($p_D - p_I$) and interpret it in this context. We have already checked conditions for you.
(b) True or false: If we had picked a random Democrat and a random Independent at the time of this poll, it is more likely that the Democrat would support the public option than the Independent.

6.28 Sleep deprivation, CA vs. OR, Part I. According to a report on sleep deprivation by the Centers for Disease Control and Prevention, the proportion of California residents who reported insufficient rest or sleep during each of the preceding 30 days is 8.0%, while this proportion is 8.8% for Oregon residents. These data are based on simple random samples of 11,545 California and 4,691 Oregon residents. Calculate a 95% confidence interval for the difference between the proportions of Californians and Oregonians who are sleep deprived and interpret it in context of the data.53

6.29 Offshore drilling, Part I. A 2010 survey asked 827 randomly sampled registered voters in California "Do you support? Or do you oppose? Drilling for oil and natural gas off the Coast of California? Or do you not know enough to say?" Below is the distribution of responses, separated based on whether or not the respondent graduated from college.54

                College Grad
                Yes    No
Support         154    132
Oppose          180    126
Do not know     104    131
Total           438    389

(a) What percent of college graduates and what percent of the non-college graduates in this sample do not know enough to have an opinion on drilling for oil and natural gas off the Coast of California?
(b) Conduct a hypothesis test to determine if the data provide strong evidence that the proportion of college graduates who do not have an opinion on this issue is different than that of non-college graduates.

52The Pew Research Center, Americans Spending More Time Following the News, data collected June 8-28, 2010.
53CDC, Perceived Insufficient Rest or Sleep Among Adults - United States, 2008.
54Survey USA, Election Poll #16804, data collected July 8-11, 2010.

6.30 Sleep deprivation, CA vs. OR, Part II. Exercise 6.28 provides data on sleep deprivation rates of Californians and Oregonians. The proportion of California residents who reported insufficient rest or sleep during each of the preceding 30 days is 8.0%, while this proportion is 8.8% for Oregon residents. These data are based on simple random samples of 11,545 California and 4,691 Oregon residents.
(a) Conduct a hypothesis test to determine if these data provide strong evidence the rate of sleep deprivation is different for the two states. (Reminder: check conditions) (b) It is possible the conclusion of the test in part (a) is incorrect. If this is the case, what type of error was made? 6.31 Offshore drilling, Part II. Results of a poll evaluating support for drilling for oil and natural gas off the coast of California were introduced in Exercise 6.29. College Grad Yes No Support Oppose Do not know 154 180 104 132 126 131 Total 438 389 (a) What percent of college graduates and what percent of the non-college graduates in this sample support drilling for oil and natural gas off the Coast of California? (b) Conduct a hypothesis test to determine if the data provide strong evidence that the proportion of college graduates who support offshore drilling in California is different than that of noncollege graduates. 6.32 Full body scan, Part I. A news article reports that "Americans have differing views on two potentially inconvenient and invasive practices that airports could implement to uncover potential terrorist attacks." This news piece was based on a survey conducted among a random sample of 1,137 adults nationwide, interviewed by telephone November 7-10, 2010, where one of the questions on the survey was "Some airports are now using `full-body' digital x-ray machines to electronically screen passengers in airport security lines. Do you think these new x-ray machines should or should not be used at airports?" Below is a summary of responses based on party affiliation.55 Party Affiliation Republican Democrat Independent Should Should not Don't know/No answer 264 38 16 299 55 15 351 77 22 Total 318 369 450 (a) Conduct an appropriate hypothesis test evaluating whether there is a difference in the proportion of Republicans and Democrats who think the full-body scans should be applied in airports. Assume that all relevant conditions are met. (b) The conclusion of the test in part (a) may be incorrect, meaning a testing error was made. If an error was made, was it a Type I or a Type II error? Explain. 55S. Condon. "Poll: 4 in 5 Support Full-Body Airport Scanners". In: CBS News (2010). 6.33 Sleep deprived transportation workers. The National Sleep Foundation conducted a survey on the sleep habits of randomly sampled transportation workers and a control sample of non-transportation workers. The results of the survey are shown below.56 Transportation Professionals Control Pilots Truck Drivers Train Operators Bux/Taxi/Limo Drivers Less than 6 hours of sleep 6 to 8 hours of sleep More than 8 hours 35 193 64 19 132 51 35 117 51 29 119 32 21 131 58 Tota 292 202 203 180 210 Conduct a hypothesis test to evaluate if these data provide evidence of a difference between the proportions of truck drivers and non-transportation workers (the control group) who get less than 6 hours of sleep per day, i.e. are considered sleep deprived. 6.34 Prenatal vitamins and Autism. Researchers studying the link between prenatal vitamin use and autism surveyed the mothers of a random sample of children aged 24 - 60 months with autism and conducted another separate random sample for children with typical development. 
The table below shows the number of mothers in each group who did and did not use prenatal vitamins during the three months before pregnancy (periconceptional period).57 Autism Autism Typical development Total No vitamin Vitamin 111 143 70 159 181 302 Total 254 229 483 (a) State appropriate hypotheses to test for independence of use of prenatal vitamins during the three months before pregnancy and autism. (b) Complete the hypothesis test and state an appropriate conclusion. (Reminder: verify any necessary conditions for the test.) (c) A New York Times article reporting on this study was titled "Prenatal Vitamins May Ward Off Autism". Do you nd the title of this article to be appropriate? Explain your answer. Additionally, propose an alternative title.58 6.35 HIV in sub-Saharan Africa. In July 2008 the US National Institutes of Health announced that it was stopping a clinical study early because of unexpected results. The study population consisted of HIV-infected women in sub-Saharan Africa who had been given single dose Nevaripine (a treatment for HIV) while giving birth, to prevent transmission of HIV to the infant. The study was a randomized comparison of continued treatment of a woman (after successful childbirth) with Nevaripine vs. Lopinavir, a second drug used to treat HIV. 240 women participated in the study; 120 were randomized to each of the two treatments. Twenty-four weeks after starting the study treatment, each woman was tested to determine if the HIV infection was becoming worse (an outcome called virologic failure). Twenty-six of the 120 women treated with Nevaripine experienced virologic failure, while 10 of the 120 women treated with the other drug experienced virologic failure.59 (a) Create a two-way table presenting the results of this study. (b) State appropriate hypotheses to test for independence of treatment and virologic failure. (c) Complete the hypothesis test and state an appropriate conclusion. (Reminder: verify any necessary conditions for the test.) 56National Sleep Foundation, 2012 Sleep in America Poll: Transportation Workers Sleep, 2012. 57R.J. Schmidt et al. \Prenatal vitamins, one-carbon metabolism gene variants, and risk for autism". In: Epidemiology 22.4 (2011), p. 476. 58R.C. Rabin. "Patterns: Prenatal Vitamins May Ward Off Autism". In: New York Times (2011). 59S. Lockman et al. "Response to antiretroviral therapy after a single, peripartum dose of nevirapine". In: Obstetrical & gynecological survey 62.6 (2007), p. 361. 6.36 Diabetes and unemployment. A 2012 Gallup poll surveyed Americans about their employment status and whether or not they have diabetes. The survey results indicate that 1.5% of the 47,774 employed (full or part time) and 2.5% of the 5,855 unemployed 18-29 year olds have diabetes.60 (a) Create a two-way table presenting the results of this study. (b) State appropriate hypotheses to test for independence of incidence of diabetes and employment status. (c) The sample difference is about 1%. If we completed the hypothesis test, we would nd that the p-value is very small (about 0), meaning the difference is statistically signi cant. Use this result to explain the difference between statistically signi cant and practically significant findings. Testing for goodness of t using chi-square 6.37 True or false, Part I. Determine if the statements below are true or false. For each false statement, suggest an alternative wording to make it a true statement. 
(a) The chi-square distribution, just like the normal distribution, has two parameters, mean and standard deviation. (b) The chi-square distribution is always right skewed, regardless of the value of the degrees of freedom parameter. (c) The chi-square statistic is always positive. (d) As the degrees of freedom increases, the shape of the chi-square distribution becomes more skewed. 6.38 True or false, Part II. Determine if the statements below are true or false. For each false statement, suggest an alternative wording to make it a true statement. (a) As the degrees of freedom increases, the mean of the chi-square distribution increases. (b) If you found $X^2$ = 10 with df = 5 you would fail to reject H0 at the 5% signi cance level. (c) When nding the p-value of a chi-square test, we always shade the tail areas in both tails. (d) As the degrees of freedom increases, the variability of the chi-square distribution decreases. 6.39 Open source textbook. A professor using an open source introductory statistics book predicts that 60% of the students will purchase a hard copy of the book, 25% will print it out from the web, and 15% will read it online. At the end of the semester he asks his students to complete a survey where they indicate what format of the book they used. Of the 126 students, 71 said they bought a hard copy of the book, 30 said they printed it out from the web, and 25 said they read it online. (a) State the hypotheses for testing if the professor's predictions were inaccurate. (b) How many students did the professor expect to buy the book, print the book, and read the book exclusively online? (c) This is an appropriate setting for a chi-square test. List the conditions required for a test and verify they are satisfied. (d) Calculate the chi-squared statistic, the degrees of freedom associated with it, and the p-value. (e) Based on the p-value calculated in part (d), what is the conclusion of the hypothesis test? Interpret your conclusion in this context. 60Gallup Wellbeing, Employed Americans in Better Health Than the Unemployed, data collected Jan. 2, 2011 - May 21, 2012. 6.40 Evolution vs. creationism. A Gallup Poll released in December 2010 asked 1019 adults living in the Continental U.S. about their belief in the origin of humans. These results, along with results from a more comprehensive poll from 2001 (that we will assume to be exactly accurate), are summarized in the table below:61 Year Response 2010 2001 Humans evolved, with God guiding (1) Humans evolved, but God had no part in process (2) God created humans in present form (3) Other / No opinion (4) 38% 16% 40% 6% 37% 12% 45% 6% (a) Calculate the actual number of respondents in 2010 that fall in each response category. (b) State hypotheses for the following research question: have beliefs on the origin of human life changed since 2001? (c) Calculate the expected number of respondents in each category under the condition that the null hypothesis from part (b) is true. (d) Conduct a chi-square test and state your conclusion. (Reminder: verify conditions.) Testing for independence in two-way tables 6.41 Quitters. Does being part of a support group affect the ability of people to quit smoking? A county health department enrolled 300 smokers in a randomized experiment. 150 participants were assigned to a group that used a nicotine patch and met weekly with a support group; the other 150 received the patch and did not meet with a support group. 
At the end of the study, 40 of the participants in the patch plus support group had quit smoking while only 30 smokers had quit in the other group. (a) Create a two-way table presenting the results of this study. (b) Answer each of the following questions under the null hypothesis that being part of a support group does not affect the ability of people to quit smoking, and indicate whether the expected values are higher or lower than the observed values. i. How many subjects in the "patch + support" group would you expect to quit? ii. How many subjects in the "only patch" group would you expect to not quit? 6.42 Full body scan, Part II. The table below summarizes a data set we rst encountered in Exercise 6.32 regarding views on full-body scans and political affiliation. The differences in each political group may be due to chance. Complete the following computations under the null hypothesis of independence between an individual's party affiliation and his support of full-body scans. It may be useful to rst add on an extra column for row totals before proceeding with the computations. Party Affiliation Republican Democrat Independent Should Should not Don't know/No answer 264 38 16 299 55 15 351 77 22 Total 318 369 450 (a) How many Republicans would you expect to not support the use of full-body scans? (b) How many Democrats would you expect to support the use of full-body scans? (c) How many Independents would you expect to not know or not answer? 61Four in 10 Americans Believe in Strict Creationism, December 17, 2010, http://www.gallup.com/poll/145286/Four-Americans-Believe-Strict-Creationism.aspx. 6.43 Offshore drilling, Part III. The table below summarizes a data set we rst encountered in Exercise 6.29 that examines the responses of a random sample of college graduates and nongraduates on the topic of oil drilling. Complete a chi-square test for these data to check whether there is a statistically signi cant difference in responses from college graduates and non-graduates. College Grad Yes No Support Oppose Do not know 154 180 104 132 126 131 Total 438 389 6.44 Coffee and Depression. Researchers conducted a study investigating the relationship between caffeinated coffee consumption and risk of depression in women. They collected data on 50,739 women free of depression symptoms at the start of the study in the year 1996, and these women were followed through 2006. The researchers used questionnaires to collect data on caffeinated coffee consumption, asked each individual about physician-diagnosed depression, and also asked about the use of antidepressants. The table below shows the distribution of incidences of depression by amount of caffeinated coffee consumption.62 Caffeinated coffee consumption $\le$ 1cup/week 2-6 cups/week 1 cup/day 2-3 cups/day $\ge$ 4 cups/day Total Yes No 670 11,545 373 6,244 905 16,329 564 11,726 95 2,288 2,607 48,132 Total 12,215 6,617 17,234 12,290 2,383 50,739 (a) What type of test is appropriate for evaluating if there is an association between coffee intake and depression? (b) Write the hypotheses for the test you identi ed in part (a). (c) Calculate the overall proportion of women who do and do not suffer from depression. (d) Identify the expected count for the highlighted cell, and calculate the contribution of this cell to the test statistic, i.e. $\frac {(Observed - Expected)^2}{Expected}$. (e) The test statistic is X2 = 20.93. What is the p-value? (f) What is the conclusion of the hypothesis test? 
(g) One of the authors of this study was quoted on the NYTimes as saying it was "too early to recommend that women load up on extra coffee" based on just this study.63 Do you agree with this statement? Explain your reasoning. 62M. Lucas et al. "Coffee, caffeine, and risk of depression among women". In: Archives of internal medicine 171.17 (2011), p. 1571. 63A. O'Connor. "Coffee Drinking Linked to Less Depression in Women". In: New York Times (2011). 6.45 Privacy on Facebook. A 2011 survey asked 806 randomly sampled adult Facebook users about their Facebook privacy settings. One of the questions on the survey was, "Do you know how to adjust your Facebook privacy settings to control what people can and cannot see?" The responses are cross-tabulated based on gender.64 Gender Male Female Total Yes No Not sure 288 61 10 378 62 7 666 123 17 Total 359 447 806 (a) State appropriate hypotheses to test for independence of gender and whether or not Facebook users know how to adjust their privacy settings. (b) Verify any necessary conditions for the test and determine whether or not a chi-square test can be completed. 6.46 Shipping holiday gifts. A December 2010 survey asked 500 randomly sampled Los Angeles residents which shipping carrier they prefer to use for shipping holiday gifts. The table below shows the distribution of responses by age group as well as the expected counts for each cell (shown in parentheses). Age 18-34 35-54 55+ Total USPS UPS FedEx Something else Not sure 72 (81) 52 (53) 31 (21) 7 (5) 3 (5) 97 (102) 76 (68) 24 (27) 6 (7) 6 (5) 76 (62) 34 (41) 9 (16) 3 (4) 4 (3) 245 162 64 16 13 Total 165 209 126 500 (a) State the null and alternative hypotheses for testing for independence of age and preferred shipping method for holiday gifts among Los Angeles residents. (b) Are the conditions for inference using a chi-square test satisfied? Small sample hypothesis testing for a proportion 6.47 Bullying in schools. A 2012 Survey USA poll asked Florida residents how big of a problem they thought bullying was in local schools. 9 out of 191 18-34 year olds responded that bullying is no problem at all. Using these data, is it appropriate to construct a con dence interval using the formula $\hat {p} \pm z^* \sqrt {\frac {\hat {p}(1 - \hat {p})}{n}}$ for the true proportion of 18-34 year old Floridians who think bullying is no problem at all? If it is appropriate, construct the con dence interval. If it is not, explain why. 64Survey USA, News Poll #17960, data collected February 16-17, 2011. 6.48 Choose a test. We would like to test the following hypotheses: H0 : p = 0.1 HA : $p \ne 0.1$ The sample size is 120 and the sample proportion is 8.5%. Determine which of the below test(s) is/are appropriate for this situation and explain your reasoning. I. Z test for a proportion, i.e. proportion test using normal model II. Z test for comparing two proportions III. $X^2$ test of independence IV. Simulation test for a proportion V. t test for a mean VI. ANOVA 6.49 The Egyptian Revolution. A popular uprising that started on January 25, 2011 in Egypt led to the 2011 Egyptian Revolution. Polls show that about 69% of American adults followed the news about the political crisis and demonstrations in Egypt closely during the rst couple weeks following the start of the uprising. 
Among a random sample of 30 high school students, it was found that only 17 of them followed the news about Egypt closely during this time.65 (a) Write the hypotheses for testing if the proportion of high school students who followed the news about Egypt is different than the proportion of American adults who did. (b) Calculate the proportion of high schoolers in this sample who followed the news about Egypt closely during this time. (c) Based on large sample theory, we modeled ^p using the normal distribution. Why should we be cautious about this approach for these data? (d) The normal approximation will not be as reliable as a simulation, especially for a sample of this size. Describe how to perform such a simulation and, once you had results, how to estimate the p-value. (e) Below is a histogram showing the distribution of ^psim in 10,000 simulations under the null hypothesis. Estimate the p-value using the plot and determine the conclusion of the hypothesis test. 65Gallup Politics, Americans' Views of Egypt Sharply More Negative, data collected February 2-5, 2011. 6.50 Assisted Reproduction. Assisted Reproductive Technology (ART) is a collection of techniques that help facilitate pregnancy (e.g. in vitro fertilization). A 2008 report by the Centers for Disease Control and Prevention estimated that ART has been successful in leading to a live birth in 31% of cases66. A new fertility clinic claims that their success rate is higher than average. A random sample of 30 of their patients yielded a success rate of 40%. A consumer watchdog group would like to determine if this provides strong evidence to support the company's claim. (a) Write the hypotheses to test if the success rate for ART at this clinic is signi cantly higher than the success rate reported by the CDC. (b) Based on large sample theory, we modeled ^p using the normal distribution. Why is this not appropriate here? (c) The normal approximation would be less reliable here, so we should use a simulation strategy. Describe a setup for a simulation that would be appropriate in this situation and how the p-value can be calculated using the simulation results. (d) Below is a histogram showing the distribution of ^psim in 10,000 simulations under the null hypothesis. Estimate the p-value using the plot and use it to evaluate the hypotheses. (e) After performing this analysis, the consumer group releases the following news headline: "Infertility clinic falsely advertises better success rates". Comment on the appropriateness of this statement. 66CDC. 2008 Assisted Reproductive Technology Report. Hypothesis testing for two proportions 6.51 Social experiment, Part II. Exercise 6.23 introduces a "social experiment" conducted by a TV program that questioned what people do when they see a very obviously bruised woman getting picked on by her boyfriend. On two different occasions at the same restaurant, the same couple was depicted. In one scenario the woman was dressed "provocatively" and in the other scenario the woman was dressed "conservatively". The table below shows how many restaurant diners were present under each scenario, and whether or not they intervened. Scenario Provocative Conservative Total Yes No 5 15 15 10 20 25 Total 20 25 45 A simulation was conducted to test if people react differently under the two scenarios. 10,000 simulated differences were generated to construct the null distribution shown. 
The value $\hat {p}_{pr;sim}$ represents the proportion of diners who intervened in the simulation for the provocatively dressed woman, and $\hat {p}_{con;sim}$ is the proportion for the conservatively dressed woman. (a) What are the hypotheses? For the purposes of this exercise, you may assume that each observed person at the restaurant behaved independently, though we would want to evaluate this assumption more rigorously if we were reporting these results. (b) Calculate the observed difference between the rates of intervention under the provocative and conservative scenarios: $\hat {p}_{pr} - \hat {p}_{con}$. (c) Estimate the p-value using the gure above and determine the conclusion of the hypothesis test. 6.52 Is yawning contagious? An experiment conducted by the MythBusters, a science entertainment TV program on the Discovery Channel, tested if a person can be subconsciously inuenced into yawning if another person near them yawns. 50 people were randomly assigned to two groups: 34 to a group where a person near them yawned (treatment) and 16 to a group where there wasn't a person yawning near them (control). The following table shows the results of this experiment.67 Group Treatment Control Total Yawn Not Yawn 10 24 4 12 14 36 Total 34 16 50 A simulation was conducted to understand the distribution of the test statistic under the assumption of independence: having someone yawn near another person has no inuence on if the other person will yawn. In order to conduct the simulation, a researcher wrote yawn on 14 index cards and not yawn on 36 index cards to indicate whether or not a person yawned. Then he shuffled the cards and dealt them into two groups of size 34 and 16 for treatment and control, respectively. He counted how many participants in each simulated group yawned in an apparent response to a nearby yawning person, and calculated the difference between the simulated proportions of yawning as $\hat {p}_{trtmt;sim} - \hat {p}_{ctrl;sim}$. This simulation was repeated 10,000 times using software to obtain 10,000 differences that are due to chance alone. The histogram shows the distribution of the simulated differences. (a) What are the hypotheses? (b) Calculate the observed difference between the yawning rates under the two scenarios. (c) Estimate the p-value using the gure above and determine the conclusion of the hypothesis test. 67MythBusters, Season 3, Episode 28. Contributors David M Diez (Google/YouTube), Christopher D Barr (Harvard School of Public Health), Mine Çetinkaya-Rundel (Duke University)
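For readers who want to try the card-shuffling simulation described in Exercises 6.51 and 6.52 on a computer, here is one possible sketch. The book does not supply code; this version assumes Python with NumPy and uses the counts from Exercise 6.52 (14 "yawn" cards, 36 "not yawn" cards, groups of 34 and 16).

```python
import numpy as np

# Sketch of the card-shuffling simulation from Exercise 6.52 (not textbook code).
rng = np.random.default_rng(1)

# 14 "yawn" outcomes and 36 "not yawn" outcomes, pooled across both groups.
outcomes = np.array([1] * 14 + [0] * 36)

n_trt, n_ctrl = 34, 16
obs_diff = 10 / 34 - 4 / 16          # observed p_trtmt - p_ctrl, about 0.044

diffs = np.empty(10_000)
for i in range(10_000):
    rng.shuffle(outcomes)                     # deal the cards at random
    trt, ctrl = outcomes[:n_trt], outcomes[n_trt:]
    diffs[i] = trt.mean() - ctrl.mean()       # simulated difference under H0

# One-sided illustration: proportion of simulated differences at least as
# large as the observed difference (double it for a two-sided alternative).
p_value = np.mean(diffs >= obs_diff)
print(round(obs_diff, 3), round(p_value, 3))
```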
Linear regression is a very powerful statistical technique. Many people have some familiarity with regression just from reading the news, where graphs with straight lines are overlaid on scatterplots. Linear models can be used for prediction or to evaluate whether there is a linear relationship between two numerical variables.

• 7.1: Prelude to Linear Regression - Imagine what a perfect linear relationship would mean: you would know the exact value of y just by knowing the value of x. This is unrealistic in almost any natural process. For example, if we took family income x, this value would provide some useful information about how much financial support y a college may offer a prospective student. However, there would still be variability in financial support, even when comparing students whose families have similar financial backgrounds.
• 7.2: Line Fitting, Residuals, and Correlation - In this section, we examine criteria for identifying a linear model and introduce a new statistic, correlation.
• 7.3: Fitting a Line by Least Squares Regression - Fitting linear models by eye is open to criticism since it is based on an individual preference. In this section, we use least squares regression as a more rigorous approach.
• 7.4: Types of Outliers in Linear Regression - In this section, we identify criteria for determining which outliers are important and influential. Outliers in regression are observations that fall far from the "cloud" of points. These points are especially important because they can have a strong influence on the least squares line.
• 7.5: Inference for Linear Regression - In this section we discuss uncertainty in the estimates of the slope and y-intercept for a regression line. Just as we identified standard errors for point estimates in previous chapters, we first discuss standard errors for these new estimates. However, in the case of regression, we will identify standard errors using statistical software.
• 7.6: Exercises - Exercises for Chapter 7 of the "OpenIntro Statistics" textmap by Diez, Barr and Çetinkaya-Rundel.

07: Introduction to Linear Regression

Linear regression is a very powerful statistical technique. Many people have some familiarity with regression just from reading the news, where graphs with straight lines are overlaid on scatterplots. Linear models can be used for prediction or to evaluate whether there is a linear relationship between two numerical variables. Figure $1$ shows two variables whose relationship can be modeled perfectly with a straight line. The equation for the line is

$y = 5 + 57.49x$

Imagine what a perfect linear relationship would mean: you would know the exact value of $y$ just by knowing the value of $x$. This is unrealistic in almost any natural process. For example, if we took family income $x$, this value would provide some useful information about how much financial support $y$ a college may offer a prospective student. However, there would still be variability in financial support, even when comparing students whose families have similar financial backgrounds.

Linear regression assumes that the relationship between two variables, $x$ and $y$, can be modeled by a straight line:

$y = \beta _0 + \beta _1x \label{7.1}$

where $\beta _0$ and $\beta _1$ represent two model parameters ($\beta$ is the Greek letter beta). These parameters are estimated using data, and we write their point estimates as $b_0$ and $b_1$. When we use $x$ to predict $y$, we usually call $x$ the explanatory or predictor variable, and we call $y$ the response.
It is rare for all of the data to fall on a straight line, as seen in the three scatterplots in Figure $2$. In each case, the data fall around a straight line, even if none of the observations fall exactly on the line. The first plot shows a relatively strong downward linear trend, where the remaining variability in the data around the line is minor relative to the strength of the relationship between $x$ and $y$. The second plot shows an upward trend that, while evident, is not as strong as the first. The last plot shows a very weak downward trend in the data, so slight we can hardly notice it. In each of these examples, we will have some uncertainty regarding our estimates of the model parameters, $\beta _0$ and $\beta _1$. For instance, we might wonder, should we move the line up or down a little, or should we tilt it more or less? As we move forward in this chapter, we will learn different criteria for line-fitting, and we will also learn about the uncertainty associated with estimates of model parameters.
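The idea that real data scatter around, rather than fall exactly on, a line can be made concrete by simulating observations from a linear model plus noise. The sketch below is illustrative only: the intercept, slope, and noise level are made-up values loosely inspired by the family income and financial aid example, and Python/NumPy is simply a convenient tool, not something used in the text.

```python
import numpy as np

# Illustrative sketch: data that follow a linear trend plus random noise.
rng = np.random.default_rng(0)

beta0, beta1 = 24.0, -0.04                    # hypothetical intercept and slope
x = rng.uniform(0, 250, size=50)              # e.g. family income, in $1000s
y = beta0 + beta1 * x + rng.normal(0, 4, 50)  # observations scatter around the line

# Without the noise term, y would be determined exactly by x; the noise is
# what makes real observations fall around, rather than on, the line.
print(np.corrcoef(x, y)[0, 1])
```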
We will also see examples in this chapter where fitting a straight line to the data, even if there is a clear relationship between the variables, is not helpful. One such case is shown in Figure $1$ where there is a very strong relationship between the variables even though the trend is not linear. We will discuss nonlinear trends in this chapter and the next, but the details of fitting nonlinear models are discussed elsewhere.

In this section, we examine criteria for identifying a linear model and introduce a new statistic, correlation.

Beginning with Straight Lines

Scatterplots were introduced in Chapter 1 as a graphical technique to present two numerical variables simultaneously. Such plots permit the relationship between the variables to be examined with ease. Figure $2$ shows a scatterplot for the head length and total length of 104 brushtail possums from Australia. Each point represents a single possum from the data. The head and total length variables are associated. Possums with an above average total length also tend to have above average head lengths. While the relationship is not perfectly linear, it could be helpful to partially explain the connection between these variables with a straight line.

Straight lines should only be used when the data appear to have a linear relationship, such as the case shown in the left panel of Figure $4$. The right panel of Figure $4$ shows a case where a curved line would be more useful in understanding the relationship between the two variables.

Caution: Watch out for curved trends
We only consider models based on straight lines in this chapter. If data show a nonlinear trend, like that in the right panel of Figure $4$, more advanced techniques should be used.

Fitting a Line "By Eye"

We want to describe the relationship between the head length and total length variables in the possum data set using a line. In this example, we will use the total length as the predictor variable, $x$, to predict a possum's head length, $y$. We could fit the linear relationship by eye, as in Figure $5$. The equation for this line is

$\hat {y} = 41 + 0.59x \tag {7.2}$

We can use this line to discuss properties of possums. For instance, the equation predicts a possum with a total length of 80 cm will have a head length of

\begin{align} \hat {y} &= 41 + 0.59 \times 80 \\[5pt] &= 88.2 \end{align}

A "hat" on y is used to signify that this is an estimate. This estimate may be viewed as an average: the equation predicts that possums with a total length of 80 cm will have an average head length of 88.2 mm. Absent further information about an 80 cm possum, the prediction for head length that uses the average is a reasonable estimate.

Residuals

Residuals are the leftover variation in the data after accounting for the model fit:

$\text {Data} = \text {Fit + Residual}$

Each observation will have a residual. If an observation is above the regression line, then its residual, the vertical distance from the observation to the line, is positive. Observations below the line have negative residuals. One goal in picking the right linear model is for these residuals to be as small as possible.

Three observations are noted specially in Figure $5$. The observation marked by an "X" has a small, negative residual of about -1; the observation marked by "+" has a large residual of about +7; and the observation marked by $\Delta$ has a moderate residual of about -4. The size of a residual is usually discussed in terms of its absolute value.
For example, the residual for $\Delta$ is larger than that of "X" because $|-4|$ is larger than $|-1|$.

Residual: difference between observed and expected
The residual of the $i^{th}$ observation ($x_i, y_i$) is the difference of the observed response ($y_i$) and the response we would predict based on the model fit ($\hat {y}_i$):
$e_i = y_i - \hat {y}_i$
We typically identify $\hat {y}_i$ by plugging $x_i$ into the model.

Example $1$
The linear fit shown in Figure $5$ is given as $\hat {y} = 41 + 0.59x$. Based on this line, formally compute the residual of the observation (77.0, 85.3). This observation is denoted by "X" on the plot. Check it against the earlier visual estimate, -1.

Solution
We first compute the predicted value of point "X" based on the model:
$\hat {y}_x = 41 + 0.59x_x = 41 + 0.59 \times 77.0 = 86.4$
Next we compute the difference of the actual head length and the predicted head length:
$e_x = y_x - \hat {y}_x = 85.3 - 86.4 = -1.1$
This is very close to the visual estimate of -1.

Exercise $\PageIndex{1A}$
If a model underestimates an observation, will the residual be positive or negative? What about if it overestimates the observation?
Answer
If a model underestimates an observation, then the model estimate is below the actual. The residual, which is the actual observation value minus the model estimate, must then be positive. The opposite is true when the model overestimates the observation: the residual is negative.

Exercise $\PageIndex{1B}$
Compute the residuals for the observations (85.0, 98.6) ("+" in Figure $5$) and (95.5, 94.0) ("$\Delta$") using the linear relationship $\hat {y} = 41 + 0.59x.$
Answer
(+) First compute the predicted value based on the model:
$\hat {y}_+ = 41 + 0.59x_+ = 41 + 0.59 \times 85.0 = 91.15$
Then the residual is given by
$e_+ = y_+ - \hat {y}_+ = 98.6 - 91.15 = 7.45$
This was close to the earlier estimate of 7.
$(\Delta) \hat {y}_{\Delta} = 41 + 0.59x_{\Delta} = 97.3. e_{\Delta} = y_{\Delta} - \hat {y}_{\Delta} = -3.3$, close to the estimate of -4.

Residuals are helpful in evaluating how well a linear model fits a data set. We often display them in a residual plot such as the one shown in Figure $6$ for the regression line in Figure $5$. The residuals are plotted at their original horizontal locations but with the vertical coordinate as the residual. For instance, the point (85.0, 98.6), marked by "+", had a residual of 7.45, so in the residual plot it is placed at (85.0, 7.45). Creating a residual plot is sort of like tipping the scatterplot over so the regression line is horizontal.

Example $2$
One purpose of residual plots is to identify characteristics or patterns still apparent in data after fitting a model. Figure $7$ shows three scatterplots with linear models in the first row and residual plots in the second row. Can you identify any patterns remaining in the residuals?

Solution
In the first data set (first column), the residuals show no obvious patterns. The residuals appear to be scattered randomly around the dashed line that represents 0. The second data set shows a pattern in the residuals. There is some curvature in the scatterplot, which is more obvious in the residual plot. We should not use a straight line to model these data. Instead, a more advanced technique should be used. The last plot shows very little upwards trend, and the residuals also show no obvious patterns. It is reasonable to try to fit a linear model to the data.
However, it is unclear whether there is statistically significant evidence that the slope parameter is different from zero. The point estimate of the slope parameter, labeled $b_1$, is not zero, but we might wonder if this could just be due to chance. We will address this sort of scenario in Section 7.5.

Describing Linear Relationships with Correlation

We can compute the correlation using a formula, just as we did with the sample mean and standard deviation. However, this formula is rather complex, so we generally perform the calculations on a computer or calculator. Figure $8$ shows eight plots and their corresponding correlations. Only when the relationship is perfectly linear is the correlation either -1 or 1. If the relationship is strong and positive, the correlation will be near +1. If it is strong and negative, it will be near -1. If there is no apparent linear relationship between the variables, then the correlation will be near zero.

Formally, we can compute the correlation for observations $(x_1, y_1), (x_2, y_2),\dots, (x_n, y_n)$ using the formula

$R = \frac {1}{n - 1} \sum \limits ^n_{i=1} \frac {x_i - \bar {x}}{s_x} \frac {y_i - \bar {y}}{s_y}$

where $\bar {x}, \bar {y}, s_x$, and $s_y$ are the sample means and standard deviations for each variable.

Correlation: strength of a linear relationship
Correlation, which always takes values between -1 and 1, describes the strength of the linear relationship between two variables. We denote the correlation by R.

The correlation is intended to quantify the strength of a linear trend. Nonlinear trends, even when strong, sometimes produce correlations that do not reflect the strength of the relationship; see three such examples in Figure $9$.

Exercise $1$
It appears no straight line would fit any of the datasets represented in Figure $9$. Try drawing nonlinear curves on each plot. Once you create a curve for each, describe what is important in your fit.
Answer
We'll leave it to you to draw the lines. In general, the lines you draw should be close to most points and reflect overall trends in the data.
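The correlation formula above translates directly into code. The following sketch assumes Python with NumPy and uses a small set of made-up (x, y) pairs; it is not the possum data set.

```python
import numpy as np

# Minimal sketch of the correlation formula R = 1/(n-1) * sum(z_x * z_y).
def correlation(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    zx = (x - x.mean()) / x.std(ddof=1)   # (x_i - xbar) / s_x
    zy = (y - y.mean()) / y.std(ddof=1)   # (y_i - ybar) / s_y
    return np.sum(zx * zy) / (n - 1)

# Hypothetical (total length, head length) pairs for illustration only.
x = [75.0, 77.0, 80.0, 85.0, 89.0, 95.5]
y = [84.0, 85.3, 88.0, 98.6, 93.0, 94.0]

print(round(correlation(x, y), 3))        # agrees with np.corrcoef(x, y)[0, 1]
```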
Fitting linear models by eye is open to criticism since it is based on an individual preference. In this section, we use least squares regression as a more rigorous approach.

This section considers family income and gift aid data from a random sample of fifty students in the 2011 freshman class of Elmhurst College in Illinois. Gift aid is financial aid that is a gift, as opposed to a loan. A scatterplot of the data is shown in Figure $1$ along with two linear fits. The lines follow a negative trend in the data; students who have higher family incomes tended to have lower gift aid from the university.

Exercise $1$
Is the correlation positive or negative in Figure $1$?
Solution
Larger family incomes are associated with lower amounts of aid, so the correlation will be negative. Using a computer, the correlation can be computed: -0.499.

An Objective Measure for Finding the Best Line

We begin by thinking about what we mean by "best". Mathematically, we want a line that has small residuals. Perhaps our criterion could minimize the sum of the residual magnitudes:

$|e_1| + |e_2| + \dots + |e_n| \label{7.9}$

which we could accomplish with a computer program. The resulting dashed line shown in Figure $1$ demonstrates this fit can be quite reasonable. However, a more common practice is to choose the line that minimizes the sum of the squared residuals:

$e^2_1 + e^2_2 +\dots + e^2_n \label {7.10}$

The line that minimizes this least squares criterion is represented as the solid line in Figure $1$. This is commonly called the least squares line. The following are three possible reasons to choose Criterion \ref{7.10} over Criterion \ref{7.9}:
1. It is the most commonly used method.
2. Computing the line based on Criterion \ref{7.10} is much easier by hand and in most statistical software.
3. In many applications, a residual twice as large as another residual is more than twice as bad. For example, being off by 4 is usually more than twice as bad as being off by 2; squaring the residuals accounts for this discrepancy.
The first two reasons are largely for tradition and convenience; the last reason explains why Criterion \ref{7.10} is typically most helpful. There are applications where Criterion \ref{7.9} may be more useful, and there are plenty of other criteria we might consider. However, this book only applies the least squares criterion.

Conditions for the Least Squares Line

When fitting a least squares line, we generally require
• Linearity. The data should show a linear trend. If there is a nonlinear trend (e.g. left panel of Figure $2$), an advanced regression method from another book or later course should be applied.
• Nearly normal residuals. Generally the residuals must be nearly normal. When this condition is found to be unreasonable, it is usually because of outliers or concerns about influential points, which we will discuss in greater depth in Section 7.4. An example of non-normal residuals is shown in the second panel of Figure $2$.
• Constant variability. The variability of points around the least squares line remains roughly constant. An example of non-constant variability is shown in the third panel of Figure $2$.
Be cautious about applying regression to data collected sequentially in what is called a time series. Such data may have an underlying structure that should be considered in a model and analysis. There are other instances where correlations within the data are important. This topic will be further discussed in Chapter 8.
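To make Criteria (7.9) and (7.10) concrete, the sketch below evaluates both quantities for two candidate lines on made-up data; the least squares line is the one that makes the second quantity as small as possible. Python/NumPy is assumed here purely for convenience.

```python
import numpy as np

# Sketch: evaluate the two line-fitting criteria for a candidate line
# y-hat = b0 + b1 * x, using made-up (x, y) data.
x = np.array([50.0, 80.0, 110.0, 140.0, 200.0])
y = np.array([22.0, 21.0, 18.5, 16.0, 12.0])

def criteria(b0, b1):
    residuals = y - (b0 + b1 * x)
    return np.sum(np.abs(residuals)), np.sum(residuals ** 2)

# The least squares line minimizes the second quantity; a line fit by eye
# will typically have a larger sum of squared residuals.
print(criteria(25.0, -0.065))
print(criteria(24.0, -0.060))
```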
Exercise $2$
Should we have concerns about applying least squares regression to the Elmhurst data in Figure $1$?
Solution
The trend appears to be linear, the data fall around the line with no obvious outliers, the variance is roughly constant. These are also not time series observations. Least squares regression can be applied to these data.

Finding the Least Squares Line

For the Elmhurst data, we could write the equation of the least squares regression line as

$\hat {aid} = \beta _0 + \beta _1\times \text { family income}$

Here the equation is set up to predict gift aid based on a student's family income, which would be useful to students considering Elmhurst. These two values, $\beta _0$ and $\beta _1$, are the parameters of the regression line. As in Chapters 4-6, the parameters are estimated using observed data. In practice, this estimation is done using a computer in the same way that other estimates, like a sample mean, can be estimated using a computer or calculator. However, we can also find the parameter estimates by applying two properties of the least squares line:
• The slope of the least squares line can be estimated by $b_1 = \dfrac {s_y}{s_x} R \label{7.12}$ where R is the correlation between the two variables, and $s_x$ and $s_y$ are the sample standard deviations of the explanatory variable and response, respectively.
• If $\bar {x}$ is the mean of the horizontal variable (from the data) and $\bar {y}$ is the mean of the vertical variable, then the point ($\bar {x}, \bar {y}$) is on the least squares line.
We use $b_0$ and $b_1$ to represent the point estimates of the parameters $\beta _0$ and $\beta _1$.

Exercise $3$
Table 7.14 shows the sample means for the family income and gift aid as $101,800 and $19,940, respectively. Plot the point (101.8, 19.94) on Figure $1$ to verify it falls on the least squares line (the solid line).9

Table 7.14: Summary statistics for family income and gift aid.

        family income, in $1000s ("x")    gift aid, in $1000s ("y")
mean    $\bar {x}$ = 101.8                $\bar {y}$ = 19.94
sd      $s_x$ = 63.2                      $s_y$ = 5.46
                        correlation: R = -0.499

9If you need help finding this location, draw a straight line up from the x-value of 100 (or thereabout). Then draw a horizontal line at 20 (or thereabout). These lines should intersect on the least squares line.

Exercise $4$
Using the summary statistics in Table 7.14, compute the slope for the regression line of gift aid against family income.
Hint: Apply Equation \ref{7.12} with the summary statistics from Table 7.14 to compute the slope:
$b_1 = \dfrac {s_y}{s_x} R = \dfrac {5.46}{63.2} (-0.499) = -0.0431$

You might recall the point-slope form of a line from math class (another common form is slope-intercept). Given the slope of a line and a point on the line, ($x_0, y_0$), the equation for the line can be written as

$y - y_0 = \text {slope} \times (x - x_0) \label {7.15}$

A common exercise to become more familiar with foundations of least squares regression is to use basic summary statistics and point-slope form to produce the least squares line.

TIP: Identifying the least squares line from summary statistics
To identify the least squares line from summary statistics:
• Estimate the slope parameter, $b_1$, using Equation \ref{7.12}.
• Noting that the point ($\bar {x}, \bar {y}$) is on the least squares line, use $x_0 = \bar {x}$ and $y_0 = \bar {y}$ along with the slope $b_1$ in the point-slope equation: $y - \bar {y} = b_1(x - \bar {x})$
• Simplify the equation.
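Following the TIP above, the slope and intercept can be computed directly from the summary statistics in Table 7.14. Here is a short sketch; the choice of Python is ours, not the book's.

```python
# Sketch of the TIP above, using the summary statistics in Table 7.14.
x_bar, s_x = 101.8, 63.2     # family income, in $1000s
y_bar, s_y = 19.94, 5.46     # gift aid, in $1000s
R = -0.499

b1 = (s_y / s_x) * R         # slope, Equation (7.12)
b0 = y_bar - b1 * x_bar      # intercept, from point-slope form at (x_bar, y_bar)

print(round(b1, 4), round(b0, 1))   # approximately -0.0431 and 24.3
```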
Example $1$
Using the point (101.8, 19.94) from the sample means and the slope estimate $b_1 = -0.0431$ from Exercise $4$, find the least-squares line for predicting aid based on family income.

Solution
Apply the point-slope equation using (101.8, 19.94) and the slope $b_1 = -0.0431$:
$y - y_0 = b_1(x - x_0)$
$y - 19.94 = -0.0431(x - 101.8)$
Expanding the right side and then adding 19.94 to each side, the equation simplifies:
$\hat {aid} = 24.3 - 0.0431\times \text { family income}$
Here we have replaced y with $\hat {aid}$ and x with family income to put the equation in context.

We mentioned earlier that a computer is usually used to compute the least squares line. A summary table based on computer output is shown in Table 7.15 for the Elmhurst data. The first column of numbers provides estimates for $b_0$ and $b_1$, respectively. Compare these to the result from Example $1$.

Table 7.15: Summary of the least squares fit for the Elmhurst data. Compare the parameter estimates in the first column to the results of Example $1$.

                 Estimate    Std. Error    t value    Pr(>|t|)
(Intercept)       24.3193      1.2915       18.83      0.0000
family_income     -0.0431      0.0108       -3.98      0.0002

Example $2$
Examine the second, third, and fourth columns in Table 7.15. Can you guess what they represent?

Solution
We'll describe the meaning of the columns using the second row, which corresponds to $\beta _1$. The first column provides the point estimate for $\beta _1$, as we calculated in an earlier example: -0.0431. The second column is a standard error for this point estimate: 0.0108. The third column is a t test statistic for the null hypothesis that $\beta _1 = 0$: $T = -3.98$. The last column is the p-value for the t test statistic for the null hypothesis $\beta _1 = 0$ and a two-sided alternative hypothesis: 0.0002. We will get into more of these details in Section 7.5.

Example $3$
Suppose a high school senior is considering Elmhurst College. Can she simply use the linear equation that we have estimated to calculate her financial aid from the university?

Solution
She may use it as an estimate, though some qualifiers on this approach are important. First, the data all come from one freshman class, and the way aid is determined by the university may change from year to year. Second, the equation will provide an imperfect estimate. While the linear equation is good at capturing the trend in the data, no individual student's aid will be perfectly predicted.

Interpreting Regression Line Parameter Estimates

Interpreting parameters in a regression model is often one of the most important steps in the analysis.

Example $4$
The slope and intercept estimates for the Elmhurst data are -0.0431 and 24.3. What do these numbers really mean?

Solution
Interpreting the slope parameter is helpful in almost any application. For each additional $1,000 of family income, we would expect a student to receive a net difference of $\$1,000 \times (-0.0431) = -\$43.10$ in aid on average, i.e. $43.10 less. Note that a higher family income corresponds to less aid because the coefficient of family income is negative in the model. We must be cautious in this interpretation: while there is a real association, we cannot interpret a causal connection between the variables because these data are observational. That is, increasing a student's family income may not cause the student's aid to drop. (It would be reasonable to contact the college and ask if the relationship is causal, i.e. if Elmhurst College's aid decisions are partially based on students' family income.)
The estimated intercept $b_0 = 24.3$ (in $1000s) describes the average aid if a student's family had no income. The meaning of the intercept is relevant to this application since the family income for some students at Elmhurst is $0. In other applications, the intercept may have little or no practical value if there are no observations where x is near zero.

Interpreting parameters estimated by least squares
The slope describes the estimated difference in the y variable if the explanatory variable x for a case happened to be one unit larger. The intercept describes the average outcome of y if x = 0 and the linear model is valid all the way to x = 0, which in many applications is not the case.

Extrapolation is Treacherous

"When those blizzards hit the East Coast this winter, it proved to my satisfaction that global warming was a fraud. That snow was freezing cold. But in an alarming trend, temperatures this spring have risen. Consider this: On February 6th it was 10 degrees. Today it hit almost 80. At this rate, by August it will be 220 degrees. So clearly folks the climate debate rages on."
Stephen Colbert11

11http://www.colbertnation.com/the-col...videos/269929/

Linear models can be used to approximate the relationship between two variables. However, these models have real limitations. Linear regression is simply a modeling framework. The truth is almost always much more complex than our simple line. For example, we do not know how the data outside of our limited window will behave.

Example $5$
Use the model $\hat {aid} = 24.3 - 0.0431 \times \text {family income}$ to estimate the aid of another freshman student whose family had income of $1 million.

Solution
Recall that the units of family income are in $1000s, so we want to calculate the aid for family income = 1000:
$24.3 - 0.0431 \times \text {family income} = 24.3 - 0.0431 \times 1000 = -18.8$
The model predicts this student will have -$18,800 in aid (!). Elmhurst College cannot (or at least does not) require any students to pay extra on top of tuition to attend.

Applying a model estimate to values outside of the realm of the original data is called extrapolation. Generally, a linear model is only an approximation of the real relationship between two variables. If we extrapolate, we are making an unreliable bet that the approximate linear relationship will be valid in places where it has not been analyzed.

Using $R^2$ to describe the strength of a fit

We evaluated the strength of the linear relationship between two variables earlier using the correlation, R. However, it is more common to explain the strength of a linear fit using $R^2$, called R-squared. If provided with a linear model, we might like to describe how closely the data cluster around the linear fit.

The $R^2$ of a linear model describes the amount of variation in the response that is explained by the least squares line. For example, consider the Elmhurst data, shown in Figure $1$. The variance of the response variable, aid received, is $s^2_{aid} = 29.8$. However, if we apply our least squares line, then this model reduces our uncertainty in predicting aid using a student's family income. The variability in the residuals describes how much variation remains after using the model: $s^2_{RES} = 22.4$. In short, there was a reduction of

$\dfrac {s^2_{aid} - s^2_{RES}}{s^2_{aid}} = \dfrac {29.8 - 22.4}{29.8} = \dfrac {7.4}{29.8} = 0.25$

or about 25% in the data's variation by using information about family income for predicting aid using a linear model.
This corresponds exactly to the R-squared value:

$R = -0.499, \quad R^2 = 0.25$

Exercise $5$
If a linear model has a very strong negative relationship with a correlation of -0.97, how much of the variation in the response is explained by the explanatory variable?12

Categorical Predictors with two Levels

Categorical variables are also useful in predicting outcomes. Here we consider a categorical predictor with two levels (recall that a level is the same as a category). We'll consider Ebay auctions for a video game, Mario Kart for the Nintendo Wii, where both the total price of the auction and the condition of the game were recorded.13 Here we want to predict total price based on game condition, which takes values used and new. A plot of the auction data is shown in Figure 7.17.

To incorporate the game condition variable into a regression equation, we must convert the categories into a numerical form. We will do so using an indicator variable called cond_new, which takes value 1 when the game is new and 0 when the game is used. Using this indicator variable, the linear model may be written as

$\hat {price} = \beta _0 + \beta _1 \times \text {cond new}$

12About $R^2 = (-0.97)^2 = 0.94$ or 94% of the variation is explained by the linear model.
13These data were collected in Fall 2009 and may be found at openintro.org.

Table 7.18: Least squares regression summary for the final auction price against the condition of the game.

              Estimate    Std. Error    t value    Pr(>|t|)
(Intercept)     42.87        0.81         52.67      0.0000
cond_new        10.90        1.26          8.66      0.0000

The fitted model is summarized in Table 7.18, and the model with its parameter estimates is given as

$\hat {price} = 42.87 + 10.90 \times \text {cond new}$

For categorical predictors with just two levels, the linearity assumption will always be satisfied. However, we must evaluate whether the residuals in each group are approximately normal and have approximately equal variance. As can be seen in Figure 7.17, both of these conditions are reasonably satisfied by the auction data.

Example $6$
Interpret the two parameters estimated in the model for the price of Mario Kart in eBay auctions.

Solution
The intercept is the estimated price when cond_new takes value 0, i.e. when the game is in used condition. That is, the average selling price of a used version of the game is \$42.87. The slope indicates that, on average, new games sell for about \$10.90 more than used games.

TIP: Interpreting model estimates for categorical predictors.
The estimated intercept is the value of the response variable for the first category (i.e. the category corresponding to an indicator value of 0). The estimated slope is the average change in the response variable between the two categories.

We'll elaborate further on this Ebay auction data in Chapter 8, where we examine the influence of many predictor variables simultaneously using multiple regression. In multiple regression, we will consider the association of auction price with regard to each variable while controlling for the influence of other variables. This is especially important since some of the predictors are associated. For example, auctions with games in new condition also often came with more accessories.
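Once the indicator variable is defined, predictions from the fitted model reduce to plugging in 0 or 1. Below is a minimal sketch using the estimates from Table 7.18; Python is assumed purely for illustration.

```python
# Sketch: predictions from the fitted indicator-variable model above.
b0, b1 = 42.87, 10.90          # estimates from Table 7.18

def predicted_price(cond_new):
    # cond_new is 1 for a new game and 0 for a used game
    return b0 + b1 * cond_new

print(predicted_price(0))      # average price of used games: 42.87
print(predicted_price(1))      # average price of new games: 53.77
```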
In this section, we identify criteria for determining which outliers are important and influential. Outliers in regression are observations that fall far from the "cloud" of points. These points are especially important because they can have a strong influence on the least squares line.

Example \(1\)
There are six plots shown in Figure \(1\) along with the least squares line and residual plots. For each scatterplot and residual plot pair, identify any obvious outliers and note how they influence the least squares line. Recall that an outlier is any point that doesn't appear to belong with the vast majority of the other points.
1. There is one outlier far from the other points, though it only appears to slightly influence the line.
2. There is one outlier on the right, though it is quite close to the least squares line, which suggests it wasn't very influential.
3. There is one point far away from the cloud, and this outlier appears to pull the least squares line up on the right; examine how the line around the primary cloud doesn't appear to fit very well.
4. There is a primary cloud and then a small secondary cloud of four outliers. The secondary cloud appears to be influencing the line somewhat strongly, making the least squares line fit poorly almost everywhere. There might be an interesting explanation for the dual clouds, which is something that could be investigated.
5. There is no obvious trend in the main cloud of points and the outlier on the right appears to largely control the slope of the least squares line.
6. There is one outlier far from the cloud, however, it falls quite close to the least squares line and does not appear to be very influential.

Examine the residual plots in Figure \(1\). You will probably find that there is some trend in the main clouds of (3) and (4). In these cases, the outliers influenced the slope of the least squares lines. In (5), data with no clear trend were assigned a line with a large trend simply due to one outlier (!).

Definition: Leverage
Points that fall horizontally away from the center of the cloud tend to pull harder on the line, so we call them points with high leverage.

Points that fall horizontally far from the line are points of high leverage; these points can strongly influence the slope of the least squares line. If one of these high leverage points does appear to actually invoke its influence on the slope of the line (as in cases (3), (4), and (5) of Example \(1\)) then we call it an influential point. Usually we can say a point is influential if, had we plotted the line without it, the influential point would have been unusually far from the least squares line.

It is tempting to remove outliers. Do not do this without a very good reason. Models that ignore exceptional (and interesting) cases often perform poorly. For instance, if a financial firm ignored the largest market swings - the "outliers" - they would soon go bankrupt by making poorly thought-out investments.

Caution: Don't ignore outliers when fitting a final model
If there are outliers in the data, they should not be removed or ignored without a good reason. Whatever final model is fit to the data would not be very helpful if it ignores the most exceptional cases.

Caution: Outliers for a categorical predictor with two levels
Be cautious about using a categorical predictor when one of the levels has very few observations. When this happens, those few observations become influential points.
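One informal way to check whether a high leverage point is influential is to fit the least squares line with and without that point and compare the slopes. The sketch below does this on made-up data using Python/NumPy; it illustrates the idea rather than giving a formal diagnostic.

```python
import numpy as np

# Sketch: how a single high-leverage point can change the least squares slope.
rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 30)
y = 2 + 0.5 * x + rng.normal(0, 1, 30)     # main cloud with a mild upward trend

# Add one point far to the right that sits well below the trend.
x_out = np.append(x, 30.0)
y_out = np.append(y, 5.0)

slope_without, _ = np.polyfit(x, y, 1)
slope_with, _ = np.polyfit(x_out, y_out, 1)

# If the two slopes differ substantially, the added point is influential.
print(round(slope_without, 2), round(slope_with, 2))
```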
textbooks/stats/Introductory_Statistics/OpenIntro_Statistics_(Diez_et_al)./07%3A_Introduction_to_Linear_Regression/7.04%3A_Types_of_Outliers_in_Linear_Regression.txt
In this section we discuss uncertainty in the estimates of the slope and y-intercept for a regression line. Just as we identified standard errors for point estimates in previous chapters, we first discuss standard errors for these new estimates. However, in the case of regression, we will identify standard errors using statistical software. Midterm elections and unemployment Elections for members of the United States House of Representatives occur every two years, coinciding every four years with the U.S. Presidential election. The set of House elections occurring during the middle of a Presidential term are called midterm elections. In America's two-party system, one political theory suggests the higher the unemployment rate, the worse the President's party will do in the midterm elections. To assess the validity of this claim, we can compile historical data and look for a connection. We consider every midterm election from 1898 to 2010, with the exception of those elections during the Great Depression. Figure $1$ shows these data and the least-squares regression line: $\text {% change in House seats for President's party}$ $= -6.71 - 1.00 \times \text {(unemployment rate)}$ We consider the percent change in the number of seats of the President's party (e.g. percent change in the number of seats for Democrats in 2010) against the unemployment rate. Examining the data, there are no clear deviations from linearity, the constant variance condition, or in the normality of residuals (though we don't examine a normal probability plot here). While the data are collected sequentially, a separate analysis was used to check for any apparent correlation between successive observations; no such correlation was found. Exercise $1$ The data for the Great Depression (1934 and 1938) were removed because the unemployment rate was 21% and 18%, respectively. Do you agree that they should be removed for this investigation? Why or why not? Answer We will provide two considerations. Each of these points would have very high leverage on any least-squares regression line, and years with such high unemployment may not help us understand what would happen in other years where the unemployment is only modestly high. On the other hand, these are exceptional cases, and we would be discarding important information if we exclude them from a final analysis. There is a negative slope in the line shown in Figure $1$. However, this slope (and the y-intercept) are only estimates of the parameter values. We might wonder, is this convincing evidence that the "true" linear model has a negative slope? That is, do the data provide strong evidence that the political theory is accurate? We can frame this investigation into a one-sided statistical hypothesis test: • H0: $\beta _1 = 0$. The true linear model has slope zero. • HA: $\beta _1 < 0$. The true linear model has a slope less than zero. The higher the unemployment, the greater the losses for the President's party in the House of Representatives. We would reject H0 in favor of HA if the data provide strong evidence that the true slope parameter is less than zero. To assess the hypotheses, we identify a standard error for the estimate, compute an appropriate test statistic, and identify the p-value. Understanding regression output from software Just like other point estimates we have seen before, we can compute a standard error and test statistic for $\beta_1$. We will generally label the test statistic using a T, since it follows the t distribution. 
We will rely on statistical software to compute the standard error and leave the explanation of how this standard error is determined to a second or third statistics course. Table $1$ shows software output for the least squares regression line in Figure $1$. The row labeled unemp represents the information for the slope, which is the coefficient of the unemployment variable. Table $1$: Output from statistical software for the regression line modeling the midterm election losses for the President's party as a response to unemployment.

              Estimate   Std. Error   t value   Pr(>|t|)
(Intercept)   -6.7142    5.4567       -1.23     0.2300
unemp         -1.0010    0.8717       -1.15     0.2617

Example $2$ What do the first and second columns of Table $1$ represent? Solution The entries in the first column represent the least squares estimates, $b_0$ and $b_1$, and the values in the second column correspond to the standard errors of each estimate. We previously used a t test statistic for hypothesis testing in the context of numerical data. Regression is very similar. In the hypotheses we consider, the null value for the slope is 0, so we can compute the test statistic using the T (or Z) score formula: $T = \frac {\text {estimate - null value}}{SE} = \frac {-1.0010 - 0}{0.8717} = -1.15 \nonumber$ We can look for the one-sided p-value - shown in Figure $2$ - using the probability table for the t distribution in Appendix B.2. Exercise $2$ Table $1$ offers the degrees of freedom for the test statistic T: df = 25. Identify the p-value for the hypothesis test. Answer Looking in the 25 degrees of freedom row in Appendix B.2, we see that the absolute value of the test statistic is smaller than any value listed, which means the tail area and therefore also the p-value is larger than 0.100 (one tail!). Because the p-value is so large, we fail to reject the null hypothesis. That is, the data do not provide convincing evidence that a higher unemployment rate has any correspondence with smaller or larger losses for the President's party in the House of Representatives in midterm elections. We could have identified the t test statistic from the software output in Table $1$, shown in the second row (unemp) and third column (t value). The entry in the second row and last column in Table $1$ represents the p-value for the two-sided hypothesis test where the null value is zero. The corresponding one-sided test would have a p-value half of the listed value. Inference for regression We usually rely on statistical software to identify point estimates and standard errors for parameters of a regression line. After verifying conditions hold for fitting a line, we can use the methods learned in Section 5.3 for the t distribution to create confidence intervals for regression parameters or to evaluate hypothesis tests. Caution: Don't carelessly use the p-value from regression output The last column in regression output often lists p-values for one particular hypothesis: a two-sided test where the null value is zero. If your test is one-sided and the point estimate is in the direction of HA, then you can halve the software's p-value to get the one-tail area. If neither of these scenarios match your hypothesis test, be cautious about using the software output to obtain the p-value. Example $3$ Examine Figure 7.16, which relates the Elmhurst College aid and student family income.
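The table lookup in Exercise 2 can be reproduced exactly with software. A minimal sketch (Python with scipy; the numbers are those from Table 1) computes the test statistic and the one-sided lower-tail p-value using a t distribution with df = 25 rather than the table in Appendix B.2.

```python
from scipy import stats

estimate, se, df = -1.0010, 0.8717, 25   # unemp row of Table 1
t_stat = (estimate - 0) / se             # null value for the slope is 0
p_one_sided = stats.t.cdf(t_stat, df)    # H_A: slope < 0, so use the lower tail

print(round(t_stat, 2))       # -1.15
print(round(p_one_sided, 2))  # about 0.13, larger than 0.10, so we fail to reject H0
```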
How sure are you that the slope is statistically significantly different from zero? That is, do you think a formal hypothesis test would reject the claim that the true slope of the line should be zero? Solution While the relationship between the variables is not perfect, there is an evident decreasing trend in the data. This suggests the hypothesis test will reject the null claim that the slope is zero. Exercise $3$ Table $2$ shows statistical software output from fitting the least squares regression line shown in Figure 7.16. Use this output to formally evaluate the following hypotheses. • H0: The true coefficient for family income is zero. • HA: The true coefficient for family income is not zero. Table $2$: Summary of least squares fit for the Elmhurst College data.

                Estimate   Std. Error   t value   Pr(>|t|)
(Intercept)     24.3193    1.2915       18.83     0.0000
family_income   -0.0431    0.0108       -3.98     0.0002

Answer We look in the second row corresponding to the family income variable. We see the point estimate of the slope of the line is -0.0431, the standard error of this estimate is 0.0108, and the t test statistic is -3.98. The p-value corresponds exactly to the two-sided test we are interested in: 0.0002. The p-value is so small that we reject the null hypothesis and conclude that family income and financial aid at Elmhurst College for freshman entering in the year 2011 are negatively correlated and the true slope parameter is indeed less than 0, just as we believed in Example 7.27. TIP: Always check assumptions If conditions for fitting the regression line do not hold, then the methods presented here should not be applied. The standard error or distribution assumption of the point estimate - assumed to be normal when applying the t test statistic - may not be valid. An alternative Test Statistic We considered the t test statistic as a way to evaluate the strength of evidence for a hypothesis test in Section 7.4.2. However, we could focus on $R^2$. Recall that $R^2$ described the proportion of variability in the response variable (y) explained by the explanatory variable (x). If this proportion is large, then this suggests a linear relationship exists between the variables. If this proportion is small, then the evidence provided by the data may not be convincing. This concept - considering the amount of variability in the response variable explained by the explanatory variable - is a key component in some statistical techniques. The analysis of variance (ANOVA) technique introduced in Section 5.5 uses this general principle. The method states that if enough variability is explained away by the categories, then we would conclude the mean varied between the categories. On the other hand, we might not be convinced if only a little variability is explained. ANOVA can be further employed in advanced regression modeling to evaluate the inclusion of explanatory variables, though we leave these details to a later course.
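The caution about software p-values can be captured in a small helper. The sketch below (Python; the function and its arguments are illustrative, not from the text) halves the reported two-sided p-value only when the point estimate falls in the direction of HA.

```python
def one_sided_p(estimate, two_sided_p, direction):
    """Convert a software two-sided p-value (null value 0) to a one-sided p-value.
    direction is "less" for H_A: beta < 0 and "greater" for H_A: beta > 0."""
    in_direction = (estimate < 0) if direction == "less" else (estimate > 0)
    # Halve only if the estimate points the same way as H_A; otherwise the
    # one-sided p-value is 1 minus half the two-sided value.
    return two_sided_p / 2 if in_direction else 1 - two_sided_p / 2

# Midterm-election slope from Table 1: estimate -1.0010, two-sided p-value 0.2617
print(one_sided_p(-1.0010, 0.2617, "less"))  # about 0.13, matching the table lookup
```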
textbooks/stats/Introductory_Statistics/OpenIntro_Statistics_(Diez_et_al)./07%3A_Introduction_to_Linear_Regression/7.05%3A_Inference_for_Linear_Regression.txt
Line fitting, residuals, and correlation 7.1 Visualize the residuals. The scatterplots shown below each have a superimposed regression line. If we were to construct a residual plot (residuals versus x) for each, describe what those plots would look like. 7.2 Trends in the residuals. Shown below are two plots of residuals remaining after fitting a linear model to two different sets of data. Describe important features and determine if a linear model would be appropriate for these data. Explain your reasoning. 7.3 Identify relationships, Part I. For each of the six plots, identify the strength of the relationship (e.g. weak, moderate, or strong) in the data and whether fitting a linear model would be reasonable. 7.4 Identify relationships, Part II. For each of the six plots, identify the strength of the relationship (e.g. weak, moderate, or strong) in the data and whether fitting a linear model would be reasonable. 7.5 The two scatterplots below show the relationship between final and mid-semester exam grades recorded during several years for a Statistics course at a university. (a) Based on these graphs, which of the two exams has the stronger correlation with the final exam grade? Explain. (b) Can you think of a reason why the correlation between the exam you chose in part (a) and the final exam is higher? 7.6 Husbands and wives, Part I. The Great Britain Office of Population Census and Surveys once collected data on a random sample of 170 married couples in Britain, recording the age (in years) and heights (converted here to inches) of the husbands and wives.16 The scatterplot on the left shows the wife's age plotted against her husband's age, and the plot on the right shows wife's height plotted against husband's height. (a) Describe the relationship between husbands' and wives' ages. (b) Describe the relationship between husbands' and wives' heights. (c) Which plot shows a stronger correlation? Explain your reasoning. (d) Data on heights were originally collected in centimeters, and then converted to inches. Does this conversion affect the correlation between husbands' and wives' heights? 7.7 Match the correlation, Part I. Match the calculated correlations to the corresponding scatterplot. (a) R = -0.7 (b) R = 0.45 (c) R = 0.06 (d) R = 0.92 7.8 Match the correlation, Part II. Match the calculated correlations to the corresponding scatterplot. (a) R = 0.49 (b) R = -0.48 (c) R = -0.03 (d) R = -0.85 16D.J. Hand. A handbook of small data sets. Chapman & Hall/CRC, 1994. 7.9 Speed and height. 1,302 UCLA students were asked to fill out a survey where they were asked about their height, fastest speed they have ever driven, and gender. The scatterplot on the left displays the relationship between height and fastest speed, and the scatterplot on the right displays the breakdown by gender in this relationship. (a) Describe the relationship between height and fastest speed. (b) Why do you think these variables are positively associated? (c) What role does gender play in the relationship between height and fastest driving speed? 7.10 Trees. The scatterplots below show the relationship between height, diameter, and volume of timber in 31 felled black cherry trees. The diameter of the tree is measured 4.5 feet above the ground.17 (a) Describe the relationship between volume and height of these trees. (b) Describe the relationship between volume and diameter of these trees. (c) Suppose you have height and diameter measurements for another black cherry tree.
Which of these variables would be preferable to use to predict the volume of timber in this tree using a simple linear regression model? Explain your reasoning. 17Source: R Dataset, http://stat.ethz.ch/R-manual/R-patch...tml/trees.html. 7.11 The Coast Starlight, Part I. The Coast Starlight Amtrak train runs from Seattle to Los Angeles. The scatterplot below displays the distance between each stop (in miles) and the amount of time it takes to travel from one stop to another (in minutes). (a) Describe the relationship between distance and travel time. (b) How would the relationship change if travel time was instead measured in hours, and distance was instead measured in kilometers? (c) Correlation between travel time (in minutes) and distance (in miles) is R = 0.636. What is the correlation between travel time (in hours) and distance (in kilometers)? 7.12 Crawling babies, Part I. A study conducted at the University of Denver investigated whether babies take longer to learn to crawl in cold months, when they are often bundled in clothes that restrict their movement, than in warmer months.18 Infants born during the study year were split into twelve groups, one for each birth month. We consider the average crawling age of babies in each group against the average temperature when the babies are six months old (that's when babies often begin trying to crawl). Temperature is measured in degrees Fahrenheit (°F) and age is measured in weeks. (a) Describe the relationship between temperature and crawling age. (b) How would the relationship change if temperature was measured in degrees Celsius (°C) and age was measured in months? (c) The correlation between temperature in °F and age in weeks was R = -0.70. If we converted the temperature to °C and age to months, what would the correlation be? 18J.B. Benson. "Season of birth and onset of locomotion: Theoretical and methodological implications". In: Infant behavior and development 16.1 (1993), pp. 69-81. issn: 0163-6383. 7.13 Body measurements, Part I. Researchers studying anthropometry collected body girth measurements and skeletal diameter measurements, as well as age, weight, height and gender for 507 physically active individuals.19 The scatterplot below shows the relationship between height and shoulder girth (over deltoid muscles), both measured in centimeters. (a) Describe the relationship between shoulder girth and height. (b) How would the relationship change if shoulder girth was measured in inches while the units of height remained in centimeters? 7.14 Body measurements, Part II. The scatterplot below shows the relationship between weight measured in kilograms and hip girth measured in centimeters from the data described in Exercise 7.13. (a) Describe the relationship between hip girth and weight. (b) How would the relationship change if weight was measured in pounds while the units for hip girth remained in centimeters? 7.15 Correlation, Part I. What would be the correlation between the ages of husbands and wives if men always married women who were (a) 3 years younger than themselves? (b) 2 years older than themselves? (c) half as old as themselves? 7.16 Correlation, Part II. What would be the correlation between the annual salaries of males and females at a company if for a certain type of position men always made (a) \$5,000 more than women? (b) 25% more than women? (c) 15% less than women? 19G. Heinz et al. "Exploring relationships in body dimensions". In: Journal of Statistics Education 11.2 (2003).
Fitting a line by least squares regression 7.17 Tourism spending. The Association of Turkish Travel Agencies reports the number of foreign tourists visiting Turkey and tourist spending by year.20 The scatterplot below shows the relationship between these two variables along with the least squares fit. (a) Describe the relationship between number of tourists and spending. (b) What are the explanatory and response variables? (c) Why might we want to fit a regression line to these data? (d) Do the data meet the conditions required for fitting a least squares line? In addition to the scatterplot, use the residual plot and histogram to answer this question. 7.18 Nutrition at Starbucks, Part I. The scatterplot below shows the relationship between the number of calories and amount of carbohydrates (in grams) Starbucks food menu items contain.21 Since Starbucks only lists the number of calories on the display items, we are interested in predicting the amount of carbs a menu item has based on its calorie content. (a) Describe the relationship between number of calories and amount of carbohydrates (in grams) that Starbucks food menu items contain. (b) In this scenario, what are the explanatory and response variables? (c) Why might we want to fit a regression line to these data? (d) Do these data meet the conditions required for fitting a least squares line? 7.19 The Coast Starlight, Part II. Exercise 7.11 introduces data on the Coast Starlight Amtrak train that runs from Seattle to Los Angeles. The mean travel time from one stop to the next on the Coast Starlight is 129 mins, with a standard deviation of 113 minutes. The mean distance traveled from one stop to the next is 107 miles with a standard deviation of 99 miles. The correlation between travel time and distance is 0.636. (a) Write the equation of the regression line for predicting travel time. (b) Interpret the slope and the intercept in this context. (c) Calculate $R^2$ of the regression line for predicting travel time from distance traveled for the Coast Starlight, and interpret $R^2$ in the context of the application. (d) The distance between Santa Barbara and Los Angeles is 103 miles. Use the model to estimate the time it takes for the Starlight to travel between these two cities. (e) It actually takes the Coast Starlight about 168 mins to travel from Santa Barbara to Los Angeles. Calculate the residual and explain the meaning of this residual value. (f) Suppose Amtrak is considering adding a stop to the Coast Starlight 500 miles away from Los Angeles. Would it be appropriate to use this linear model to predict the travel time from Los Angeles to this point? 21Source: Starbucks.com, collected on March 10, 2011, www.starbucks.com/menu/nutrition. 7.20 Body measurements, Part III. Exercise 7.13 introduces data on shoulder girth and height of a group of individuals. The mean shoulder girth is 108.20 cm with a standard deviation of 10.37 cm. The mean height is 171.14 cm with a standard deviation of 9.41 cm. The correlation between height and shoulder girth is 0.67. (a) Write the equation of the regression line for predicting height. (b) Interpret the slope and the intercept in this context. (c) Calculate $R^2$ of the regression line for predicting height from shoulder girth, and interpret it in the context of the application. (d) A randomly selected student from your class has a shoulder girth of 100 cm. Predict the height of this student using the model. (e) The student from part (d) is 160 cm tall.
Calculate the residual, and explain what this residual means. (f) A one year old has a shoulder girth of 56 cm. Would it be appropriate to use this linear model to predict the height of this child? 7.21 Grades and TV. Data were collected on the number of hours per week students watch TV and the grade they earned in a biology class on a 100 point scale. Based on the scatterplot and the residual plot provided, describe the relationship between the two variables, and determine if a simple linear model is appropriate to predict a student's grade from the number of hours per week the student watches TV. 7.22 Nutrition at Starbucks, Part II. Exercise 7.18 introduced a data set on nutrition information on Starbucks food menu items. Based on the scatterplot and the residual plot provided, describe the relationship between the protein content and calories of these menu items, and determine if a simple linear model is appropriate to predict amount of protein from the number of calories. 7.23 Helmets and lunches. The scatterplot shows the relationship between socioeconomic status measured as the percentage of children in a neighborhood receiving reduced-fee lunches at school (lunch) and the percentage of bike riders in the neighborhood wearing helmets (helmet). The average percentage of children receiving reduced-fee lunches is 30.8% with a standard deviation of 26.7% and the average percentage of bike riders wearing helmets is 38.8% with a standard deviation of 16.9%. (a) If the $R^2$ for the least-squares regression line for these data is 72%, what is the correlation between lunch and helmet? (b) Calculate the slope and intercept for the least-squares regression line for these data. (c) Interpret the intercept of the least-squares regression line in the context of the application. (d) Interpret the slope of the least-squares regression line in the context of the application. (e) What would the value of the residual be for a neighborhood where 40% of the children receive reduced-fee lunches and 40% of the bike riders wear helmets? Interpret the meaning of this residual in the context of the application. Types of outliers in linear regression 7.24 Outliers, Part I. Identify the outliers in the scatterplots shown below, and determine what type of outliers they are. Explain your reasoning. 7.25 Outliers, Part II. Identify the outliers in the scatterplots shown below and determine what type of outliers they are. Explain your reasoning. 7.26 Crawling babies, Part II. Exercise 7.12 introduces data on the average monthly temperature during the month babies first try to crawl (about 6 months after birth) and the average first crawling age for babies born in a given month. A scatterplot of these two variables reveals a potential outlying month when the average temperature is about 53°F and average crawling age is about 28.5 weeks. Does this point have high leverage? Is it an influential point? 7.27 Urban homeowners, Part I. The scatterplot below shows the percent of families who own their home vs. the percent of the population living in urban areas in 2010.22 There are 52 observations, each corresponding to a state in the US, Puerto Rico, or the District of Columbia. (a) Describe the relationship between the percent of families who own their home and the percent of the population living in urban areas in 2010. (b) The outlier at the bottom right corner is District of Columbia, where 100% of the population is considered urban. What type of outlier is this observation?
Inference for linear regression In the following exercises, visually check the conditions for fitting a least squares regression line, but you do not need to report these conditions in your solutions. 7.28 Beer and blood alcohol content. Many people believe that gender, weight, drinking habits, and many other factors are much more important in predicting blood alcohol content (BAC) than simply considering the number of drinks a person consumed. Here we examine data from sixteen student volunteers at Ohio State University who each drank a randomly assigned number of cans of beer. These students were evenly divided between men and women, and they differed in weight and drinking habits. Thirty minutes later, a police officer measured their blood alcohol content (BAC) in grams of alcohol per deciliter of blood.23 The scatterplot and regression table summarize the findings.

              Estimate   Std. Error   t value   Pr(>|t|)
(Intercept)   -0.0127    0.0126       -1.00     0.3320
beers         0.0180     0.0024       7.48      0.0000

(a) Describe the relationship between the number of cans of beer and BAC. (b) Write the equation of the regression line. Interpret the slope and intercept in context. (c) Do the data provide strong evidence that drinking more cans of beer is associated with an increase in blood alcohol? State the null and alternative hypotheses, report the p-value, and state your conclusion. (d) The correlation coefficient for number of cans of beer and BAC is 0.89. Calculate $R^2$ and interpret it in context. (e) Suppose we visit a bar, ask people how many drinks they have had, and also take their BAC. Do you think the relationship between number of drinks and BAC would be as strong as the relationship found in the Ohio State study? 22United States Census Bureau, 2010 Census Urban and Rural Classification and Urban Area Criteria and Housing Characteristics: 2010. 23J. Malkevitch and L.M. Lesser. For All Practical Purposes: Mathematical Literacy in Today's World. WH Freeman & Co, 2008. 7.29 Body measurements, Part IV. The scatterplot and least squares summary below show the relationship between weight measured in kilograms and height measured in centimeters of 507 physically active individuals.

              Estimate    Std. Error   t value   Pr(>|t|)
(Intercept)   -105.0113   7.5394       -13.93    0.0000
height        1.0176      0.0440       23.13     0.0000

(a) Describe the relationship between height and weight. (b) Write the equation of the regression line. Interpret the slope and intercept in context. (c) Do the data provide strong evidence that an increase in height is associated with an increase in weight? State the null and alternative hypotheses, report the p-value, and state your conclusion. (d) The correlation coefficient for height and weight is 0.72. Calculate $R^2$ and interpret it in context. 7.30 Husbands and wives, Part II. Exercise 7.6 presents a scatterplot displaying the relationship between husbands' and wives' ages in a random sample of 170 married couples in Britain, where both partners' ages are below 65 years. Given below is summary output of the least squares fit for predicting wife's age from husband's age.

              Estimate   Std. Error   t value   Pr(>|t|)
(Intercept)   1.5740     1.1501       1.37      0.1730
age_husband   0.9112     0.0259       35.25     0.0000

(a) We might wonder, is the age difference between husbands and wives constant over time? If this were the case, then the slope parameter would be $\beta_1 = 1$. Use the information above to evaluate if there is strong evidence that the difference in husband and wife ages actually has changed.
(b) Write the equation of the regression line for predicting wife's age from husband's age. (c) Interpret the slope and intercept in context. (d) Given that $R^2 = 0.88$, what is the correlation of ages in this data set? (e) You meet a married man from Britain who is 55 years old. What would you predict his wife's age to be? How reliable is this prediction? (f) You meet another married man from Britain who is 85 years old. Would it be wise to use the same linear model to predict his wife's age? Explain. 7.31 Husbands and wives, Part III. The scatterplot below summarizes husbands' and wives' heights in a random sample of 170 married couples in Britain, where both partners' ages are below 65 years. Summary output of the least squares fit for predicting wife's height from husband's height is also provided in the table.

                 Estimate   Std. Error   t value   Pr(>|t|)
(Intercept)      43.5755    4.6842       9.30      0.0000
height_husband   0.2863     0.0686       4.17      0.0000

(a) Is there strong evidence that taller men marry taller women? State the hypotheses and include any information used to conduct the test. (b) Write the equation of the regression line for predicting wife's height from husband's height. (c) Interpret the slope and intercept in the context of the application. (d) Given that $R^2 = 0.09$, what is the correlation of heights in this data set? (e) You meet a married man from Britain who is 5'9" (69 inches). What would you predict his wife's height to be? How reliable is this prediction? (f) You meet another married man from Britain who is 6'7" (79 inches). Would it be wise to use the same linear model to predict his wife's height? Why or why not? 7.32 Urban homeowners, Part II. Exercise 7.27 gives a scatterplot displaying the relationship between the percent of families that own their home and the percent of the population living in urban areas. Below is a similar scatterplot, excluding District of Columbia, as well as the residuals plot. There were 51 cases. (a) For these data, $R^2 = 0.28$. What is the correlation? How can you tell if it is positive or negative? (b) Examine the residual plot. What do you observe? Is a simple least squares fit appropriate for these data? 7.33 Babies. Is the gestational age (time between conception and birth) of a low birth-weight baby useful in predicting head circumference at birth? Twenty-five low birth-weight babies were studied at a Harvard teaching hospital; the investigators calculated the regression of head circumference (measured in centimeters) against gestational age (measured in weeks). The estimated regression line is $\widehat{\text{head circumference}} = 3.91 + 0.78 \times \text{gestational age}$ (a) What is the predicted head circumference for a baby whose gestational age is 28 weeks? (b) The standard error for the coefficient of gestational age is 0.35, which is associated with df = 23. Does the model provide strong evidence that gestational age is significantly associated with head circumference? 7.34 Rate my professor. Some college students critique professors' teaching at RateMyProfessors.com, a web page where students anonymously rate their professors on quality, easiness, and attractiveness. Using the self-selected data from this public forum, researchers examine the relations between quality, easiness, and attractiveness for professors at various universities.
In this exercise we will work with a portion of these data that the researchers made publicly available.24 The scatterplot on the right shows the relationship between teaching evaluation score (higher score means better) and standardized beauty score (a score of 0 means average, negative score means below average, and a positive score means above average) for a sample of 463 professors. Given below are associated diagnostic plots. Also given is a regression output for predicting teaching evaluation score from beauty score. 24J. Felton et al. "Web-based student evaluations of professors: the relations between perceived quality, easiness and sexiness". In: Assessment & Evaluation in Higher Education 29.1 (2004), pp. 91-108.

              Estimate      Std. Error   t value   Pr(>|t|)
(Intercept)   4.010         0.0255       157.21    0.0000
beauty        -----------   0.0322       4.13      0.0000

(a) Given that the average standardized beauty score is -0.0883 and average teaching evaluation score is 3.9983, calculate the slope. Alternatively, the slope may be computed using just the information provided in the model summary table. (b) Do these data provide convincing evidence that the slope of the relationship between teaching evaluation and beauty is positive? Explain your reasoning. (c) List the conditions required for linear regression and check if each one is satisfied for this model. Contributors David M Diez (Google/YouTube), Christopher D Barr (Harvard School of Public Health), Mine Çetinkaya-Rundel (Duke University)
textbooks/stats/Introductory_Statistics/OpenIntro_Statistics_(Diez_et_al)./07%3A_Introduction_to_Linear_Regression/7.06%3A_Exercises.txt
The principles of simple linear regression lay the foundation for more sophisticated regression methods used in a wide range of challenging settings. In Chapter 8, we explore multiple regression, which introduces the possibility of more than one predictor, and logistic regression, a technique for predicting categorical outcomes with two possible categories. Thumbnail: The logistic sigmoid function. (Public Domain; Qef). 08: Multiple and Logistic Regression Multiple regression extends simple two-variable regression to the case that still has one response but many predictors (denoted $x_1, x_2, x_3, \dots$). The method is motivated by scenarios where many variables may be simultaneously connected to an output. We will consider Ebay auctions of a video game called Mario Kart for the Nintendo Wii. The outcome variable of interest is the total price of an auction, which is the highest bid plus the shipping cost. We will try to determine how total price is related to each characteristic in an auction while simultaneously controlling for other variables. For instance, all other characteristics held constant, are longer auctions associated with higher or lower prices? And, on average, how much more do buyers tend to pay for additional Wii wheels(plastic steering wheels that attach to the Wii controller) in auctions? Multiple regression will help us answer these and other questions. The data set mario kart includes results from 141 auctions.1 Four observations from this data set are shown in Table $1$, and descriptions for each variable are shown in Table $2$. Notice that the condition and stock photo variables are indicator variables. For instance, the cond new variable takes value 1 if the game up for auction is new and 0 if it is used. Using indicator variables in place of category names allows for these variables to be directly used in regression. See Section 7.2.7 for additional details. Multiple regression also allows for categorical variables with many levels, though we do not have any such variables in this analysis, and we save these details for a second or third course. 1Diez DM, Barr CD, and Cetinkaya-Rundel M. 2012. openintro: OpenIntro data sets and supplemental functions. cran.r-project.org/web/packages/openintro. Table $1$: Four observations from the mario kart data set. price cond new stock photo duration wheels 1 51.55 1 1 3 1 2 37.04 0 1 3 1 $\vdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ 140 38.76 0 0 7 0 141 54.51 1 1 1 2 Table $2$: Variables and their descriptions for the mario kart data set. variable description price final auction price plus shipping costs, in US dollars cond_new a coded two-level categorical variable, which takes value 1 when the game is new and 0 if the game is used stock_photo a coded two-level categorical variable, which takes value 1 if the primary photo used in the auction was a stock photo and 0 if the photo was unique to that auction duration the length of the auction, in days, taking values from 1 to 10 wheels the number of Wii wheels included with the auction (a Wii wheel is a plastic racing wheel that holds the Wii controller and is an optional but helpful accessory for playing Mario Kart) A Single-Variable Model for the Mario Kart Data Let's fit a linear regression model with the game's condition as a predictor of auction price. The model may be written as $\hat {price} = 42.87 + 10.90 \times \text {cond_ new}$ Results of this model are shown in Table $3$ and a scatterplot for price versus game condition is shown in Figure $4$. 
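The indicator coding described above is easy to construct when the raw data record categories as text. A small sketch (Python with pandas; the raw layout below is hypothetical, since the actual mario kart data already store cond_new and stock_photo as 0/1) shows how such 0/1 variables might be built so they can be used directly as regression predictors.

```python
import pandas as pd

# Hypothetical raw records; the real data set already includes 0/1 indicator columns.
raw = pd.DataFrame({
    "condition": ["new", "used", "used", "new"],
    "photo": ["stock", "unique", "stock", "stock"],
})

# Indicator (0/1) versions usable directly in a regression model.
raw["cond_new"] = (raw["condition"] == "new").astype(int)
raw["stock_photo"] = (raw["photo"] == "stock").astype(int)
print(raw)
```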
Table $3$: Summary of a linear model for predicting auction price based on game condition.

              Estimate   Std. Error   t value   Pr(>|t|)
(Intercept)   42.8711    0.8140       52.67     0.0000
cond_new      10.8996    1.2583       8.66      0.0000

Exercise $1$ Examine Figure $4$. Does the linear model seem reasonable? Answer Yes. Constant variability, nearly normal residuals, and linearity all appear reasonable. Exercise $2$ Interpret the coefficient for the game's condition in the model. Is this coefficient significantly different from 0? Answer Note that cond new is a two-level categorical variable that takes value 1 when the game is new and value 0 when the game is used. So 10.90 means that the model predicts an extra \$10.90 for those games that are new versus those that are used. (See Section 7.2.7 for a review of the interpretation for two-level categorical predictor variables.) Examining the regression output in Table $3$, we can see that the p-value for cond new is very close to zero, indicating there is strong evidence that the coefficient is different from zero when using this simple one-variable model. Including and Assessing Many Variables in a Model Sometimes there are underlying structures or relationships between predictor variables. For instance, new games sold on Ebay tend to come with more Wii wheels, which may have led to higher prices for those auctions. We would like to fit a model that includes all potentially important variables simultaneously. This would help us evaluate the relationship between a predictor variable and the outcome while controlling for the potential influence of other variables. This is the strategy used in multiple regression. While we remain cautious about making any causal interpretations using multiple regression, such models are a common first step in providing evidence of a causal connection. We want to construct a model that accounts for not only the game condition, as in Section 8.1.1, but simultaneously accounts for three other variables: stock photo, duration, and wheels. $\hat {price} = \beta _0 + \beta _1 \times \text {cond_new} + \beta _2 \times \text {stock_photo} + \beta _3 \times \text {duration} + \beta _4 \times \text {wheels}$ $\hat {y} = \beta _0 + \beta _1x_1 + \beta _2x_2 + \beta _3x_3 + \beta _4x_4 \label {8.3}$ In this equation, $y$ represents the total price, $x_1$ indicates whether the game is new, $x_2$ indicates whether a stock photo was used, $x_3$ is the duration of the auction, and $x_4$ is the number of Wii wheels included with the game. Just as with the single predictor case, a multiple regression model may be missing important components or it might not precisely represent the relationship between the outcome and the available explanatory variables. While no model is perfect, we wish to explore the possibility that this one may fit the data reasonably well. We estimate the parameters $\beta _0, \beta _1, \dots, \beta _4$ in the same way as we did in the case of a single predictor. We select $b_0, b_1,\dots, b_4$ that minimize the sum of the squared residuals: $SSE = e^2_1 + e^2_2 + \dots + e^2_{141} = \sum \limits ^{141}_{i=1} e^2_i = \sum \limits ^{141}_{i=1} {(y_i - \hat {y}_i)}^2 \label {8.4}$ Here there are 141 residuals, one for each observation. We typically use a computer to minimize the sum in Equation (8.4) and compute point estimates, as shown in the sample output in Table $5$. Using this output, we identify the point estimates $b_i$ of each $\beta _i$, just as we did in the one-predictor case.
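The minimization in Equation (8.4) is exactly what regression software does internally. A minimal sketch (Python with numpy, on small synthetic data rather than the auction data set) builds a design matrix with a leading column of ones and solves for the coefficient vector that minimizes the sum of squared residuals.

```python
import numpy as np

def fit_least_squares(X, y):
    """Return the coefficient vector b that minimizes SSE = sum((y - X b)^2).
    X should already contain a leading column of ones for the intercept."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b

# Tiny synthetic illustration (not the mario kart data): two predictors plus noise.
rng = np.random.default_rng(0)
x1 = rng.integers(0, 2, size=60)          # an indicator, like cond_new
x2 = rng.integers(0, 3, size=60)          # a count, like wheels
y = 36 + 5 * x1 + 7 * x2 + rng.normal(0, 2, size=60)

X = np.column_stack([np.ones(60), x1, x2])
print(fit_least_squares(X, y).round(2))   # estimates should land near 36, 5, 7
```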
Table $5$: Output for the regression model where price is the outcome and cond_new, stock_photo, duration, and wheels are the predictors.

              Estimate   Std. Error   t value   Pr(>|t|)
(Intercept)   36.2110    1.5140       23.92     0.0000
cond_new      5.1306     1.0511       4.88      0.0000
stock_photo   1.0803     1.0568       1.02      0.3085
duration      -0.0268    0.1904       -0.14     0.8882
wheels        7.2852     0.5547       13.13     0.0000

Multiple regression model A multiple regression model is a linear model with many predictors. In general, we write the model as $\hat {y} = \beta _0 + \beta _1x_1 + \beta _2x_2 + \dots + \beta _kx_k$ when there are k predictors. We often estimate the $\beta _i$ parameters using a computer. Exercise $3$ Write out the model in Equation (8.3) using the point estimates from Table $5$. How many predictors are there in this model? Answer $\hat {y} = 36.21 + 5.13x_1 + 1.08x_2 - 0.03x_3 + 7.29x_4$, and there are k = 4 predictor variables. Exercise $4$ What does $\beta _4$, the coefficient of variable $x_4$ (Wii wheels), represent? What is the point estimate of $\beta _4$? Answer It is the average difference in auction price for each additional Wii wheel included when holding the other variables constant. The point estimate is $b_4 = 7.29$. Exercise $5$ Compute the residual of the first observation in Table $1$ using the equation identified in Exercise $3$. Answer $e_i = y_i - \hat {y}_i = 51.55 - 49.62 = 1.93$, where 49.62 was computed using the variable values from the observation and the equation identified in Exercise $3$. Example $1$ We estimated a coefficient for cond new in Section 8.1.1 of $b_1 = 10.90$ with a standard error of $SE_{b_1} = 1.26$ when using simple linear regression. Why might there be a difference between that estimate and the one in the multiple regression setting? Solution If we examined the data carefully, we would see that some predictors are correlated. For instance, when we estimated the connection of the outcome price and predictor cond new using simple linear regression, we were unable to control for other variables like the number of Wii wheels included in the auction. That model was biased by the confounding variable wheels. When we use both variables, this particular underlying and unintentional bias is reduced or eliminated (though bias from other confounding variables may still remain). Example $1$ describes a common issue in multiple regression: correlation among predictor variables. We say the two predictor variables are collinear (pronounced as co-linear) when they are correlated, and this collinearity complicates model estimation. While it is impossible to prevent collinearity from arising in observational data, experiments are usually designed to prevent predictors from being collinear. Exercise $6$ The estimated value of the intercept is 36.21, and one might be tempted to make some interpretation of this coefficient, such as, it is the model's predicted price when each of the variables take value zero: the game is used, the primary image is not a stock photo, the auction duration is zero days, and there are no wheels included. Is there any value gained by making this interpretation? Solution Three of the variables (cond new, stock photo, and wheels) do take value 0, but the auction duration is always one or more days. If the auction is not up for any days, then no one can bid on it! That means the total auction price would always be zero for such an auction; the interpretation of the intercept in this setting is not insightful.
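The residual computed in Exercise 5 can be reproduced directly from the point estimates in Table 5; using the unrounded coefficients gives essentially the same value as the rounded equation in Exercise 3.

```python
# Point estimates from Table 5
b0, b1, b2, b3, b4 = 36.2110, 5.1306, 1.0803, -0.0268, 7.2852

# First observation in Table 1: price 51.55, new game, stock photo, 3-day auction, 1 wheel
cond_new, stock_photo, duration, wheels, price = 1, 1, 3, 1, 51.55

y_hat = b0 + b1 * cond_new + b2 * stock_photo + b3 * duration + b4 * wheels
residual = price - y_hat
print(round(y_hat, 2), round(residual, 2))   # about 49.63 and 1.92, matching Exercise 5 up to rounding
```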
Adjusted $R^2$ as a better estimate of explained variance We first used $R^2$ in Section 7.2 to determine the amount of variability in the response that was explained by the model: $R^2 = 1 - \dfrac {\text {variability in residuals}}{\text {variability in the outcome}} = 1 - \dfrac {Var(e_i)}{Var(y_i)}$ where $e_i$ represents the residuals of the model and $y_i$ the outcomes. This equation remains valid in the multiple regression framework, but a small enhancement can often be even more informative. Exercise $7$ The variance of the residuals for the model given in Exercise $3$ is 23.34, and the variance of the total price in all the auctions is 83.06. Calculate $R^2$ for this model. Solution $R^2 = 1 - \dfrac {23.34}{83.06} = 0.719$. This strategy for estimating $R^2$ is acceptable when there is just a single variable. However, it becomes less helpful when there are many variables. The regular $R^2$ is actually a biased estimate of the amount of variability explained by the model. To get a better estimate, we use the adjusted $R^2$. Adjusted $R^2$ as a tool for model assessment The adjusted $R^2$ is computed as $R^2_{adj} = 1 - \dfrac {\dfrac {Var(e_i)}{(n - k - 1)}}{\dfrac {Var(y_i)}{(n - 1)}} = 1 - \dfrac {Var(e_i)}{Var(y_i)} \times \dfrac {n - 1}{n - k - 1}$ where n is the number of cases used to fit the model and $k$ is the number of predictor variables in the model. Because k is never negative, the adjusted $R^2$ will be smaller - often just a little smaller - than the unadjusted $R^2$. The reasoning behind the adjusted $R^2$ lies in the degrees of freedom associated with each variance. In multiple regression, the degrees of freedom associated with the variance of the estimate of the residuals is n - k - 1, not n - 1. For instance, if we were to make predictions for new data using our current model, we would find that the unadjusted $R^2$ is an overly optimistic estimate of the reduction in variance in the response, and using the degrees of freedom in the adjusted $R^2$ formula helps correct this bias. Exercise $8$ There were n = 141 auctions in the mario_kart data set and k = 4 predictor variables in the model. Use n, k, and the variances from Exercise $7$ to calculate $R^2_{adj}$ for the Mario Kart model. Solution $R^2_{adj} = 1 - \frac {23.34}{83.06} \times \frac {141- 1}{141- 4 - 1} = 0.711$. Exercise $9$ Suppose you added another predictor to the model, but the variance of the errors $Var(e_i)$ didn't go down. What would happen to the $R^2$? What would happen to the adjusted $R^2$? Solution The unadjusted $R^2$ would stay the same and the adjusted $R^2$ would go down.
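The two quantities above are just arithmetic once the variances are in hand; the short check below uses the numbers from Exercises 7 and 8.

```python
var_resid, var_y = 23.34, 83.06   # variance of residuals and of price (Exercise 7)
n, k = 141, 4                     # auctions and predictors in the model (Exercise 8)

r2 = 1 - var_resid / var_y
r2_adj = 1 - (var_resid / var_y) * (n - 1) / (n - k - 1)

print(round(r2, 3), round(r2_adj, 3))   # 0.719 and 0.711
```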
textbooks/stats/Introductory_Statistics/OpenIntro_Statistics_(Diez_et_al)./08%3A_Multiple_and_Logistic_Regression/8.01%3A_Introduction_to_Multiple_Regression.txt
The best model is not always the most complicated. Sometimes including variables that are not evidently important can actually reduce the accuracy of predictions. In this section we discuss model selection strategies, which will help us eliminate from the model variables that are less important. In this section, and in practice, the model that includes all available explanatory variables is often referred to as the full model. Our goal is to assess whether the full model is the best model. If it isn't, we want to identify a smaller model that is preferable. Identifying Variables in the Model that may not be Helpful Table 8.6 provides a summary of the regression output for the full model for the auction data. The last column of the table lists p-values that can be used to assess hypotheses of the following form: • H0: $\beta _i$ = 0 when the other explanatory variables are included in the model. • HA: $\beta _i \ne 0$ when the other explanatory variables are included in the model. Table 8.6: The fit for the full regression model, including the adjusted $R^2$.

              Estimate   Std. Error   t value   Pr(>|t|)
(Intercept)   36.2110    1.5140       23.92     0.0000
cond_new      5.1306     1.0511       4.88      0.0000
stock_photo   1.0803     1.0568       1.02      0.3085
duration      -0.0268    0.1904       -0.14     0.8882
wheels        7.2852     0.5547       13.13     0.0000

$R^2_{adj} = 0.711$ (as computed in Exercise 8 of the previous section).

Example $1$ The coefficient of cond new has a t test statistic of T = 4.88 and a p-value for its corresponding hypotheses ($H_0 : \beta _1 = 0, H_A : \beta _1 \ne 0$) of about zero. How can this be interpreted? Solution If we keep all the other variables in the model and add no others, then there is strong evidence that a game's condition (new or used) has a real relationship with the total auction price. Example $2$ Is there strong evidence that using a stock photo is related to the total auction price? Solution The t test statistic for stock photo is T = 1.02 and the p-value is about 0.31. After accounting for the other predictors, there is not strong evidence that using a stock photo in an auction is related to the total price of the auction. We might consider removing the stock photo variable from the model. Exercise $1$ Identify the p-values for both the duration and wheels variables in the model. Is there strong evidence supporting the connection of these variables with the total price in the model? Answer The p-value for the auction duration is 0.8882, which indicates that there is not statistically significant evidence that the duration is related to the total auction price when accounting for the other variables. The p-value for the Wii wheels variable is about zero, indicating that this variable is associated with the total auction price. There is not statistically significant evidence that either the stock photo or duration variables contribute meaningfully to the model. Next we consider common strategies for pruning such variables from a model. TIP: Using adjusted $R^2$ instead of p-values for model selection The adjusted $R^2$ may be used as an alternative to p-values for model selection, where a higher adjusted $R^2$ represents a better model fit. For instance, we could compare two models using their adjusted $R^2$, and the model with the higher adjusted $R^2$ would be preferred. This approach tends to include more variables in the final model when compared to the p-value approach. Two model selection strategies Two common strategies for adding or removing variables in a multiple regression model are called backward-selection and forward-selection.
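The p-values in Table 8.6 come from a t distribution with n - k - 1 = 141 - 4 - 1 = 136 degrees of freedom. A quick check of the stock_photo row (Python with scipy) reproduces the test statistic and the two-sided p-value.

```python
from scipy import stats

estimate, se = 1.0803, 1.0568          # stock_photo row of Table 8.6
df = 141 - 4 - 1                       # n - k - 1 = 136

t_stat = estimate / se                 # null value is 0
p_two_sided = 2 * stats.t.sf(abs(t_stat), df)
print(round(t_stat, 2), round(p_two_sided, 3))   # about 1.02 and 0.31
```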
These techniques are often referred to as stepwise model selection strategies, because they add or delete one variable at a time as they "step" through the candidate predictors. We will discuss these strategies in the context of the p-value approach. Alternatively, we could have employed an $R^2_{adj}$ approach. The backward-elimination strategy starts with the model that includes all potential predictor variables. Variables are eliminated one-at-a-time from the model until only variables with statistically significant p-values remain. The strategy within each elimination step is to drop the variable with the largest p-value, refit the model, and reassess the inclusion of all variables. Example $3$ Results corresponding to the full model for the mario kart data are shown in Table 8.6. How should we proceed under the backward-elimination strategy? Solution There are two variables with coefficients that are not statistically different from zero: stock_photo and duration. We first drop the duration variable since it has a larger corresponding p-value, then we refit the model. A regression summary for the new model is shown in Table 8.7. In the new model, there is not strong evidence that the coefficient for stock photo is different from zero, even though the p-value decreased slightly, and the other p-values remain very small. Next, we again eliminate the variable with the largest non-significant p-value, stock photo, and refit the model. The updated regression summary is shown in Table 8.8. In the latest model, we see that the two remaining predictors have statistically significant coefficients with p-values of about zero. Since there are no variables remaining that could be eliminated from the model, we stop. The final model includes only the cond_new and wheels variables in predicting the total auction price: \begin{align} \hat {y} &= b_0 + b_1x_1 + b_4x_4 \\ &= 36.78 + 5.58x_1 + 7.23x_4 \end{align} where $x_1$ represents cond new and $x_4$ represents wheels. An alternative to using p-values in model selection is to use the adjusted $R^2$. At each elimination step, we refit the model without each of the variables up for potential elimination. For example, in the first step, we would fit four models, where each would be missing a different predictor. If one of these smaller models has a higher adjusted $R^2$ than our current model, we pick the smaller model with the largest adjusted $R^2$. We continue in this way until removing variables does not increase $R^2_{adj}$. Had we used the adjusted $R^2$ criteria, we would have kept the stock photo variable along with the cond new and wheels variables. Notice that the p-value for stock photo changed a little from the full model (0.309) to the model that did not include the duration variable (0.275). It is common for p-values of one variable to change, due to collinearity, after eliminating a different variable. This fluctuation emphasizes the importance of refitting a model after each variable elimination step. The p-values tend to change dramatically when the eliminated variable is highly correlated with another variable in the model. The forward-selection strategy is the reverse of the backward-elimination technique. Instead of eliminating variables one-at-a-time, we add variables one-at-a-time until we cannot find any variables that present strong evidence of their importance in the model. Table 8.7: The output for the regression model where price is the outcome and the duration variable has been eliminated from the model.
              Estimate   Std. Error   t value   Pr(>|t|)
(Intercept)   36.0483    0.9745       36.99     0.0000
cond_new      5.1763     0.9961       5.20      0.0000
stock_photo   1.1177     1.0192       1.10      0.2747
wheels        7.2984     0.5448       13.40     0.0000

Table 8.8: The output for the regression model where price is the outcome and the duration and stock photo variables have been eliminated from the model.

              Estimate   Std. Error   t value   Pr(>|t|)
(Intercept)   36.7849    0.7066       52.06     0.0000
cond_new      5.5848     0.9245       6.04      0.0000
wheels        7.2328     0.5419       13.35     0.0000

Example $4$: forward selection strategy Construct a model for the mario kart data set using the forward selection strategy. Solution We start with the model that includes no variables. Then we fit each of the possible models with just one variable. That is, we fit the model including just the cond new predictor, then the model including just the stock photo variable, then a model with just duration, and a model with just wheels. Each of the four models (yes, we fit four models!) provides a p-value for the coefficient of the predictor variable. Out of these four variables, the wheels variable had the smallest p-value. Since its p-value is less than 0.05 (the p-value was smaller than 2e-16), we add the Wii wheels variable to the model. Once a variable is added in forward-selection, it will be included in all models considered as well as the final model. Since we successfully found a first variable to add, we consider adding another. We fit three new models: (1) the model including just the cond_new and wheels variables (output in Table 8.8), (2) the model including just the stock photo and wheels variables, and (3) the model including only the duration and wheels variables. Of these models, the first had the lowest p-value for its new variable (the p-value corresponding to cond new was 1.4e-08). Because this p-value is below 0.05, we add the cond_new variable to the model. Now the final model is guaranteed to include both the condition and wheels variables. We must then repeat the process a third time, fitting two new models: (1) the model including the stock photo, cond_new, and wheels variables (output in Table 8.7) and (2) the model including the duration, cond new, and wheels variables. The p-value corresponding to stock photo in the first model (0.275) was smaller than the p-value corresponding to duration in the second model (0.682). However, since this smaller p-value was not below 0.05, there was not strong evidence that it should be included in the model. Therefore, neither variable is added and we are finished. The final model is the same as that arrived at using the backward-selection strategy. Example $5$: forward selection with adjusted $R^2$ As before, we could have used the $R^2_{adj}$ criteria instead of examining p-values in selecting variables for the model. Rather than look for variables with the smallest p-value, we look for the model with the largest $R^2_{adj}$. What would the result of forward-selection be using the adjusted $R^2$ approach? Solution Using the forward-selection strategy, we start with the model with no predictors. Next we look at each model with a single predictor. If one of these models has a larger $R^2_{adj}$ than the model with no variables, we use this new model. We repeat this procedure, adding one variable at a time, until we cannot find a model with a larger $R^2_{adj}$.
If we had done the forward-selection strategy using $R^2_{adj}$, we would have arrived at the model including cond new, stock photo, and wheels, which is a slightly larger model than we arrived at using the p-value approach and the same model we arrived at using the adjusted $R^2$ and backwards-elimination. Model selection strategies The backward-elimination strategy begins with the largest model and eliminates variables one-by-one until we are satisfied that all remaining variables are important to the model. The forward-selection strategy starts with no variables included in the model, then it adds in variables according to their importance until no other important variables are found. There is no guarantee that the backward-elimination and forward-selection strategies will arrive at the same final model using the p-value or adjusted $R^2$ methods. If the backwards-elimination and forward-selection strategies are both tried and they arrive at different models, choose the model with the larger $R^2_{adj}$ as a tie-breaker; other tie-break options exist but are beyond the scope of this book. It is generally acceptable to use just one strategy, usually backward-elimination with either the p-value or adjusted $R^2$ criteria. However, before reporting the model results, we must verify the model conditions are reasonable.
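A backward-elimination loop of the kind described above can be sketched with statsmodels. The sketch assumes the auction data sit in a pandas DataFrame named mk (a hypothetical name); at each step the predictor with the largest p-value above the cutoff is dropped and the model refit, exactly as in Example 3.

```python
import statsmodels.api as sm

def backward_eliminate(df, response, predictors, cutoff=0.05):
    """Drop the predictor with the largest p-value until all remaining
    p-values are below the cutoff, refitting the model after each drop."""
    predictors = list(predictors)
    while predictors:
        X = sm.add_constant(df[predictors])
        fit = sm.OLS(df[response], X).fit()
        pvals = fit.pvalues.drop("const")       # ignore the intercept
        worst = pvals.idxmax()
        if pvals[worst] < cutoff:
            return fit                          # every remaining predictor is significant
        predictors.remove(worst)
    return None

# Hypothetical usage with the mario kart auctions:
# final = backward_eliminate(mk, "price",
#                            ["cond_new", "stock_photo", "duration", "wheels"])
# print(final.summary())   # expected to keep only cond_new and wheels
```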
textbooks/stats/Introductory_Statistics/OpenIntro_Statistics_(Diez_et_al)./08%3A_Multiple_and_Logistic_Regression/8.02%3A_Model_Selection.txt
Multiple regression methods using the model $\hat {y} = \beta _0 + \beta _1x_1 + \beta _2x_2 + \dots + \beta _kx_k$ generally depend on the following four assumptions:

1. the residuals of the model are nearly normal,
2. the variability of the residuals is nearly constant,
3. the residuals are independent, and
4. each variable is linearly related to the outcome.

Simple and effective plots can be used to check each of these assumptions. We will consider the model for the auction data that uses the game condition and number of wheels as predictors.

Normal probability plot. A normal probability plot of the residuals is shown in Figure $1$. While the plot exhibits some minor irregularities, there are no outliers that might be cause for concern. In a normal probability plot for residuals, we tend to be most worried about residuals that appear to be outliers, since these indicate long tails in the distribution of residuals.

Absolute values of residuals against fitted values. A plot of the absolute value of the residuals against their corresponding fitted values ($\hat {y}_i$) is shown in Figure $2$. This plot is helpful to check the condition that the variance of the residuals is approximately constant. We do not see any obvious deviations from constant variance in this example.

Residuals in order of their data collection. A plot of the residuals in the order their corresponding auctions were observed is shown in Figure $3$. Such a plot is helpful in identifying any connection between cases that are close to one another, e.g. we could look for declining prices over time or if there was a time of the day when auctions tended to fetch a higher price. Here we see no structure that indicates a problem.12

Residuals against each predictor variable. We consider a plot of the residuals against the cond_new variable and the residuals against the wheels variable. These plots are shown in Figure $4$. For the two-level condition variable, we are guaranteed not to see any remaining trend, and instead we are checking that the variability does not fluctuate across groups. In this example, when we consider the residuals against the wheels variable, we see some possible structure. There appears to be curvature in the residuals, indicating the relationship is probably not linear.

12An especially rigorous check would use time series methods. For instance, we could check whether consecutive residuals are correlated. Doing so with these residuals yields no statistically significant correlations.

It is necessary to summarize diagnostics for any model fit. If the diagnostics support the model assumptions, this would improve credibility in the findings. If the diagnostic assessment shows remaining underlying structure in the residuals, we should try to adjust the model to account for that structure. If we are unable to do so, we may still report the model but must also note its shortcomings. In the case of the auction data, we report that there may be a nonlinear relationship between the total price and the number of wheels included for an auction. This information would be important to buyers and sellers; omitting this information could be a setback to the very people who the model might assist.

"All models are wrong, but some are useful" -George E.P. Box

The truth is that no model is perfect. However, even imperfect models can be useful. Reporting a flawed model can be reasonable so long as we are clear and report the model's shortcomings.
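The four graphical checks just described can be reproduced with standard plotting tools. The sketch below is illustrative only: it assumes a pandas DataFrame `mariokart` holding the auction data (a hypothetical name) and fits the condition-plus-wheels model before drawing the plots.

```python
# Illustrative residual diagnostics for the two-predictor auction model,
# assuming a DataFrame `mariokart` with columns price, cond_new, and wheels.
import matplotlib.pyplot as plt
import scipy.stats as stats
import statsmodels.formula.api as smf

fit = smf.ols("price ~ cond_new + wheels", data=mariokart).fit()
resid, fitted = fit.resid, fit.fittedvalues

fig, ax = plt.subplots(2, 2, figsize=(10, 8))
stats.probplot(resid, dist="norm", plot=ax[0, 0])       # normal probability plot
ax[0, 1].scatter(fitted, resid.abs())                    # |residuals| vs fitted values
ax[0, 1].set(xlabel="Fitted values", ylabel="|Residual|")
ax[1, 0].plot(resid.values, "o")                         # residuals in collection order
ax[1, 0].set(xlabel="Order of collection", ylabel="Residual")
ax[1, 1].scatter(mariokart["wheels"], resid)             # residuals vs a predictor
ax[1, 1].set(xlabel="wheels", ylabel="Residual")
plt.tight_layout()
plt.show()
```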
Caution: do not report results when assumptions are grossly violated While there is a little leeway in model assumptions, do not go too far. If model assumptions are very clearly violated, consider a new model, even if it means learning more statistical methods or hiring someone who can help. TIP: Confidence intervals in multiple regression Confidence intervals for coefficients in multiple regression can be computed using the same formula as in the single predictor model: $b_i \pm t^*_{df} SE_{b_i}$ where $t^*_{df}$ is the appropriate t value corresponding to the confidence level and model degrees of freedom, $df = n - k - 1$.
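As a concrete, hedged illustration of this formula, the computation below uses the cond_new estimate and standard error from the two-predictor auction model above; the sample size of 141 auctions is an assumption about the data set rather than something stated in this section.

```python
# 95% CI for the cond_new coefficient: b_i +/- t*_{df} * SE_{b_i}.
# Estimate and SE come from the table above; n = 141 is an assumed sample size.
from scipy import stats

b, se = 5.5848, 0.9245
n, k = 141, 2                              # k predictors -> df = n - k - 1
t_star = stats.t.ppf(0.975, df=n - k - 1)  # critical value for 95% confidence
print(b - t_star * se, b + t_star * se)    # roughly (3.76, 7.41)
```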
textbooks/stats/Introductory_Statistics/OpenIntro_Statistics_(Diez_et_al)./08%3A_Multiple_and_Logistic_Regression/8.03%3A_Checking_Model_Assumptions_using_Graphs.txt
In this section we introduce logistic regression as a tool for building models when there is a categorical response variable with two levels. Logistic regression is a type of generalized linear model (GLM) for response variables where regular multiple regression does not work very well. In particular, the response variable in these settings often takes a form where residuals look completely different from the normal distribution.

GLMs can be thought of as a two-stage modeling approach. We first model the response variable using a probability distribution, such as the binomial or Poisson distribution. Second, we model the parameter of the distribution using a collection of predictors and a special form of multiple regression.

In Section 8.4 we will revisit the email data set from Chapter 1. These emails were collected from a single email account, and we will work on developing a basic spam filter using these data. The response variable, spam, has been encoded to take value 0 when a message is not spam and 1 when it is spam. Our task will be to build an appropriate model that classifies messages as spam or not spam using email characteristics coded as predictor variables. While this model will not be the same as those used in large-scale spam filters, it shares many of the same features.

Table $1$: Descriptions for 11 variables in the email data set. Notice that all of the variables are indicator variables, which take the value 1 if the specified characteristic is present and 0 otherwise.

variable       description
spam           Specifies whether the message was spam.
to_multiple    An indicator variable for if more than one person was listed in the To field of the email.
cc             An indicator for if someone was CCed on the email.
attach         An indicator for if there was an attachment, such as a document or image.
dollar         An indicator for if the word "dollar" or dollar symbol (\$) appeared in the email.
winner         An indicator for if the word "winner" appeared in the email message.
inherit        An indicator for if the word "inherit" (or a variation, like "inheritance") appeared in the email.
password       An indicator for if the word "password" was present in the email.
format         Indicates if the email contained special formatting, such as bolding, tables, or links.
re_subj        Indicates whether "Re:" was included at the start of the email subject.
exclaim_subj   Indicates whether any exclamation point was included in the email subject.

Email data

The email data set was first presented in Chapter 1 with a relatively small number of variables. In fact, there are many more variables available that might be useful for classifying spam. Descriptions of these variables are presented in Table $1$. The spam variable will be the outcome, and the other 10 variables will be the model predictors. While we have limited the predictors used in this section to be categorical variables (where many are represented as indicator variables), numerical predictors may also be used in logistic regression. See the footnote for an additional discussion on this topic.13

Modeling the probability of an event

TIP: Notation for a logistic regression model

The outcome variable for a GLM is denoted by $Y_i$, where the index i is used to represent observation i. In the email application, $Y_i$ will be used to represent whether email i is spam ($Y_i = 1$) or not ($Y_i = 0$). The predictor variables are represented as follows: $x_{1;i}$ is the value of variable 1 for observation i, $x_{2;i}$ is the value of variable 2 for observation i, and so on.
Logistic regression is a generalized linear model where the outcome is a two-level categorical variable. The outcome, $Y_i$, takes the value 1 (in our application, this represents a spam message) with probability $p_i$ and the value 0 with probability $1 - p_i$. It is the probability $p_i$ that we model in relation to the predictor variables.

13Recall from Chapter 7 that if outliers are present in predictor variables, the corresponding observations may be especially influential on the resulting model. This is the motivation for omitting the numerical variables, such as the number of characters and line breaks in emails, that we saw in Chapter 1. These variables exhibited extreme skew. We could resolve this issue by transforming these variables (e.g. using a log-transformation), but we will omit this further investigation for brevity.

The logistic regression model relates the probability an email is spam ($p_i$) to the predictors $x_{1;i}, x_{2;i},\dots, x_{k;i}$ through a framework much like that of multiple regression:

$\text{transformation}(p_i) = \beta _0 + \beta _1x_{1;i} + \beta _2x_{2;i} + \dots + \beta _kx_{k;i} \label {8.19}$

We want to choose a transformation in Equation \ref{8.19} that makes practical and mathematical sense. For example, we want a transformation that makes the range of possibilities on the left hand side of Equation \ref{8.19} equal to the range of possibilities for the right hand side; if there was no transformation for this equation, the left hand side could only take values between 0 and 1, but the right hand side could take values outside of this range. A common transformation for $p_i$ is the logit transformation, which may be written as

$\text{logit}(p_i) = \log_e \left(\dfrac {p_i}{1 - p_i}\right)$

The logit transformation is shown in Figure 8.14. Below, we rewrite Equation \ref{8.19} using the logit transformation of $p_i$:

$\log_e \left(\dfrac {p_i}{1 - p_i}\right) = \beta _0 + \beta _1x_{1;i} + \beta _2x_{2;i} + \dots + \beta _kx_{k;i}$

In our spam example, there are 10 predictor variables, so k = 10. This model isn't very intuitive, but it still has some resemblance to multiple regression, and we can fit this model using software. In fact, once we look at results from software, it will start to feel like we're back in multiple regression, even if the interpretation of the coefficients is more complex.

Example $1$

Here we create a spam filter with a single predictor: to_multiple. This variable indicates whether more than one email address was listed in the To field of the email. The following logistic regression model was fit using statistical software:

$\log \left(\dfrac {p_i}{1 - p_i}\right) = -2.12 - 1.81 \times \text{to\_multiple}$

If an email is randomly selected and it has just one address in the To field, what is the probability it is spam? What if more than one address is listed in the To field?

Solution

If there is only one address in the To field, then to_multiple takes value 0 and the right side of the model equation equals -2.12. Solving for $p_i$: $\dfrac {e^{-2.12}}{1+e^{-2.12}} = 0.11$. Just as we labeled a fitted value of $y_i$ with a "hat" in single-variable and multiple regression, we will do the same for this probability: $\hat {p}_i = 0.11$. If there is more than one address listed in the To field, then the right side of the model equation is $-2.12 - 1.81 \times 1 = -3.93$, which corresponds to a probability $\hat {p}_i = 0.02$. Notice that we could examine -2.12 and -3.93 in Figure 8.14 to estimate the probability before formally calculating the value.
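The arithmetic in this example is easy to script. The helper below is a minimal sketch of the back-transformation from the log-odds scale to a probability (the general formula is given next); the coefficient values are the ones quoted in Example 1.

```python
import numpy as np

def inv_logit(eta):
    """Map a value on the log-odds (logit) scale back to a probability."""
    return np.exp(eta) / (1 + np.exp(eta))

b0, b1 = -2.12, -1.81                  # single-predictor spam model from Example 1
print(round(inv_logit(b0), 2))         # to_multiple = 0  ->  about 0.11
print(round(inv_logit(b0 + b1), 2))    # to_multiple = 1  ->  about 0.02
```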
To convert from values on the regression-scale (e.g. -2.12 and -3.93 in Example $1$), use the following formula, which is the result of solving for $p_i$ in the regression model:

$p_i = \dfrac {e^{\beta _0+ \beta _1x_{1;i}+ \dots+ \beta _kx_{k;i}}}{1 + e^{\beta _0+ \beta _1x_{1;i}+ \dots + \beta _kx_{k;i}}}$

As with most applied data problems, we substitute the point estimates for the parameters (the $\beta _i$) so that we may make use of this formula. In Example $1$, the probabilities were calculated as

$\dfrac {e^{-2.12}}{1 + e^{-2.12}} = 0.11 \qquad \dfrac {e^{-2.12-1.81}}{1 + e^{-2.12-1.81}} = 0.02$

While the information about whether the email is addressed to multiple people is a helpful start in classifying email as spam or not, the probabilities of 11% and 2% are not dramatically different, and neither provides very strong evidence about which particular email messages are spam. To get more precise estimates, we'll need to include many more variables in the model.

We used statistical software to fit the logistic regression model with all ten predictors described in Table $1$. Like multiple regression, the result may be presented in a summary table, which is shown in Table $2$. The structure of this table is almost identical to that of multiple regression; the only notable difference is that the p-values are calculated using the normal distribution rather than the t distribution.

Just like multiple regression, we could trim some variables from the model using the p-value. Using backwards elimination with a p-value cutoff of 0.05 (start with the full model and trim the predictors with p-values greater than 0.05), we ultimately eliminate the exclaim_subj, dollar, inherit, and cc predictors. The remainder of this section will rely on this smaller model, which is summarized in Table $3$.

Exercise $1$

Examine the summary of the reduced model in Table $3$, and in particular, examine the to_multiple row. Is the point estimate the same as we found before, -1.81, or is it different? Explain why this might be.

Solution

The new estimate is different: -2.87. This new value represents the estimated coefficient when we are also accounting for other variables in the logistic regression model.

Table $2$: Summary table for the full logistic regression model for the spam filter example.

               Estimate   Std. Error   z value   Pr(>|z|)
(Intercept)     -0.8362       0.0962     -8.69     0.0000
to_multiple     -2.8836       0.3121     -9.24     0.0000
winner           1.7038       0.3254      5.24     0.0000
format          -1.5902       0.1239    -12.84     0.0000
re_subj         -2.9082       0.3708     -7.84     0.0000
exclaim_subj     0.1355       0.2268      0.60     0.5503
cc              -0.4863       0.3054     -1.59     0.1113
attach           0.9790       0.2170      4.51     0.0000
dollar          -0.0582       0.1589     -0.37     0.7144
inherit          0.2093       0.3197      0.65     0.5127
password        -1.4929       0.5295     -2.82     0.0048

Table $3$: Summary table for the logistic regression model for the spam filter, where variable selection has been performed.

               Estimate   Std. Error   z value   Pr(>|z|)
(Intercept)     -0.8595       0.0910     -9.44     0.0000
to_multiple     -2.8836       0.3092     -9.18     0.0000
winner           1.7370       0.3218      5.40     0.0000
format          -1.5569       0.1207    -12.90     0.0000
re_subj         -3.0482       0.3630     -8.40     0.0000
attach           0.8643       0.2042      4.23     0.0000
password        -1.4871       0.5290     -2.81     0.0049

Point estimates will generally change a little - and sometimes a lot - depending on which other variables are included in the model. This is usually due to collinearity in the predictor variables.
We previously saw this in the eBay auction example when we compared the coefficient of cond_new in a single-variable model and the corresponding coefficient in the multiple regression model that used three additional variables (see Sections 8.1.1 and 8.1.2).

Example $2$

Spam filters are built to be automated, meaning a piece of software is written to collect information about emails as they arrive, and this information is put in the form of variables. These variables are then put into an algorithm that uses a statistical model, like the one we've fit, to classify the email. Suppose we write software for a spam filter using the reduced model shown in Table $3$. If an incoming email has the word "winner" in it, will this raise or lower the model's calculated probability that the incoming email is spam?

Solution

The estimated coefficient of winner is positive (1.7370). A positive coefficient estimate in logistic regression, just like in multiple regression, corresponds to a positive association between the predictor and response variables when accounting for the other variables in the model. Since the response variable takes value 1 if an email is spam and 0 otherwise, the positive coefficient indicates that the presence of "winner" in an email raises the model probability that the message is spam.

Example $3$

Suppose the same email from Example $2$ was in HTML format, meaning the format variable took value 1. Does this characteristic increase or decrease the probability that the email is spam according to the model?

Solution

Since HTML corresponds to a value of 1 in the format variable and the coefficient of this variable is negative (-1.5569), this would lower the probability estimate returned from the model.

Practical decisions in the email application

Examples $2$ and $3$ highlight a key feature of logistic and multiple regression. In the spam filter example, some email characteristics will push an email's classification in the direction of spam while other characteristics will push it in the opposite direction. If we were to implement a spam filter using the model we have fit, then each future email we analyze would fall into one of three categories based on the email's characteristics:

1. The email characteristics generally indicate the email is not spam, and so the resulting probability that the email is spam is quite low, say, under 0.05.
2. The characteristics generally indicate the email is spam, and so the resulting probability that the email is spam is quite large, say, over 0.95.
3. The characteristics roughly balance each other out in terms of evidence for and against the message being classified as spam. Its probability falls in the remaining range, meaning the email cannot be adequately classified as spam or not spam.

If we were managing an email service, we would have to think about what should be done in each of these three instances. In an email application, there are usually just two possibilities: filter the email out from the regular inbox and put it in a "spambox", or let the email go to the regular inbox.

Exercise $2$

The first and second scenarios are intuitive. If the evidence strongly suggests a message is not spam, send it to the inbox. If the evidence strongly suggests the message is spam, send it to the spambox. How should we handle emails in the third category?

Solution

In this particular application, we should err on the side of sending more mail to the inbox rather than mistakenly putting good messages in the spambox.
So, in summary: emails in the first and last categories go to the regular inbox, and those in the second scenario go to the spambox.

Exercise $3$

Suppose we apply the logistic model we have built as a spam filter and that 100 messages are placed in the spambox over 3 months. If we used the guidelines above for putting messages into the spambox, about how many legitimate (non-spam) messages would you expect to find among the 100 messages?

Solution

First, note that we proposed a cutoff for the predicted probability of 0.95 for spam. In a worst case scenario, all the messages in the spambox had the minimum probability equal to about 0.95. Thus, we should expect to find about 5 or fewer legitimate messages among the 100 messages placed in the spambox.

Almost any classifier will have some error. In the spam filter guidelines above, we have decided that it is okay to allow up to 5% of the messages in the spambox to be real messages. If we wanted to make it a little harder to classify messages as spam, we could use a cutoff of 0.99. This would have two effects. Because it raises the standard for what can be classified as spam, it reduces the number of good emails that are classified as spam. However, it will also fail to correctly classify an increased fraction of spam messages. No matter the complexity and the confidence we might have in our model, these practical considerations are absolutely crucial to making a helpful spam filter. Without them, we could actually do more harm than good by using our statistical model.

Diagnostics for the email classifier

Logistic regression conditions

There are two key conditions for fitting a logistic regression model:

1. The model relating the parameter $p_i$ to the predictors $x_{1;i}, x_{2;i},\dots, x_{k;i}$ closely resembles the true relationship between the parameter and the predictors.
2. Each outcome $Y_i$ is independent of the other outcomes.

The first condition of the logistic regression model is not easily checked without a fairly sizable amount of data. Luckily, we have 3,921 emails in our data set! Let's first visualize these data by plotting the true classification of the emails against the model's fitted probabilities, as shown in Figure $2$. The vast majority of emails (spam or not) still have fitted probabilities below 0.5. This may at first seem very discouraging: we have fit a logistic model to create a spam filter, but no emails have a fitted probability of being spam above 0.75. Don't despair; we will discuss ways to improve the model through the use of better variables in Section 8.4.5.

We'd like to assess the quality of our model. For example, we might ask: if we look at emails that we modeled as having a 10% chance of being spam, do we find about 10% of them actually are spam? To help us out, we'll borrow an advanced statistical method called natural splines that estimates the local probability over the region 0.00 to 0.75 (the largest predicted probability was 0.73, so we avoid extrapolating). All you need to know about natural splines to understand what we are doing is that they are used to fit flexible lines rather than straight lines. The curve fit using natural splines is shown in Figure $3$ as a solid black line. If the logistic model fits well, the curve should closely follow the dashed $y = x$ line. We have added shading to represent the confidence bound for the curved line to clarify what fluctuations might plausibly be due to chance. Even with this confidence bound, there are weaknesses in the first model assumption.
The solid curve and its confidence bound dip below the dashed line from about 0.1 to 0.3, and then drift above the dashed line from about 0.35 to 0.55. These deviations indicate the model relating the parameter to the predictors does not closely resemble the true relationship.

We could evaluate the second logistic regression model assumption - independence of the outcomes - using the model residuals. The residuals for a logistic regression model are calculated the same way as with multiple regression: the observed outcome minus the expected outcome. For logistic regression, the expected value of the outcome is the fitted probability for the observation, and the residual may be written as

$e_i = Y_i - \hat {p}_i$

We could plot these residuals against a variety of variables or in their order of collection, as we did with the residuals in multiple regression. However, since we know the model will need to be revised to effectively classify spam and you have already seen similar residual plots in Section 8.3, we won't investigate the residuals here.

Improving the set of variables for a spam filter

If we were building a spam filter for an email service that managed many accounts (e.g. Gmail or Hotmail), we would spend much more time thinking about additional variables that could be useful in classifying emails as spam or not. We also would use transformations or other techniques that would help us include strongly skewed numerical variables as predictors.

Take a few minutes to think about additional variables that might be useful in identifying spam. Below is a list of variables we think might be useful:

1. An indicator variable could be used to represent whether there was prior two-way correspondence with a message's sender. For instance, if you sent a message to [email protected] and then John sent you an email, this variable would take value 1 for the email that John sent. If you had never sent John an email, then the variable would be set to 0.
2. A second indicator variable could utilize an account's past spam flagging information. The variable could take value 1 if the sender of the message has previously sent messages flagged as spam.
3. A third indicator variable could flag emails that contain links included in previous spam messages. If such a link is found, then set the variable to 1 for the email; otherwise, set it to 0.

The variables described above take one of two approaches. Variable (1) is specially designed to capitalize on the fact that spam is rarely sent between individuals that have two-way communication. Variables (2) and (3) are specially designed to flag common spammers or spam messages. While we would have to verify using the data that each of the variables is effective, these seem like promising ideas.

Table $4$ shows a contingency table for spam and also for the new variable described in (1) above. If we look at the 1,090 emails where there was correspondence with the sender in the preceding 30 days, not one of these messages was spam. This suggests variable (1) would be very effective at accurately classifying some messages as not spam. With this single variable, we would be able to send about 28% of messages through to the inbox with confidence that almost none are spam.

Table $4$: A contingency table for spam and a new variable that represents whether there had been correspondence with the sender in the preceding 30 days.
                 prior correspondence
               no       yes     Total
spam          367         0       367
not spam     2464      1090      3554
Total        2831      1090      3921

The variables described in (2) and (3) would provide an excellent foundation for distinguishing messages coming from known spammers or messages that take a known form of spam. To utilize these variables, we would need to build databases: one holding email addresses of known spammers, and one holding URLs found in known spam messages. Our access to such information is limited, so we cannot implement these two variables in this textbook. However, if we were hired by an email service to build a spam filter, these would be important next steps.

In addition to finding more and better predictors, we would need to create a customized logistic regression model for each email account. This may sound like an intimidating task, but its complexity is not as daunting as it may at first seem. We'll save the details for a statistics course where computer programming plays a more central role.

For what is the extremely challenging task of classifying spam messages, we have made a lot of progress. We have seen that simple email variables, such as the format, inclusion of certain words, and other circumstantial characteristics, provide helpful information for spam classification. Many challenges remain, from better understanding logistic regression to carrying out the necessary computer programming, but completing such a task is very nearly within your reach.
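For readers who want to experiment, the steps in this section can be sketched in a few lines of code. The outline below is illustrative rather than a reproduction of the analysis above: it assumes a DataFrame `email` containing the 0/1 spam outcome and the ten indicator variables from Table 1, fits the full logistic model, and then performs a simple binned calibration check in place of the natural-spline curve used earlier.

```python
# Rough outline only; `email` is a hypothetical DataFrame with the variables in Table 1.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

formula = ("spam ~ to_multiple + winner + format + re_subj + exclaim_subj + "
           "cc + attach + dollar + inherit + password")
fit = smf.logit(formula, data=email).fit()
print(fit.summary())                      # estimates, SEs, z values, and p-values

# Crude calibration check: within bins of fitted probability, does the observed
# spam rate track the average fitted probability?
check = pd.DataFrame({"p_hat": fit.predict(), "spam": email["spam"]})
check["bin"] = pd.cut(check["p_hat"], np.arange(0.0, 0.80, 0.05))
print(check.groupby("bin")[["p_hat", "spam"]].mean())
```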
textbooks/stats/Introductory_Statistics/OpenIntro_Statistics_(Diez_et_al)./08%3A_Multiple_and_Logistic_Regression/8.04%3A_Introduction_to_Logistic_Regression.txt
Introduction to multiple regression

8.1 Baby weights, Part I. The Child Health and Development Studies investigate a range of topics. One study considered all pregnancies between 1960 and 1967 among women in the Kaiser Foundation Health Plan in the San Francisco East Bay area. Here, we study the relationship between smoking and weight of the baby. The variable smoke is coded 1 if the mother is a smoker, and 0 if not. The summary table below shows the results of a linear regression model for predicting the average birth weight of babies, measured in ounces, based on the smoking status of the mother.17

             Estimate   Std. Error   t value   Pr(>|t|)
(Intercept)    123.05         0.65    189.60     0.0000
smoke           -8.94         1.03     -8.65     0.0000

The variability within the smokers and non-smokers is about equal and the distributions are symmetric. With these conditions satisfied, it is reasonable to apply the model. (Note that we don't need to check linearity since the predictor has only two levels.)

(a) Write the equation of the regression line.
(b) Interpret the slope in this context, and calculate the predicted birth weight of babies born to smoker and non-smoker mothers.
(c) Is there a statistically significant relationship between the average birth weight and smoking?

8.2 Baby weights, Part II. Exercise 8.1 introduces a data set on birth weight of babies. Another variable we consider is parity, which is 0 if the child is the first born, and 1 otherwise. The summary table below shows the results of a linear regression model for predicting the average birth weight of babies, measured in ounces, from parity.

             Estimate   Std. Error   t value   Pr(>|t|)
(Intercept)    120.07         0.60    199.94     0.0000
parity          -1.93         1.19     -1.62     0.1052

(a) Write the equation of the regression line.
(b) Interpret the slope in this context, and calculate the predicted birth weight of first borns and others.
(c) Is there a statistically significant relationship between the average birth weight and parity?

17Child Health and Development Studies, Baby weights data set.

8.3 Baby weights, Part III. We considered the variables smoke and parity, one at a time, in modeling birth weights of babies in Exercises 8.1 and 8.2. A more realistic approach to modeling infant weights is to consider all possibly related variables at once. Other variables of interest include length of pregnancy in days (gestation), mother's age in years (age), mother's height in inches (height), and mother's pregnancy weight in pounds (weight). Below are three observations from this data set.

        bwt   gestation   parity   age   height   weight   smoke
1       120         284        0    27       62      100       0
2       113         282        0    33       64      135       0
⋮
1236    117         297        0    38       65      129       0

The summary table below shows the results of a regression model for predicting the average birth weight of babies based on all of the variables included in the data set.

             Estimate   Std. Error   t value   Pr(>|t|)
(Intercept)    -80.41        14.35     -5.60     0.0000
gestation        0.44         0.03     15.26     0.0000
parity          -3.33         1.13     -2.95     0.0033
age             -0.01         0.09     -0.10     0.9170
height           1.15         0.21      5.63     0.0000
weight           0.05         0.03      1.99     0.0471
smoke           -8.40         0.95     -8.81     0.0000

(a) Write the equation of the regression line that includes all of the variables.
(b) Interpret the slopes of gestation and age in this context.
(c) The coefficient for parity is different than in the linear model shown in Exercise 8.2. Why might there be a difference?
(d) Calculate the residual for the first observation in the data set.
(e) The variance of the residuals is 249.28, and the variance of the birth weights of all babies in the data set is 332.57. Calculate the $R^2$ and the adjusted $R^2$. Note that there are 1,236 observations in the data set.

8.4 Absenteeism. Researchers interested in the relationship between absenteeism from school and certain demographic characteristics of children collected data from 146 randomly sampled students in rural New South Wales, Australia, in a particular school year. Below are three observations from this data set.

       eth   sex   lrn   days
1        0     1     1      2
2        0     1     1     11
⋮
146      1     0     0     37

The summary table below shows the results of a linear regression model for predicting the average number of days absent based on ethnic background (eth: 0 - aboriginal, 1 - not aboriginal), sex (sex: 0 - female, 1 - male), and learner status (lrn: 0 - average learner, 1 - slow learner).18

             Estimate   Std. Error   t value   Pr(>|t|)
(Intercept)     18.93         2.57      7.37     0.0000
eth             -9.11         2.60     -3.51     0.0000
sex              3.10         2.64      1.18     0.2411
lrn              2.15         2.65      0.81     0.4177

(a) Write the equation of the regression line.
(b) Interpret each one of the slopes in this context.
(c) Calculate the residual for the first observation in the data set: a student who is aboriginal, male, a slow learner, and missed 2 days of school.
(d) The variance of the residuals is 240.57, and the variance of the number of absent days for all students in the data set is 264.17. Calculate the $R^2$ and the adjusted $R^2$. Note that there are 146 observations in the data set.

8.5 GPA. A survey of 55 Duke University students asked about their GPA, the number of hours they spend studying per week, the number of hours they sleep at night, the number of nights they go out per week, and their gender. Summary output of the regression model is shown below. Note that male is coded as 1.

             Estimate   Std. Error   t value   Pr(>|t|)
(Intercept)      3.45         0.35      9.85       0.00
studyweek        0.00         0.00      0.27       0.79
sleepnight       0.01         0.05      0.11       0.91
outnight         0.05         0.05      1.01       0.32
gender          -0.08         0.12     -0.68       0.50

(a) Calculate a 95% confidence interval for the coefficient of gender in the model, and interpret it in the context of the data.
(b) Would you expect a 95% confidence interval for the slope of the remaining variables to include 0? Explain.

18W. N. Venables and B. D. Ripley. Modern Applied Statistics with S. Fourth Edition. Data can also be found in the R MASS package. New York: Springer, 2002.

8.6 Cherry trees. Timber yield is approximately equal to the volume of a tree, however, this value is difficult to measure without first cutting the tree down. Instead, other variables, such as height and diameter, may be used to predict a tree's volume and yield. Researchers wanting to understand the relationship between these variables for black cherry trees collected data from 31 such trees in the Allegheny National Forest, Pennsylvania. Height is measured in feet, diameter in inches (at 54 inches above ground), and volume in cubic feet.19

             Estimate   Std. Error   t value   Pr(>|t|)
(Intercept)    -57.99         8.64     -6.71       0.00
height           0.34         0.13      2.61       0.01
diameter         4.71         0.26     17.82       0.00

(a) Calculate a 95% confidence interval for the coefficient of height, and interpret it in the context of the data.
(b) One tree in this sample is 79 feet tall, has a diameter of 11.3 inches, and is 24.2 cubic feet in volume. Determine if the model overestimates or underestimates the volume of this tree, and by how much.

Model selection

8.7 Baby weights, Part IV. Exercise 8.3 considers a model that predicts a newborn's weight using several predictors.
Use the regression table below, which summarizes the model, to answer the following questions. If necessary, refer back to Exercise 8.3 for a reminder about the meaning of each variable.

             Estimate   Std. Error   t value   Pr(>|t|)
(Intercept)    -80.41        14.35     -5.60     0.0000
gestation        0.44         0.03     15.26     0.0000
parity          -3.33         1.13     -2.95     0.0033
age             -0.01         0.09     -0.10     0.9170
height           1.15         0.21      5.63     0.0000
weight           0.05         0.03      1.99     0.0471
smoke           -8.40         0.95     -8.81     0.0000

(a) Determine which variables, if any, do not have a significant linear relationship with the outcome and should be candidates for removal from the model. If there is more than one such variable, indicate which one should be removed first.
(b) The summary table below shows the results of the model with the age variable removed. Determine if any other variable(s) should be removed from the model.

             Estimate   Std. Error   t value   Pr(>|t|)
(Intercept)    -80.64        14.04     -5.74     0.0000
gestation        0.44         0.03     15.28     0.0000
parity          -3.29         1.06     -3.10     0.0020
height           1.15         0.20      5.64     0.0000
weight           0.05         0.03      2.00     0.0459
smoke           -8.38         0.95     -8.82     0.0000

19D.J. Hand. A handbook of small data sets. Chapman & Hall/CRC, 1994.

8.8 Absenteeism, Part II. Exercise 8.4 considers a model that predicts the number of days absent using three predictors: ethnic background (eth), gender (sex), and learner status (lrn). Use the regression table below to answer the following questions. If necessary, refer back to Exercise 8.4 for additional details about each variable.

             Estimate   Std. Error   t value   Pr(>|t|)
(Intercept)     18.93         2.57      7.37     0.0000
eth             -9.11         2.60     -3.51     0.0000
sex              3.10         2.64      1.18     0.2411
lrn              2.15         2.65      0.81     0.4177

(a) Determine which variables, if any, do not have a significant linear relationship with the outcome and should be candidates for removal from the model. If there is more than one such variable, indicate which one should be removed first.
(b) The summary table below shows the results of the regression we refit after removing learner status from the model. Determine if any other variable(s) should be removed from the model.

             Estimate   Std. Error   t value   Pr(>|t|)
(Intercept)     19.98         2.22      9.01     0.0000
eth             -9.06         2.60     -3.49     0.0006
sex              2.78         2.60      1.07     0.2878

8.9 Baby weights, Part V. Exercise 8.3 provides regression output for the full model (including all explanatory variables available in the data set) for predicting birth weight of babies. In this exercise we consider a forward-selection algorithm and add variables to the model one-at-a-time. The table below shows the p-value and adjusted $R^2$ of each model where we include only the corresponding predictor. Based on this table, which variable should be added to the model first?

variable      gestation              parity   age      height                   weight                 smoke
p-value       $2.2\times 10^{-16}$   0.1052   0.2375   $2.97 \times 10^{-12}$   $8.2 \times 10^{-8}$   $2.2 \times 10^{-16}$
$R^2_{adj}$   0.1657                 0.0013   0.0003   0.0386                   0.0229                 0.0569

8.10 Absenteeism, Part III. Exercise 8.4 provides regression output for the full model, including all explanatory variables available in the data set, for predicting the number of days absent from school. In this exercise we consider a forward-selection algorithm and add variables to the model one-at-a-time. The table below shows the p-value and adjusted $R^2$ of each model where we include only the corresponding predictor. Based on this table, which variable should be added to the model first?

variable      ethnicity   sex      learner status
p-value       0.0007      0.3142   0.5870
$R^2_{adj}$   0.0714      0.0001   0

Checking model assumptions using graphs

8.11 Baby weights, Part V.
Exercise 8.7 presents a regression model for predicting the average birth weight of babies based on length of gestation, parity, height, weight, and smoking status of the mother. Determine if the model assumptions are met using the plots below. If not, describe how to proceed with the analysis.

8.12 GPA and IQ. A regression model for predicting GPA from gender and IQ was fit, and both predictors were found to be statistically significant. Using the plots given below, determine if this regression model is appropriate for these data.

Logistic regression

8.13 Possum classification, Part I. The common brushtail possum of the Australia region is a bit cuter than its distant cousin, the American opossum (see Figure 7.5 on page 318). We consider 104 brushtail possums from two regions in Australia, where the possums may be considered a random sample from the population. The first region is Victoria, which is in the eastern half of Australia and traverses the southern coast. The second region consists of New South Wales and Queensland, which make up eastern and northeastern Australia. We use logistic regression to differentiate between possums in these two regions. The outcome variable, called population, takes value 1 when a possum is from Victoria and 0 when it is from New South Wales or Queensland. We consider five predictors: sex male (an indicator for a possum being male), head length, skull width, total length, and tail length. Each variable is summarized in a histogram. The full logistic regression model and a reduced model after variable selection are summarized in the table.

Full Model
               Estimate        SE       Z   Pr(>|Z|)
(Intercept)     39.2349   11.5368    3.40     0.0007
sex male        -1.2376    0.6662   -1.86     0.0632
head length     -0.1601    0.1386   -1.16     0.2480
skull width     -0.2012    0.1327   -1.52     0.1294
total length     0.6488    0.1531    4.24     0.0000
tail length     -1.8708    0.3741   -5.00     0.0000

Reduced Model
               Estimate        SE       Z   Pr(>|Z|)
(Intercept)     33.5095    9.9053    3.38     0.0007
sex male        -1.4207    0.6457   -2.20     0.0278
skull width     -0.2787    0.1226   -2.27     0.0231
total length     0.5687    0.1322    4.30     0.0000
tail length     -1.8057    0.3599   -5.02     0.0000

(a) Examine each of the predictors. Are there any outliers that are likely to have a very large influence on the logistic regression model?
(b) The summary table for the full model indicates that at least one variable should be eliminated when using the p-value approach for variable selection: head length. The second component of the table summarizes the reduced model following variable selection. Explain why the remaining estimates change between the two models.

8.14 Challenger disaster, Part I. On January 28, 1986, a routine launch was anticipated for the Challenger space shuttle. Seventy-three seconds into the flight, disaster happened: the shuttle broke apart, killing all seven crew members on board. An investigation into the cause of the disaster focused on a critical seal called an O-ring, and it is believed that damage to these O-rings during a shuttle launch may be related to the ambient temperature during the launch. The table below summarizes observational data on O-rings for 23 shuttle missions, where the mission order is based on the temperature at the time of the launch. Temp gives the temperature in Fahrenheit, Damaged represents the number of damaged O-rings, and Undamaged represents the number of O-rings that were not damaged.
Shuttle Mission    1   2   3   4   5   6   7   8   9  10  11  12
Temperature       53  57  58  63  66  67  67  67  68  69  70  70
Damaged            5   1   1   1   0   0   0   0   0   0   1   0
Undamaged          1   5   5   5   6   6   6   6   6   6   5   6

Shuttle Mission   13  14  15  16  17  18  19  20  21  22  23
Temperature       70  70  72  73  75  75  76  76  78  79  81
Damaged            1   0   0   0   0   1   0   0   0   0   0
Undamaged          5   6   6   6   6   5   6   6   6   6   6

(a) Each column of the table above represents a different shuttle mission. Examine these data and describe what you observe with respect to the relationship between temperatures and damaged O-rings.
(b) Failures have been coded as 1 for a damaged O-ring and 0 for an undamaged O-ring, and a logistic regression model was fit to these data. A summary of this model is given below. Describe the key components of this summary table in words.

              Estimate   Std. Error   z value   Pr(>|z|)
(Intercept)    11.6630       3.2963      3.54     0.0004
Temperature    -0.2162       0.0532     -4.07     0.0000

(c) Write out the logistic model using the point estimates of the model parameters.
(d) Based on the model, do you think concerns regarding O-rings are justified? Explain.

8.15 Possum classification, Part II. A logistic regression model was proposed for classifying common brushtail possums into their two regions in Exercise 8.13. Use the results of the summary table for the reduced model presented in Exercise 8.13 for the questions below. The outcome variable took value 1 if the possum was from Victoria and 0 otherwise.

(a) Write out the form of the model. Also identify which of the following variables are positively associated (when controlling for other variables) with a possum being from Victoria: skull width, total length, and tail length.
(b) Suppose we see a brushtail possum at a zoo in the US, and a sign says the possum had been captured in the wild in Australia, but it doesn't say which part of Australia. However, the sign does indicate that the possum is male, its skull is about 63 mm wide, its tail is 37 cm long, and its total length is 83 cm. What is the reduced model's computed probability that this possum is from Victoria? How confident are you in the model's accuracy of this probability calculation?

8.16 Challenger disaster, Part II. Exercise 8.14 introduced us to O-rings that were identified as a plausible explanation for the breakup of the Challenger space shuttle 73 seconds into takeoff in 1986. The investigation found that the ambient temperature at the time of the shuttle launch was closely related to the damage of O-rings, which are a critical component of the shuttle. See this earlier exercise if you would like to browse the original data.

(a) The data provided in the previous exercise are shown in the plot. The logistic model fit to these data may be written as

$\log \left(\frac {\hat {p}}{1 - \hat {p}}\right) = 11.6630 - 0.2162 \times \text {Temperature}$

where $\hat {p}$ is the model-estimated probability that an O-ring will become damaged. Use the model to calculate the probability that an O-ring will become damaged at each of the following ambient temperatures: 51, 53, and 55 degrees Fahrenheit. The model-estimated probabilities for several additional ambient temperatures are provided below, where subscripts indicate the temperature:

$\hat {p}_{57} = 0.341, \quad \hat {p}_{59} = 0.251, \quad \hat {p}_{61} = 0.179, \quad \hat {p}_{63} = 0.124$
$\hat {p}_{65} = 0.084, \quad \hat {p}_{67} = 0.056, \quad \hat {p}_{69} = 0.037, \quad \hat {p}_{71} = 0.024$

(b) Add the model-estimated probabilities from part (a) on the plot, then connect these dots using a smooth curve to represent the model-estimated probabilities.
(c) Describe any concerns you may have regarding applying logistic regression in this application, and note any assumptions that are required to accept the model's validity. Contributors David M Diez (Google/YouTube), Christopher D Barr (Harvard School of Public Health), Mine Çetinkaya-Rundel (Duke University)
textbooks/stats/Introductory_Statistics/OpenIntro_Statistics_(Diez_et_al)./08%3A_Multiple_and_Logistic_Regression/8.05%3A_Exercises.txt
Learning Objectives

Having read this chapter, you should be able to:

• Describe the central goals and fundamental concepts of statistics
• Describe the difference between experimental and observational research with regard to what can be inferred about causality
• Explain how randomization provides the ability to make inferences about causation.

“Statistical thinking will one day be as necessary for efficient citizenship as the ability to read and write.” - H.G. Wells

• 1.1: What Is Statistical Thinking? Statistical thinking is a way of understanding a complex world by describing it in relatively simple terms that nonetheless capture essential aspects of its structure, and that also provide us some idea of how uncertain we are about our knowledge. The foundations of statistical thinking come primarily from mathematics and statistics, but also from computer science, psychology, and other fields of study.
• 1.2: Dealing with Statistics Anxiety Anxiety feels uncomfortable, but psychology tells us that this kind of emotional arousal can actually help us perform better on many tasks, by focusing our attention. So if you start to feel anxious about the material in this course, remind yourself that many others in the class are feeling similarly, and that the arousal could actually help you perform better (even if it doesn’t seem like it!).
• 1.3: What Can Statistics Do for Us? There are three major things that we can do with statistics: (1) Describe: The world is complex and we often need to describe it in a simplified way that we can understand. (2) Decide: We often need to make decisions based on data, usually in the face of uncertainty. (3) Predict: We often wish to make predictions about new situations based on our knowledge of previous situations.
• 1.4: The Big Ideas of Statistics There are a number of very basic ideas that cut through nearly all aspects of statistical thinking. Several of these are outlined by Stigler (2016) in his outstanding book “The Seven Pillars of Statistical Wisdom”, which I have augmented here.
• 1.5: Causality and Statistics
• 1.6: Suggested Readings

01: Introduction

Statistical thinking is a way of understanding a complex world by describing it in relatively simple terms that nonetheless capture essential aspects of its structure, and that also provide us some idea of how uncertain we are about our knowledge. The foundations of statistical thinking come primarily from mathematics and statistics, but also from computer science, psychology, and other fields of study.

We can distinguish statistical thinking from other forms of thinking that are less likely to describe the world accurately. In particular, human intuition often tries to answer the same questions that we can answer using statistical thinking, but often gets the answer wrong. For example, in recent years most Americans have reported that they think that violent crime was worse compared to the previous year (Pew Research Center). However, a statistical analysis of the actual crime data shows that in fact violent crime has steadily decreased since the 1990’s. Intuition fails us because we rely upon best guesses (which psychologists refer to as heuristics) that can often get it wrong. For example, humans often judge the prevalence of some event (like violent crime) using an availability heuristic – that is, how easily can we think of an example of violent crime.
For this reason, our judgments of increasing crime rates may be more reflective of increasing news coverage, in spite of an actual decrease in the rate of crime. Statistical thinking provides us with the tools to more accurately understand the world and overcome the fallibility of human intuition.

1.02: Dealing with Statistics Anxiety

Many people come to their first statistics class with a lot of trepidation and anxiety, especially once they hear that they will also have to learn to code in order to analyze data. In my class I give students a survey prior to the first session in order to measure their attitude towards statistics, asking them to rate a number of statements on a scale of 1 (strongly disagree) to 7 (strongly agree). One of the items on the survey is “The thought of being enrolled in a statistics course makes me nervous”. In the most recent class, almost two-thirds of the class responded with a five or higher, and about one-fourth of the students said that they strongly agreed with the statement. So if you feel nervous about starting to learn statistics, you are not alone.

Anxiety feels uncomfortable, but psychology tells us that this kind of emotional arousal can actually help us perform better on many tasks, by focusing our attention. So if you start to feel anxious about the material in this course, remind yourself that many others in the class are feeling similarly, and that the arousal could actually help you perform better (even if it doesn’t seem like it!).
textbooks/stats/Introductory_Statistics/Statistical_Thinking_for_the_21st_Century_(Poldrack)/01%3A_Introduction/1.01%3A_What_Is_Statistical_Thinking%3F.txt
There are three major things that we can do with statistics:

• Describe: The world is complex and we often need to describe it in a simplified way that we can understand.
• Decide: We often need to make decisions based on data, usually in the face of uncertainty.
• Predict: We often wish to make predictions about new situations based on our knowledge of previous situations.

Let’s look at an example of these in action, centered on a question that many of us are interested in: How do we decide what’s healthy to eat? There are many different sources of guidance, from government dietary guidelines to diet books to bloggers. Let’s focus in on a specific question: Is saturated fat in our diet a bad thing?

One way that we might answer this question is common sense. If we eat fat then it’s going to turn straight into fat in our bodies, right? And we have all seen photos of arteries clogged with fat, so eating fat is going to clog our arteries, right?

Another way that we might answer this question is by listening to authority figures. The Dietary Guidelines from the US Food and Drug Administration have as one of their Key Recommendations that “A healthy eating pattern limits saturated fats”. You might hope that these guidelines would be based on good science, and in some cases they are, but as Nina Teicholz outlined in her book “Big Fat Surprise” (Teicholz 2014), this particular recommendation seems to be based more on the dogma of nutrition researchers than on actual evidence.

Finally, we might look at actual scientific research. Let’s start by looking at a large study called the PURE study, which has examined diets and health outcomes (including death) in more than 135,000 people from 18 different countries. In one of the analyses of this dataset (published in The Lancet in 2017; Dehghan et al. (2017)), the PURE investigators reported an analysis of how intake of various classes of macronutrients (including saturated fats and carbohydrates) was related to the likelihood of dying during the time that people were followed. People were followed for a median of 7.4 years, meaning that half of the people in the study were followed for less and half were followed for more than 7.4 years. Figure 1.1 plots some of the data from the study (extracted from the paper), showing the relationship between the intake of both saturated fats and carbohydrates and the risk of dying from any cause.

This plot is based on ten numbers. To obtain these numbers, the researchers split the group of 135,335 study participants (which we call the “sample”) into 5 groups (“quintiles”) after ordering them in terms of their intake of either of the nutrients; the first quintile contains the 20% of people with the lowest intake, and the 5th quintile contains the 20% with the highest intake. The researchers then computed how often people in each of those groups died during the time they were being followed. The figure expresses this in terms of the relative risk of dying in comparison to the lowest quintile: If this number is greater than 1 it means that people in the group are more likely to die than are people in the lowest quintile, whereas if it’s less than one it means that people in the group are less likely to die. The figure is pretty clear: People who ate more saturated fat were less likely to die during the study, with the lowest death rate seen for people who were in the fourth quintile (that is, who ate more fat than the lowest 60% but less than the top 20%).
The opposite is seen for carbohydrates; the more carbs a person ate, the more likely they were to die during the study. This example shows how we can use statistics to describe a complex dataset in terms of a much simpler set of numbers; if we had to look at the data from each of the study participants at the same time, we would be overloaded with data and it would be hard to see the pattern that emerges when they are described more simply. The numbers in Figure 1.1 seem to show that deaths decrease with saturated fat and increase with carbohydrate intake, but we also know that there is a lot of uncertainty in the data; there are some people who died early even though they ate a low-carb diet, and, similarly, some people who ate a ton of carbs but lived to a ripe old age. Given this variability, we want to decide whether the relationships that we see in the data are large enough that we wouldn’t expect them to occur randomly if there was not truly a relationship between diet and longevity. Statistics provide us with the tools to make these kinds of decisions, and often people from the outside view this as the main purpose of statistics. But as we will see throughout the book, this need for black-and-white decisions based on fuzzy evidence has often led researchers astray. Based on the data we would also like to make predictions about future outcomes. For example, a life insurance company might want to use data about a particular person’s intake of fat and carbohydrate to predict how long they are likely to live. An important aspect of prediction is that it requires us to generalize from the data we already have to some other situation, often in the future; if our conclusions were limited to the specific people in the study at a particular time, then the study would not be very useful. In general, researchers must assume that their particular sample is representative of a larger population, which requires that they obtain the sample in a way that provides an unbiased picture of the population. For example, if the PURE study had recruited all of its participants from religious sects that practice vegetarianism, then we probably wouldn’t want to generalize the results to people who follow different dietary standards.
textbooks/stats/Introductory_Statistics/Statistical_Thinking_for_the_21st_Century_(Poldrack)/01%3A_Introduction/1.03%3A_What_Can_Statistics_Do_for_Us%3F.txt
There are a number of very basic ideas that cut through nearly all aspects of statistical thinking. Several of these are outlined by Stigler (2016) in his outstanding book “The Seven Pillars of Statistical Wisdom”, which I have augmented here. 1.4.1 Learning from data One way to think of statistics is as a set of tools that enable us to learn from data. In any situation, we start with a set of ideas or hypotheses about what might be the case. In the PURE study, the researchers may have started out with the expectation that eating more fat would lead to higher death rates, given the prevailing negative dogma about saturated fats. Later in the course we will introduce the idea of prior knowledge, which is meant to reflect the knowledge that we bring to a situation. This prior knowledge can vary in its strength, often based on our amount of experience; if I visit a restaurant for the first time I am likely to have a weak expectation of how good it will be, but if I visit a restaurant where I have eaten ten times before, my expectations will be much stronger. Similarly, if I look at a restaurant review site and see that a restaurant’s average rating of four stars is only based on three reviews, I will have a weaker expectation than I would if it was based on 300 reviews. Statistics provides us with a way to describe how new data can be best used to update our beliefs, and in this way there are deep links between statistics and psychology. In fact, many theories of human and animal learning from psychology are closely aligned with ideas from the new field of machine learning. Machine learning is a field at the interface of statistics and computer science that focuses on how to build computer algorithms that can learn from experience. While statistics and machine learning often try to solve the same problems, researchers from these fields often take very different approaches; the famous statistician Leo Breiman once referred to them as “The Two Cultures” to reflect how different their approaches can be (Breiman 2001). In this book I will try to blend the two cultures together because both approaches provide useful tools for thinking about data. 1.4.2 Aggregation Another way to think of statistics is “the science of throwing away data”. In the example of the PURE study above, we took more than 100,000 numbers and condensed them into ten. It is this kind of aggregation that is one of the most important concepts in statistics. When it was first advanced, this was revolutionary: If we throw out all of the details about every one of the participants, then how can we be sure that we aren’t missing something important? As we will see, statistics provides us ways to characterize the structure of aggregates of data, and with theoretical foundations that explain why this usually works well. However, it’s also important to keep in mind that aggregation can go too far, and later we will encounter cases where a summary can provide a misleading picture of the data being summarized. 1.4.3 Uncertainty The world is an uncertain place. We now know that cigarette smoking causes lung cancer, but this causation is probabilistic: A 68-year-old man who smoked two packs a day for the past 50 years and continues to smoke has a 15% (1 out of 7) risk of getting lung cancer, which is much higher than the chance of lung cancer in a nonsmoker. However, it also means that there will be many people who smoke their entire lives and never get lung cancer. 
Statistics provides us with the tools to characterize uncertainty, to make decisions under uncertainty, and to make predictions whose uncertainty we can quantify. One often sees journalists write that scientific researchers have "proven" some hypothesis. But statistical analysis can never "prove" a hypothesis, in the sense of demonstrating that it must be true (as one would in a logical or mathematical proof). Statistics can provide us with evidence, but it's always tentative and subject to the uncertainty that is always present in the real world.

1.4.4 Sampling

The concept of aggregation implies that we can gain useful insights by collapsing across data – but how much data do we need? The idea of sampling says that we can summarize an entire population based on just a small number of samples from the population, as long as those samples are obtained in the right way. For example, the PURE study enrolled a sample of about 135,000 people, but its goal was to provide insights about the billions of humans who make up the population from which those people were sampled. As we already discussed above, the way that the study sample is obtained is critical, as it determines how broadly we can generalize the results. Another fundamental insight about sampling is that while larger samples are always better (in terms of their ability to accurately represent the entire population), there are diminishing returns as the sample gets larger. In fact, the benefit of larger samples grows only as the square root of the sample size, such that in order to double the quality of our data we need to quadruple the size of our sample.
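To make this square-root rule concrete, here is a minimal R sketch; the population standard deviation and the sample sizes are made-up values for illustration only. It shows that the standard error of a sample mean shrinks as the square root of the sample size, so quadrupling the sample halves the error.

```r
# Illustration of the square-root rule (all values are hypothetical).
# The standard error of a sample mean is sigma / sqrt(n), so each time we
# quadruple the sample size, the error is cut in half.
sigma <- 10                     # assumed population standard deviation
n <- c(25, 100, 400, 1600)      # each sample size is 4x the previous one
standard_error <- sigma / sqrt(n)
data.frame(n, standard_error)   # the error halves at each step: 2, 1, 0.5, 0.25
```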
The PURE study seemed to provide pretty strong evidence for a positive relationship between eating saturated fat and living longer, but this doesn't tell us what we really want to know: If we eat more saturated fat, will that cause us to live longer? This is because we don't know whether there is a direct causal relationship between eating saturated fat and living longer. The data are consistent with such a relationship, but they are equally consistent with some other factor causing both higher saturated fat and longer life. For example, it is likely that people who are richer eat more saturated fat and richer people tend to live longer, but their longer life is not necessarily due to fat intake — it could instead be due to better health care, reduced psychological stress, better food quality, or many other factors. The PURE study investigators tried to account for these factors, but we can't be certain that their efforts completely removed the effects of other variables. The fact that other factors may explain the relationship between saturated fat intake and death is an example of why introductory statistics classes often teach that "correlation does not imply causation", though the renowned data visualization expert Edward Tufte has added, "but it sure is a hint."

Although observational research (like the PURE study) cannot conclusively demonstrate causal relations, we generally think that causation can be demonstrated using studies that experimentally control and manipulate a specific factor. In medicine, such a study is referred to as a randomized controlled trial (RCT). Let's say that we wanted to do an RCT to examine whether increasing saturated fat intake increases life span. To do this, we would sample a group of people, and then assign them to either a treatment group (which would be told to increase their saturated fat intake) or a control group (who would be told to keep eating the same as before). It is essential that we assign the individuals to these groups randomly. Otherwise, people who choose the treatment might be different in some way than people who choose the control group – for example, they might be more likely to engage in other healthy behaviors as well. We would then follow the participants over time and see how many people in each group died. Because we randomized the participants to treatment or control groups, we can be reasonably confident that there are no other differences between the groups that would confound the treatment effect; however, we still can't be certain because sometimes randomization yields treatment versus control groups that do vary in some important way. Researchers often try to address these confounds using statistical analyses, but removing the influence of a confound from the data can be very difficult.

A number of RCTs have examined the question of whether changing saturated fat intake results in better health and longer life. These trials have focused on reducing saturated fat because of the strong dogma amongst nutrition researchers that saturated fat is deadly; most of these researchers would have probably argued that it was not ethical to cause people to eat more saturated fat! However, the RCTs have shown a very consistent pattern: Overall, reducing saturated fat intake has no appreciable effect on death rates.
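As a rough sketch of the random-assignment step described above (the sample size and group labels are hypothetical, and this is not the procedure of any particular trial), one way to assign participants to conditions in R is:

```r
# Randomly assign 100 hypothetical participants to treatment or control.
# Random assignment (rather than self-selection) is what lets us treat the
# two groups as comparable, on average, before the intervention begins.
set.seed(1)                                                 # reproducible example
n <- 100
group <- sample(rep(c("treatment", "control"), each = n / 2))  # shuffled labels, 50 per group
table(group)                                                # confirms the balanced split
```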
1.06: Suggested Readings

• The Seven Pillars of Statistical Wisdom, by Stephen Stigler
• The Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century, by David Salsburg
• Naked Statistics: Stripping the Dread from the Data, by Charles Wheelan
Learning Objectives

Having read this chapter, you should be able to:

• Distinguish between different types of variables (quantitative/qualitative, binary/integer/real, discrete/continuous) and give examples of each of these kinds of variables
• Distinguish between the concepts of reliability and validity and apply each concept to a particular dataset

• 2.1: What Are Data?
• 2.2: Discrete Versus Continuous Measurements
• 2.3: Suggested Readings
• 2.4: Appendix
• 2.5: What Makes a Good Measurement?

02: Working with Data

The first important point about data is that data are – meaning that the word "data" is plural (though some people disagree with me on this). You might also wonder how to pronounce "data" – I say "day-tah" but I know many people who say "dah-tah" and I have been able to remain friends with them in spite of this. Now if I heard them say "the data is" then that would be a bigger issue…

2.1.1 Qualitative data

Data are composed of variables, where a variable reflects a unique measurement or quantity. Some variables are qualitative, meaning that they describe a quality rather than a numeric quantity. For example, in my stats course I generally give an introductory survey, both to obtain data to use in class and to learn more about the students. One of the questions that I ask is "What is your favorite food?", to which some of the answers have been: blueberries, chocolate, tamales, pasta, pizza, and mango. Those data are not intrinsically numerical; we could assign numbers to each one (1=blueberries, 2=chocolate, etc), but we would just be using the numbers as labels rather than as real numbers; for example, it wouldn't make sense to add the numbers together in this case. However, we will often code qualitative data using numbers in order to make them easier to work with, as you will see later.

2.1.2 Quantitative data

More commonly in statistics we will work with quantitative data, meaning data that are numerical. For example, Table 2.1 shows the results from another question that I ask in my introductory class, which is "Why are you taking this class?"
Table 2.1: Counts of the prevalence of different responses to the question "Why are you taking this class?"

Why are you taking this class?                           Number of students
It fulfills a degree plan requirement                    105
It fulfills a General Education Breadth Requirement      32
It is not required but I am interested in the topic      11
Other                                                    4

Note that the students' answers were qualitative, but we generated a quantitative summary of them by counting how many students gave each response.

2.1.2.1 Types of numbers

There are several different types of numbers that we work with in statistics. It's important to understand these differences, in part because programming languages like R often distinguish between them.

Binary numbers. The simplest are binary numbers – that is, zero or one. We will often use binary numbers to represent whether something is true or false, or present or absent. For example, I might ask 10 people if they have ever experienced a migraine headache, recording their answers as "Yes" or "No". It's often useful to instead use logical values, which take the value of either `TRUE` or `FALSE`. We can create these by testing whether each value is equal to "Yes", which we can do using the `==` symbol. This will return the value `TRUE` for any matching "Yes" values, and `FALSE` otherwise. These are useful because R knows how to interpret them natively, whereas it doesn't know what "Yes" and "No" mean. In general, most programming languages treat truth values and binary numbers equivalently. The number 1 is equal to the logical value `TRUE`, and the number zero is equal to the logical value `FALSE`.

Integers. Integers are whole numbers with no fractional or decimal part. We most commonly encounter integers when we count things, but they also often occur in psychological measurement. For example, in my introductory survey I administer a set of questions about attitudes towards statistics (such as "Statistics seems very mysterious to me."), on which the students respond with a number between 1 ("Disagree strongly") and 7 ("Agree strongly").

Real numbers. Most commonly in statistics we work with real numbers, which have a fractional/decimal part. For example, we might measure someone's weight, which can be measured to an arbitrary level of precision, from whole pounds down to micrograms.
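Returning to the migraine example above, here is a minimal R sketch of how `==` turns "Yes"/"No" answers into logical values, and how those logical values behave like ones and zeros; the answers themselves are invented for illustration.

```r
# Hypothetical answers to "Have you ever experienced a migraine headache?"
answers <- c("Yes", "No", "Yes", "Yes", "No")

had_migraine <- answers == "Yes"   # logical vector: TRUE where the answer was "Yes"
had_migraine                       # TRUE FALSE TRUE TRUE FALSE

sum(had_migraine)                  # TRUE counts as 1 and FALSE as 0, so this counts the "Yes" answers (3)
mean(had_migraine)                 # ...and this gives the proportion of "Yes" answers (0.6)
```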
A discrete measurement is one that takes one of a set of particular values. These could be qualitative values (for example, different breeds of dogs) or numerical values (for example, how many friends one has on Facebook). Importantly, there is no middle ground between the measurements; it doesn't make sense to say that one has 33.7 friends.

A continuous measurement is one that is defined in terms of a real number. It could fall anywhere in a particular range of values, though usually our measurement tools will limit the precision with which we can measure; for example, a floor scale might measure weight to the nearest pound, even though weight could in theory be measured with much more precision.

It is common in statistics courses to go into more detail about different "scales" of measurement, which are discussed further in the Appendix to this chapter. The most important takeaway from this is that some kinds of statistics don't make sense on some kinds of data. For example, imagine that we were to collect postal Zip Code data from a number of individuals. Those numbers are represented as integers, but they don't actually refer to a numeric scale; each zip code basically serves as a label for a different region. For this reason, it wouldn't make sense to talk about the average zip code, for example.

2.03: Suggested Readings

An introduction to psychometric theory with applications in R – a free online textbook on psychological measurement

2.04: Appendix

2.4.1 Scales of measurement

All variables must take on at least two different possible values (otherwise they would be a constant rather than a variable), but different values of the variable can relate to each other in different ways, which we refer to as scales of measurement. There are four ways in which the different values of a variable can differ.

• Identity: Each value of the variable has a unique meaning.
• Magnitude: The values of the variable reflect different magnitudes and have an ordered relationship to one another — that is, some values are larger and some are smaller.
• Equal intervals: Units along the scale of measurement are equal to one another. This means, for example, that the difference between 1 and 2 would be equal in its magnitude to the difference between 19 and 20.
• Absolute zero: The scale has a true meaningful zero point. For example, for many measurements of physical quantities such as height or weight, this is the complete absence of the thing being measured.

There are four different scales of measurement that go along with these different ways that values of a variable can differ.

Nominal scale. A nominal variable satisfies the criterion of identity, such that each value of the variable represents something different, but the numbers simply serve as qualitative labels as discussed above. For example, we might ask people for their political party affiliation, and then code those as numbers: 1 = "Republican", 2 = "Democrat", 3 = "Libertarian", and so on. However, the different numbers do not have any ordered relationship with one another.

Ordinal scale. An ordinal variable satisfies the criteria of identity and magnitude, such that the values can be ordered in terms of their magnitude. For example, we might ask a person with chronic pain to complete a form every day assessing how bad their pain is, using a 1-7 numeric scale.
Note that while the person is presumably feeling more pain on a day when they report a 6 versus a day when they report a 3, it wouldn't make sense to say that their pain is twice as bad on the former versus the latter day; the ordering gives us information about relative magnitude, but the differences between values are not necessarily equal in magnitude.

Interval scale. An interval scale has all of the features of an ordinal scale, but in addition the intervals between units on the measurement scale can be treated as equal. A standard example is physical temperature measured in Celsius or Fahrenheit; the physical difference between 10 and 20 degrees is the same as the physical difference between 90 and 100 degrees, but each scale can also take on negative values.

Ratio scale. A ratio scale variable has all four of the features outlined above: identity, magnitude, equal intervals, and absolute zero. The difference between a ratio scale variable and an interval scale variable is that the ratio scale variable has a true zero point. Examples of ratio scale variables include physical height and weight, along with temperature measured in Kelvin.

There are two important reasons that we must pay attention to the scale of measurement of a variable. First, the scale determines what kind of mathematical operations we can apply to the data (see Table 2.2). A nominal variable can only be compared for equality; that is, do two observations on that variable have the same numeric value? It would not make sense to apply other mathematical operations to a nominal variable, since the values don't really function as numbers in a nominal variable, but rather as labels. With ordinal variables, we can also test whether one value is greater or lesser than another, but we can't do any arithmetic. Interval and ratio variables allow us to perform arithmetic; with interval variables we can only add or subtract values, whereas with ratio variables we can also multiply and divide values.

Table 2.2: Different scales of measurement admit different types of numeric operations

Scale      Equal/not equal   >/<   +/-   Multiply/divide
Nominal    OK
Ordinal    OK                OK
Interval   OK                OK    OK
Ratio      OK                OK    OK    OK

These constraints also imply that there are certain kinds of statistics that we can compute on each type of variable. Statistics that simply involve counting of different values (such as the most common value, known as the mode) can be calculated on any of the variable types. Other statistics are based on ordering or ranking of values (such as the median, which is the middle value when all of the values are ordered by their magnitude), and these require that the value at least be on an ordinal scale. Finally, statistics that involve adding up values (such as the average, or mean) require that the variables be at least on an interval scale. Having said that, we should note that it's quite common for researchers to compute the mean of variables that are only ordinal (such as responses on personality tests), but this can sometimes be problematic.
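As a rough illustration of why the scale of measurement constrains which statistics make sense, here is a small R sketch; the zip codes and weights below are invented values used only for demonstration.

```r
# Hypothetical data illustrating which summaries make sense on which scales.
zip_codes <- c(94305, 94305, 78712, 10027, 78712)   # nominal: numbers used only as labels
weights   <- c(150.2, 180.5, 165.0, 172.3, 158.8)   # ratio: real numbers, in pounds

mean(zip_codes)   # R will happily compute this, but the result is meaningless
table(zip_codes)  # counting values (e.g., finding the mode) is fine for nominal data

mean(weights)     # a meaningful average, because weight is on a ratio scale
median(weights)   # the median also makes sense here (requires at least ordinal data)
```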
In many fields such as psychology, the thing that we are measuring is not a physical feature, but instead is an unobservable theoretical concept, which we usually refer to as a construct. For example, let's say that I want to test how well you understand the distinction between the four different scales of measurement described above. I could give you a pop quiz that would ask you several questions about these concepts and count how many you got right. This test might or might not be a good measurement of the construct of your actual knowledge — for example, if I were to write the test in a confusing way or use language that you don't understand, then the test might suggest you don't understand the concepts when really you do. On the other hand, if I give a multiple choice test with very obvious wrong answers, then you might be able to perform well on the test even if you don't actually understand the material.

It is usually impossible to measure a construct without some amount of error. In the example above, you might know the answer but you might mis-read the question and get it wrong. In other cases there is error intrinsic to the thing being measured, such as when we measure how long it takes a person to respond on a simple reaction time test, which will vary from trial to trial for many reasons. We generally want our measurement error to be as low as possible. Sometimes there is a standard against which other measurements can be tested, which we might refer to as a "gold standard" — for example, measurement of sleep can be done using many different devices (such as devices that measure movement in bed), but they are generally considered inferior to the gold standard of polysomnography (which uses measurement of brain waves to quantify the amount of time a person spends in each stage of sleep). Often the gold standard is more difficult or expensive to perform, and the cheaper method is used even though it might have greater error.

When we think about what makes a good measurement, we usually distinguish two different aspects of a good measurement.

2.5.1 Reliability

Reliability refers to the consistency of our measurements. One common form of reliability, known as "test-retest reliability", measures how well the measurements agree if the same measurement is performed twice. For example, I might give you a questionnaire about your attitude towards statistics today, repeat this same questionnaire tomorrow, and compare your answers on the two days; we would hope that they would be very similar to one another, unless something happened in between the two tests that should have changed your view of statistics (like reading this book!).

Another way to assess reliability comes in cases where the data include subjective judgments. For example, let's say that a researcher wants to determine whether a treatment changes how well an autistic child interacts with other children, which is measured by having experts watch the child and rate their interactions with the other children. In this case we would like to make sure that the answers don't depend on the individual rater — that is, we would like for there to be high inter-rater reliability. This can be assessed by having more than one rater perform the rating, and then comparing their ratings to make sure that they agree well with one another.

Reliability is important if we want to compare one measurement to another.
The relationship between two different variables can't be any stronger than the relationship between either of the variables and itself (i.e., its reliability). This means that an unreliable measure can never have a strong statistical relationship with any other measure. For this reason, researchers developing a new measurement (such as a new survey) will often go to great lengths to establish and improve its reliability.

2.5.2 Validity

Reliability is important, but on its own it's not enough: After all, I could create a perfectly reliable measurement on a personality test by re-coding every answer using the same number, regardless of how the person actually answers. We want our measurements to also be valid — that is, we want to make sure that we are actually measuring the construct that we think we are measuring (Figure 2.1). There are many different types of validity that are commonly discussed; we will focus on three of them.

Face validity. Does the measurement make sense on its face? If I were to tell you that I was going to measure a person's blood pressure by looking at the color of their tongue, you would probably think that this was not a valid measure on its face. On the other hand, using a blood pressure cuff would have face validity. This is usually a first reality check before we dive into more complicated aspects of validity.

Construct validity. Is the measurement related to other measurements in an appropriate way? This is often subdivided into two aspects. Convergent validity means that the measurement should be closely related to other measures that are thought to reflect the same construct. Let's say that I am interested in measuring how extroverted a person is using a questionnaire or an interview. Convergent validity would be demonstrated if both of these different measurements are closely related to one another. On the other hand, measurements thought to reflect different constructs should be unrelated, known as divergent validity. If my theory of personality says that extraversion and conscientiousness are two distinct constructs, then I should also see that my measurements of extraversion are unrelated to measurements of conscientiousness.

Predictive validity. If our measurements are truly valid, then they should also be predictive of other outcomes. For example, let's say that we think that the psychological trait of sensation seeking (the desire for new experiences) is related to risk taking in the real world. To test for predictive validity of a measurement of sensation seeking, we would test how well scores on the test predict scores on a different survey that measures real-world risk taking.
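Returning to the test-retest reliability idea described above, here is a minimal R sketch with simulated questionnaire scores; the numbers are invented, and using a simple correlation is just one common way to index test-retest agreement, not the only one.

```r
# Simulated scores from the same (hypothetical) attitude questionnaire given on two days.
set.seed(42)
true_attitude <- rnorm(30, mean = 4, sd = 1)    # each person's underlying attitude
day1 <- true_attitude + rnorm(30, sd = 0.3)     # day-1 scores, with some measurement error
day2 <- true_attitude + rnorm(30, sd = 0.3)     # day-2 scores, with independent error

cor(day1, day2)   # a high correlation suggests good test-retest reliability
```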